Article

Detection of Cyberbullying Patterns in Low Resource Colloquial Roman Urdu Microtext using Natural Language Processing, Machine Learning, and Ensemble Techniques

1 Department of Software Engineering, Institute of Information and Communication Technologies (IICT), Mehran University of Engineering and Technology, Jamshoro 76062, Pakistan
2 Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
3 Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(4), 2062; https://doi.org/10.3390/app13042062
Submission received: 21 January 2023 / Revised: 1 February 2023 / Accepted: 2 February 2023 / Published: 5 February 2023

Abstract
Social media platforms have become a substratum for people to enunciate their opinions and ideas across the globe. Due to anonymity preservation and freedom of expression, it is possible to humiliate individuals and groups while disregarding social etiquette online, inevitably proliferating and diversifying incidents of cyberbullying and cyber hate speech. This intimidating problem has recently attracted the attention of researchers and scholars worldwide; still, current practices for sifting online content and offsetting the spread of hatred do not go far enough. Contributing factors include the recent prevalence of regional languages on social media and the dearth of language resources and flexible detection approaches for low-resource languages. In this context, most existing studies are oriented towards traditional resource-rich languages, leaving a huge gap for recently embraced resource-poor languages. One such language, now adopted worldwide and most typically by South Asian users for textual communication on social networks, is Roman Urdu. It is derived from Urdu and written in a Left-to-Right pattern using Roman script. This language elicits numerous computational challenges during natural language preprocessing tasks due to its inflections, derivations, lexical variations, and morphological richness. To alleviate this problem, this research proposes a cyberbullying detection approach for analyzing textual data in the Roman Urdu language based on advanced preprocessing methods, voting-based ensemble techniques, and machine learning algorithms. The study extracted a vast number of features, including statistical features, word N-grams, combined N-grams, and a BOW model with TF-IDF weighting, in different experimental settings using GridSearchCV and cross-validation. The detection approach was designed to handle users' textual input by considering user-specific writing styles on social media in a colloquial and non-standard form. The experimental results show that SVM with embedded hybrid N-gram features produced the highest average accuracy of around 83%. Among the voting-based ensemble techniques, XGBoost achieved the optimal accuracy of 79%. Both implicit and explicit Roman Urdu instances were evaluated, and the categorization of severity based on prediction probabilities was performed. Time complexity was also analyzed in terms of execution time, indicating that LR, using different parameters and feature combinations, is the fastest algorithm. The results are promising with respect to standard assessment metrics and indicate the feasibility of the proposed approach for cyberbullying detection in the Roman Urdu language.

1. Introduction

Social networking platforms play a seamless and integral role in the contemporary digitized world [1]. The prevalence of information and communication technologies has escalated to the extent that more than half of the human population has a social identity, cumulatively amounting to more than 3.6 billion people using social media [2]. This immense user base makes these sites a global discussion forum and an effective tool for the exchange of information, textual communication, collaboration, and sharing of knowledge and ideas. Social networks create a global communication hub among numerous communities and cultures, pruning distances and allowing free speech without borders. Since more voices are empowered and shared, this can yield myriad benefits to society [3]. Undeniably, on the positive side, it provides convenience and promotes global unity, but on the darker side, it has also been reported to widely spread cybercrime and cyberhate [2]. Researchers and scholars worldwide have highlighted the possible perils of hate speech and cyberbullying dissemination [4]. Cyberbullying (synonymously known as cyber-harassment, cyber-aggression, and hate speech) is "an aggressive, intentional act (such as sending unwanted, derogatory, threatening, offensive, embarrassing, or hurtful messages/comments) carried out by a group or individual using digital technologies against a victim who cannot easily protect him or herself" [5]. The United Nations defines the term as follows:
“any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language concerning a person or a group based on who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or another identity factor” [6].
Cyberbullying is a kind of violence accomplished through various electronic means to threaten an individual's physical or mental well-being, thus inducing emotional, behavioral, and psychological disorders among its victims. Research studies report that cyberbullying victims have higher levels of depression, poorer self-esteem, and increased social anxiety [7]. The freedom of speech offered by social platforms has posed risks in numerous ways and is misused by immoral users to amplify hate speech and abusive content. Though adults can manage this peril to an extent, children and teenagers are more susceptible to serious mental health issues [8]. Moreover, the COVID-19 pandemic proved a double-edged sword [9]. The pandemic caused a considerable surge in online traffic, and the drastic shifts in lifestyles and the adoption of new social practices and habits resulted in an upsurge in cyberbullying cases. A 2020 report, "COVID-19 and Cyber Harassment", by the Digital Rights Foundation (DRF) Pakistan highlights a considerable upward shift in cyberbullying and harassment incidents during the COVID-19 epidemic: complaints registered with DRF's Cyber Harassment cell rose by 189% [10].
This distressing problem of automatic cyberbullying detection, associated with social and ethical challenges, has gained huge research attention in natural language processing and artificial intelligence. The task is not only onerous but also a pressing need, since social networks have become a vital part of individuals' lives and the consequences of cyberbullying can be appalling, specifically among adolescents [11]. The growing interest of the research community is evident from recent workshops on cyber social threats such as cySoc 2022, TRAC, and WOAH 2022. Though many social media platforms have established policies for moderating content and blocking or restricting hateful material, the massive scale of the generated big data calls for automated data analytic techniques and proactive systems that can critically and rigorously investigate cyberbullying and toxic content in real time.
In this regard, as detailed in the Literature Review section, most existing research work is directed toward resource-rich languages. The Roman Urdu language has been adopted on social media only recently, specifically in South Asian countries, and is highly resource deficient. Section 3 details the script, morphology, and challenges associated with the Roman Urdu language. This study puts novel effort into addressing the cyberbullying detection research problem in the low-resource Roman Urdu language using natural language preprocessing, machine learning, and ensemble techniques. We have used explicitly designed preprocessing methods to handle unstructured micro text. Analyzing the patterns and structure behind implicit and explicit cyberbullying behaviors, typically in newly embraced colloquial languages, and framing this as a comprehensive computational task is quite challenging.
The key objectives of this research work are as follows:
  • To analyze the structure of low-resource colloquial Roman Urdu text and perform exploratory data analysis to visualize the data.
  • To devise a mechanism for performing advanced preprocessing on Roman Urdu micro text and to systematically apply the appropriate preprocessing phases to improve classification performance; this also has broad implications for other natural language processing applications.
  • To propose an approach based on seven techniques combining ML and ensemble methods with a fusion of a vast number of features, GridSearchCV, and experimentation to detect hateful and cyberbullying patterns in the natural writing patterns of Roman Urdu text data.
  • To evaluate the performance and time efficiency of the proposed approach and visualize the results.
The rest of this paper is structured as follows:
Multilingual research and related works are presented in Section 2. Section 3 describes the structure of the low-resource Roman Urdu language and discusses the associated challenges. Section 4 elaborates the methodology used to accomplish this research work and highlights its important phases and steps. Section 5 discusses the Roman Urdu corpora. Section 6 details the systematic phases of advanced preprocessing and standardization of Roman Urdu micro text. Section 7 presents exploratory data analysis to investigate the data and understand important patterns and insights. The experimental setup and model hyperparameters are specified in Section 8. Section 9 presents and discusses the study results. Section 10 elaborates the estimation of prediction probabilities for implicit and explicit instances. Finally, Section 11 concludes the research work and provides future directions for the research community.

2. Related Work and Multilingual Research

This section presents an extensive survey regarding the techniques and advancements in multilingual hate speech and cyberbullying detection research.
The field of AI has gained enormous research attention worldwide. We understand AI as "machine-learning-based systems with the ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation" [1]. Gaining insights from big data, scholars establish solutions and models that aim to identify toxicity, cyberbullying instances, and hate speech via various techniques and NLP-based tools [12]. AI-based automated cyberbullying detection is gaining enormous attention from researchers, a progress that is also reflected in the scientific literature (such as [13,14,15]). Identifying cyberbullying is undeniably challenging, since there is controversy about how cyber hate speech should be defined: content can be perceived as cyberbullying and hate speech by one group but not by others, based on their individual subjective assessments and respective definitions.
Currently, there are three significant paradigms of analysis, namely network analysis, content analysis, and the fusion of both. Network analysis usually considers usage statistics of digital media, such as sender and receiver data; users' names; the content, location, and time of communication; account IDs; and so on. Content analysis typically focuses on analyzing textual data using natural language preprocessing techniques involving data acquisition, corpora development, text preprocessing, feature extraction, feature selection, and text-based classification [12].
A systematic review of related articles in various digital libraries and a study of review papers make it evident that most of the research on content analysis and the use of computational methods and ML algorithms for cyberbullying detection appeared only after 2011 [16]. The work in [17] by Dinakar et al. on cyberbullying detection from textual data is a highly cited study in the literature and is considered one of the pioneers. They split the task, based on sensitive topics in the text, into classification sub-problems and extracted textual comments on controversial YouTube videos. The study formed a detection approach based on SVM, naive Bayes, and J48 binary and multiclass models incorporating specific feature sets, and concluded that the performance of binary classifiers outweighs that of multiclass classifiers. A study in [18], performed at Yahoo Labs, developed a corpus of comments posted on Yahoo news and finance pages. The data were annotated for abusive language, an Amazon Mechanical Turk experiment was conducted, and Vowpal Wabbit's regression model was used. The work in [19] uses a deep learning-based approach for detecting racism- and sexism-related hate speech. They trained a convolutional neural network and created word vectors using semantic information and 4-grams, which were then downsized for classification. They incorporated 10-fold cross-validation based on word2vec, and the overall F-score was 78.3%. In contrast, a study in [20] presented a time-aware supervised learning model based on profile, session, and various other comment-related features to detect anti-social and bullying instances in data; the model also penalized late detections. Work accomplished by Maral Dadvar et al. in [21] compares supervised models, expert systems, and a hybrid model combining the two. Pawar et al. in [22] employed a distributed computing technique for the detection of cyberbullying instances; rather than focusing only on detection accuracy, their work also emphasizes robust performance. A study in [23] proposed a weakly supervised approach inferring vocabulary indicators and user roles in harassment; the model uses small seeds of vocabulary over a large corpus to extract bullying traces. Cynthia Van Hee, Gilles Jacobs et al. in [24] suggested a model using social media text written by bullies, victims, and bystanders for automatic cyberbullying detection. They described fine-grained annotation of English and Dutch language corpora. The model was based on a support vector machine algorithm and exploited a rich feature set and the information sources that contributed most positively to the detection task, producing 64% accuracy for the English language and 61% for Dutch. Work accomplished in [25] formed a cyberbullying detection algorithm on an unbalanced dataset intended to minimize cyberbullying alert generation time by reducing the number of feature evaluations; the algorithm design was based on Instagram data and supervised ML techniques.
Several studies have recently been published addressing the English language, and considerable research has recently been carried out in languages such as Urdu [26] and Arabic [27], among others. The work in [28] was an initial attempt at Turkish-language hate speech detection. The authors formed a corpus of textual comments in Turkish, unbalanced with respect to abusive and non-abusive content, and performed sentiment annotation. The study implemented CNNs, machine learning classifiers, reweighted classifiers, social media features, N-grams, and weighting schemes to improve detection quality. The Arabic-language hate speech detection task is addressed in [27]. That study developed an algorithm for Arabic based on word vectors, NLP normalization techniques, and supervised ML techniques; the corpus used for experimentation was unbalanced, and the work reports limited performance of approximately 0.30 across different metrics. Offensive and hate speech categorization for Danish textual data was the focus of [29]. The work proposed a detection framework built mainly on Logistic Regression, Learned-BiLSTM (10 epochs), Fast-BiLSTM (50 epochs), and AUX-Fast-BiLSTM (40 epochs); different experimental settings could achieve an F1-score of 0.7.
Due to the pervasiveness of social media content and its possible adverse impacts and alarming consequences for human well-being, an enormous number of academic events and shared tasks on the linguistic analysis and identification of offensive and hateful textual data have also taken place globally. Some of them include the GermEval 2021 Shared Task on the identification of toxic, engaging, and fact-claiming comments [30]; the First Workshop on Trolling, Aggression, and Cyberbullying (TRAC-2018), which focused on the phenomena of online aggression, trolling, and cyberbullying [31]; OffensEval 2020, which addressed multilingual offensive language identification in social media at the International Workshop on Semantic Evaluation 2020 (SemEval 2020); and EVALITA 2018 [32], a shared task addressing Italian social media data. Communication by the masses in regional dialects has become commonplace in the contemporary era, and in its colloquial form, Roman Urdu has been widely adopted by Asian communities to share opinions and ideas easily. Recently, a limited number of preliminary research studies have also addressed the Roman Urdu language. The work in [33] follows a lexicon-based approach, extracting unique words separated into bully and non-bully lexicons and using polarity scores to categorize content. While such approaches work fairly well when the text contains an explicit hate or abusive word, they often fall short in detecting implicit instances. Research in [34] proposed a CNN-gram model for offensive language detection; a corpus of almost 10k Roman Urdu tweets was developed with both coarse-grained and fine-grained labels. That study is limited in terms of preprocessing techniques, skewed datasets, and the colloquial patterns generated by Roman Urdu users. Moreover, the literature suggests that deep learning and neural network-based techniques consume more computational resources, data, and time, making them less appealing for real-time applications such as cyberbullying and hate speech detection.
Roman Urdu is a resource-scarce language because of its huge morphological complexity and recent adoption. Because of this inadequacy of resources, despite being a frequently used language globally and in South Asia, only a few efforts have been put forward, as is evident from the current scientific literature. Existing systems also fall short in addressing challenges such as word variants, the naturally occurring, highly unbalanced Roman Urdu content on social media, irregular use of capitalization, domain-specific standardized preprocessing methods, and so on. Motivated by this, we address these limitations by considering the inherent writing patterns of colloquial Roman Urdu users and their grammatical structure variabilities, focusing on novel preprocessing techniques for free-text forms. The study uses a comparatively balanced Roman Urdu corpus hand-annotated by linguistic experts and validated using Cohen's kappa statistic. We have developed machine learning and ensemble-based models over many features. Instead of merely considering accuracy, we checked the performance of the models and evaluated their time complexity; the models are then compared for efficiency and time complexity.

3. Roman Urdu Language Structure and Challenges

In the recent era, social networks and weblogs publish a massive variety of content and constitute an indispensable hub of information mined by the research community. The unstructured big data (aka eData) produced by these platforms deviates considerably from the standard dialect, grammatical rules, and lexicon [35]. These idiolects produce non-standard words in the language lexicon, phonological variations, and syntactic and grammatical variations, thus distinguishing the textual social media form of the language from its corresponding standard counterpart.
Recently, on different social media platforms, the Roman Urdu language has been adopted as a contemporary trend and feasible medium for communication. It originated from the Urdu language, a morphologically rich language with a complex inflectional system. Urdu is considered the 21st most widely spoken language worldwide and is also called the "Lashkari language" (لشکری زبان) [36]. It is the national and official language of Pakistan, i.e., "Qaumi Zaban", and is spoken across its different regions and communities [37]. While Urdu is written in Nastaliq script, Roman Urdu is written Left-to-Right (LTR) and is based on Roman script. A survey statistic presented in [38] states that there are 300 million Urdu speakers across the globe and nearly 11 million Urdu users in Pakistan, most of whom have recently shifted to informal, colloquial Roman Urdu for textual messaging on social media. Roman Urdu is a linguistically rich language and differs greatly from formal Urdu in word structure, in the irregularities arising from natural writing patterns, and in the grammatical compositions adopted by users. There are no diacritic marks ('zer', 'zabar', 'pesh', etc.) in Roman Urdu. It lacks a standard lexicon and available resources and hence becomes extremely problematic for NLP tasks.
A clear elaboration of Roman Urdu LTR pattern script and Urdu RTL scripting is depicted in Table 1.
The lack of rules and the colloquial adoption of this language by social media users also result in a much larger word surface. For example, the word 'good' in English is written as 'acha' in Urdu across masculine, feminine, singular, and plural forms. However, the number of words increases many-fold when written in the Roman Urdu language, such as 'acha' (masculine form), 'achi' (feminine form), 'achay' (plural form), and so on. Moreover, people use their own spellings, writing structures, and elongated characters.
Although the Urdu language has a formal structure and a few available resources, colloquial Roman Urdu has no standard, formal structure, and no resources are available to standardize it. This scarcity of language resources makes it challenging to apply data analytics techniques for deducing useful knowledge from the text. To cope with this, rigorous language resources, language-specific preprocessing steps and techniques, and well-devised methodologies are needed.

4. Proposed Approach for Roman Urdu Cyberbullying Detection

Figure 1 depicts the methodology used to accomplish this research work and highlights important phases and steps. The steps are further detailed in subsequent sections.

5. Colloquial Roman Urdu Corpus

The design of automated systems for the detection of cyberbullying is a non-trivial research problem, and annotated corpora are one of the key resources for addressing it. For a morphologically rich and complex language such as Roman Urdu, despite it being spoken by millions of people worldwide, the paucity of available resources and corpora is a major reason for the lack of improvement and advancement in research. Moreover, specifically for low-resource regional languages, obtaining the right amount of data that is equitable in nature is even harder [39].
To accomplish this research work, we developed Roman Urdu corpora with minor skew (published and detailed in our previous work [40]). The Roman Urdu corpora were developed using a vast number of queries, extraction mechanisms, and a bullying lexicon, and the colloquial natural writing patterns of users were considered while extracting the data. Since non-cyberbullying content ordinarily dominates on social media, the dataset was developed in multiple phases; the resulting data are relatively balanced in nature, with cyberbullying content from different categories. The dataset is hand-annotated by linguistic experts and verified for data quality using Cohen's kappa. Some sample instances highlighting profanity, with category elaboration, are depicted in Table 2.
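As a minimal sketch of how such an inter-annotator agreement check can be reproduced, the snippet below computes Cohen's kappa with scikit-learn; the two label lists are purely illustrative placeholders, not instances from the actual corpus.

```python
# Minimal sketch: inter-annotator agreement via Cohen's kappa.
# The two label lists below are illustrative placeholders only.
from sklearn.metrics import cohen_kappa_score

annotator_1 = ["bully", "non-bully", "bully", "non-bully", "bully"]
annotator_2 = ["bully", "non-bully", "non-bully", "non-bully", "bully"]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")  # agreement corrected for chance
```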

6. Data Preprocessing and Standardization

Preprocessing of data is among the essential phases in virtually all text-based tasks. It is essential to transform low-quality, unstructured micro text into good-quality, structured text before retrieving essential information and detecting patterns. Social media micro text is usually written in a free form, leaving behind language norms and standards, which makes the data inappropriate for text-based tasks. The data contain a vast body of extraneous and noisy elements that need to be removed to bring them into the required format. Applying preprocessing techniques ensures that the data are clean, error-free, and meaningful. The preprocessing steps and the NLP resources required to accomplish them, however, differ from one task to another and from one language to another [41,42]. The major data preprocessing steps applied to informal colloquial Roman Urdu micro text are depicted in Figure 2.

6.1. Data Cleansing and Noise Removal

Several preprocessing techniques were used to reduce noise in the Roman Urdu micro text. The process included the removal of punctuation and special characters, removal of digits, folding of text to lower case, and removal of hashtags, extra whitespace, and user mentions. Missing rows in the Roman Urdu corpora and encoded text formats were also handled. During punctuation removal, the major punctuation marks (’!"#$%&’()*+,-./:; >=<?@ [] –} {’) were removed except for the dot. The dot was replaced by a space, since it is used as a sentence delimiter and, in colloquial writing patterns, often joins two otherwise separate tokens.
To bring uniformity to the text, the data were converted to lowercase. This helped downsize the huge dimensional space of the textual data by treating two terms that differ only in case as the same term. The rows with missing labels were dropped. Finally, we encoded the data, which included emojis, special symbols, and other common stray characters, using Unicode Transformation Format-8 (UTF-8) encoding. The Python re and string modules were used to convert and manage these data.
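A condensed sketch of this cleansing pass is given below. It assumes the tweets are plain Python strings and approximates the rules described above (mention/hashtag removal, digit removal, case folding, dot-to-space replacement, punctuation stripping, and UTF-8 handling); the exact order and regular expressions used by the authors may differ.

```python
import re
import string

def clean_roman_urdu(text):
    text = text.lower()                                # case folding
    text = re.sub(r"[@#]\w+", " ", text)               # drop user mentions and hashtags
    text = text.replace(".", " ")                      # dot acts as a delimiter, replace with space
    text = re.sub(r"\d+", " ", text)                   # remove digits
    punct = string.punctuation.replace(".", "")        # remaining punctuation (dot already handled)
    text = text.translate(str.maketrans("", "", punct))
    text = text.encode("utf-8", errors="ignore").decode("utf-8")  # enforce UTF-8 encoding
    return re.sub(r"\s+", " ", text).strip()           # collapse extra whitespace

print(clean_roman_urdu("@user Ye #tweet    acha hai!!! 123 ."))  # -> "ye acha hai"
```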

6.2. Text Tokenization

Text tokenization is the process of splitting a piece of text into smaller, manageable units known as tokens. The tokenization process, applied to a tweet t, results in a sequence of tokens tx = (t1, t2, …, tk), where x denotes the tweet index and k represents the number of tokens. The sample output on Roman Urdu micro text after tokenization is shown in Figure 3.
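For completeness, a simple whitespace tokenizer in the spirit of this step is shown below; a library tokenizer such as NLTK's word_tokenize could equally be used, which is an implementation choice not specified in the paper.

```python
# Whitespace tokenization of an already-cleaned Roman Urdu tweet.
def tokenize(tweet):
    return tweet.split()

tokens = tokenize("wo tha bhi itna bonga aadmi")
print(tokens)  # ['wo', 'tha', 'bhi', 'itna', 'bonga', 'aadmi']
```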

6.3. Handling Elongated Characters and Variants in Colloquial Roman Urdu

Textual data in informal languages (such as colloquial Roman Urdu) present unique orthographic and syntactic constraints compared to their standard counterparts. One of the crucial problems of such data is lexically variant words, which can greatly impact the accuracy of classification models and of intermediary natural language processing tools and steps [35]. Colloquial Roman Urdu users typically use elongated word variants. Some example instances comprising elongated word variants and their standard Urdu transliterations are shown in Table 3.
Such words were normalized to avoid Out-of-Vocabulary (OOV) words, reduce the word base, eliminate multiple representations of the same word, and reduce the dimensionality of the corpus. An example output of variant normalization is highlighted in Figure 4.
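A hedged sketch of elongation handling is shown below: runs of three or more repeated characters are collapsed to a single character. The threshold and the absence of a lexicon lookup are assumptions for illustration, not the authors' exact normalization rules.

```python
import re

def collapse_elongation(token):
    # Collapse any character repeated three or more times to a single character.
    return re.sub(r"(.)\1{2,}", r"\1", token)

print([collapse_elongation(t) for t in ["bohttttt", "haaaaan", "acha"]])
# ['boht', 'han', 'acha']
```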

6.4. Elimination of Stop Words

Stop words are units of text that carry the least semantic information and are non-informative components of the data. They are usually removed since they do not add any value to the analysis. These are high-frequency words that, if added as features in text classification, would produce noise [43]. The larger word surface in textual data also poses the problems of the curse of dimensionality and data sparsity. Therefore, stop words were eliminated from the corpora using a domain-specific stop word list. The list was developed from the Roman Urdu hate speech corpora using statistical techniques and human evaluation by linguistic experts. The statistical techniques involved direct term frequency (TF), inverse document frequency (IDF), and term frequency-inverse document frequency (TF-IDF) weighting models. The list was then evaluated by bilingual experts, and the final list comprises 173 words. The detailed method of domain-specific stop word list compilation is given in our previous work [40]. A sample of the stop word list is given in Table 4.
The output of stop word removal on Roman Urdu text is presented in Figure 5.
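A minimal sketch of the removal step is shown below; the stop word set here is only a tiny illustrative subset of the 173-word domain-specific list described above.

```python
# Illustrative subset of the domain-specific Roman Urdu stop word list.
roman_urdu_stopwords = {"hai", "to", "ka", "ki", "ke", "aur", "jaise"}

def remove_stopwords(tokens):
    return [t for t in tokens if t not in roman_urdu_stopwords]

print(remove_stopwords(["naak", "to", "ajeeb", "hai", "tumhare"]))
# ['naak', 'ajeeb', 'tumhare']
```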

6.5. Expanding Slangs and Contractions

Contractions are short words that are written more casually and are frequently utilized by online users to cut down on character limits. Contractions and slang terms are usually vowel-less, weak-form words, and much of the Roman Urdu hate speech on social media consists of bully terms used as slang. High-dimensional textual data also suppress significant features. Therefore, contraction mapping is essential for dimensionality reduction and standardization of the text, and later for capturing intricate bullying patterns. Contractions and slang terms were replaced with their expanded variants using the slang lexicon created and detailed in our previous work [37]. The lexicon is stored as a Python dictionary object consisting of key-value pairs. Sample Roman Urdu slang terms and their expanded forms are shown in Figure 6, where sensitive hate words are hidden.
In this mapping dictionary, each key is a slang term commonly used in colloquial Roman Urdu communication, and the value is its equivalent Roman Urdu phrase or term, such as "DYK": "App ko pata hai", "ASAP": "Jitna jaldi ho sakay", "idc": "mujhay parwah nahen", and so on. The slang-mapping process traverses the Roman Urdu corpus and replaces each key it encounters with the corresponding value until the end of the file is reached.
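The sketch below illustrates this mapping with a tiny excerpt of the lexicon, using the dictionary entries quoted above; the full lexicon and its lookup details are given in [37].

```python
# Tiny excerpt of the slang/contraction lexicon (keys lowercased for lookup).
slang_map = {
    "dyk": "app ko pata hai",
    "asap": "jitna jaldi ho sakay",
    "idc": "mujhay parwah nahen",
}

def expand_slang(tokens):
    expanded = []
    for t in tokens:
        # Replace a slang key with its expanded Roman Urdu phrase, else keep the token.
        expanded.extend(slang_map.get(t.lower(), t).split())
    return expanded

print(expand_slang(["idc", "tum", "kya", "kehte", "ho"]))
# ['mujhay', 'parwah', 'nahen', 'tum', 'kya', 'kehte', 'ho']
```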

7. Exploratory Data Analysis on Roman Urdu Corpus

Exploratory data analysis (EDA) is a vital process for investigating data to gain important patterns and insights. This approach helps to analyze and understand the data in a better manner via statistical and graphical methods. EDA serves as a baseline for the data cleaning and preprocessing step. It is a crucial technique to intelligently proceed with the subsequent steps in the entire process of machine learning [44].
The Bag of Words (BoW) model was created for cyberbullying and non-cyberbullying instances. The top 100 occurring tokens, excluding commonly occurring domain-specific stop words (since they carry the least semantic meaning), were used to devise a word frequency cloud. Figure 7 highlights the frequency distribution of words in each category. It can be observed that the usage of specific swear words is minimal in the hate speech content, since most of the uttered instances are implicit in nature. Text content analysis was performed on the Roman Urdu corpus to examine the data before making any assumptions.
The word count distribution is shown in Figure 8 and Figure 9. The graphs display tweet length in terms of word counts versus the frequency percentage of tweets for cyberbullying and non-cyberbullying instances. It can be seen that the majority of hate speech tweets are shorter and more compact than non-hate speech tweets. Moreover, since most of the hate speech instances are implicit, this makes it even more challenging to identify features and perform feature engineering to capture such patterns.
The average word length distributions for hate speech and non-hate speech instances are highlighted in Figure 10 and Figure 11, respectively. They show a similar pattern: for hate speech, users use average-length tokens, hence the character counts lie on the left-hand side and centre of the graphs, whereas for non-hate speech, the token lengths and character counts are larger.
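A brief sketch of how such distributions can be derived is given below, assuming the preprocessed corpus is held in a pandas DataFrame with 'text' and 'label' columns; the column names and the two example rows are assumptions for illustration.

```python
import pandas as pd

# Two illustrative rows standing in for the preprocessed corpus.
df = pd.DataFrame({
    "text": ["naak to ajeeb surhai jaise hai tumhare", "aj mausam bohat acha hai"],
    "label": ["cyberbullying", "non-cyberbullying"],
})

# Word count and average word length per tweet, comparable to Figures 8-11.
df["word_count"] = df["text"].str.split().str.len()
df["avg_word_len"] = df["text"].str.split().apply(
    lambda words: sum(len(w) for w in words) / len(words)
)

print(df.groupby("label")[["word_count", "avg_word_len"]].describe())
```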

8. Experimental Setup

This section details the proposed models, ensemble techniques, and feature engineering for Roman Urdu cyberbullying detection. To clean and prepare the data for the training phase, the tweets from the Roman Urdu corpus were processed via the different language-specific preprocessing phases detailed earlier in Section 6. Then, various classification models and ensemble techniques were investigated. The experiments and simulations for this research work were carried out on an 11th Gen Intel Core i7 machine (4 cores, 8 logical processors, 2.8 GHz) with a 256 GB solid-state drive and 64-bit Python 3.8.
The models were developed and trained mainly in Scikit-learn and XGBoost, apart from numerous other packages. Scikit-learn is developed on top of SciPy, NumPy, and Matplotlib; it is a robust package providing important and efficient tools for machine learning and statistical learning. XGBoost is an optimized distributed gradient boosting library developed to be highly adaptable, efficient, and portable. The PyCharm IDE was used for all the implementations. Optimal results and parameters were achieved via repeated experimentation and GridSearchCV.
The setup involved experimenting with several techniques. Precisely, we investigated the efficiency and performance, along with the time complexity, of Multinomial Naive Bayes, SVM, Logistic Regression, Decision Tree, AdaBoost, XGBoost, and Bagging classifiers. The ensemble learning mechanism combines individual models to improve predictive power and stability. The results were derived based on the vast number of n-grams, n-gram combinations, and statistical features extracted from the preprocessed Roman Urdu textual comments. Five-fold cross-validation was used to evaluate and analyze the performance. The results are elaborated in the evaluation methods and results section.
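A condensed sketch of this experimental setup is given below: a scikit-learn pipeline couples TF-IDF vectorization with a classifier, and GridSearchCV searches parameters under five-fold cross-validation. The parameter grid shown is a small illustrative subset, and the texts/labels placeholders stand in for the preprocessed corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

texts = ["..."]   # preprocessed Roman Urdu tweets (placeholder)
labels = [0]      # 1 = cyberbullying, 0 = non-cyberbullying (placeholder)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", SVC()),
])

param_grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2), (1, 3)],   # unigrams and hybrid n-grams
    "clf__C": [0.1, 1, 10],
    "clf__kernel": ["linear", "rbf"],
}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy")
# search.fit(texts, labels)                      # uncomment with the real corpus
# print(search.best_params_, search.best_score_)
```

Swapping SVC for MultinomialNB, LogisticRegression, DecisionTreeClassifier, AdaBoostClassifier, XGBClassifier, or BaggingClassifier reuses the same skeleton for the other six techniques.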

Feature Engineering

Text data are inherently high dimensional in nature. The curse of dimensionality arises when ML techniques are applied to high-dimensional data, resulting in sparsity, which impacts the classifier's predictive accuracy. Identifying and categorizing learning patterns from voluminous data is extremely challenging and computationally expensive, since repetitive and inappropriate features impede performance in text classification problems. Moreover, in the Roman Urdu language, due to huge lexical variation (refer to Section 3), a single word gives rise to many dimensions, making it even more problematic. Extracting significant and relevant features is mandatory to cope with this curse of dimensionality.
Text vectorization was carried out using the Count Vectorizer and TF-IDF techniques after the removal of domain-specific stop words. The Count Vectorizer transforms the text document collection into a matrix of integers and yields a sparse representation using scipy.sparse.csr_matrix in Python. TF-IDF (term frequency-inverse document frequency) is an algorithm based on word statistics for text feature extraction; it is intended to reflect the significance of a term in the corpus or the collection. Mathematically, it is described by Equation (1).
$w(d,t) = TF(d,t) \times \log\frac{N}{df(t)}$ (1)
where w(d, t) is the TF-IDF weight of a term t in a document d, N represents the total number of documents, and df(t) denotes the number of documents in the corpus containing the term t. In the above equation, the first factor enhances recall, whereas the second factor enhances the word embedding accuracy [45].
Apart from statistical features, word-level n-gram features are used in this study, i.e., unigrams, bigrams, and trigrams, along with hybrid features, i.e., uni-bigrams (unigrams + bigrams), uni-trigrams (unigrams + trigrams), and bi-trigrams (bigrams + trigrams). They are used in combination with the Count Vectorizer and TF-IDF. Consider a Roman Urdu text: "Wo tha bhi itna bonga aadmi". This generates the sequence of unigrams + bigrams as: ('wo'), ('tha'), ('bhi'), ('itna'), ('bonga'), ('aadmi'), ('wo', 'tha'), ('tha', 'bhi'), ('bhi', 'itna'), ('itna', 'bonga'), ('bonga', 'aadmi'). The extracted features were also used in the form of feature sets.
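The sketch below reproduces this n-gram expansion on the example sentence with scikit-learn's vectorizers; ngram_range=(1, 2) yields the combined unigram + bigram vocabulary, and a TfidfVectorizer with a wider range produces the weighted hybrid features.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

sentence = ["wo tha bhi itna bonga aadmi"]

count_vec = CountVectorizer(ngram_range=(1, 2))   # unigrams + bigrams
count_vec.fit(sentence)
print(count_vec.get_feature_names_out())
# ['aadmi' 'bhi' 'bhi itna' 'bonga' 'bonga aadmi' 'itna' 'itna bonga'
#  'tha' 'tha bhi' 'wo' 'wo tha']

tfidf_vec = TfidfVectorizer(ngram_range=(1, 3))   # uni + bi + trigrams with TF-IDF weights
X = tfidf_vec.fit_transform(sentence)
print(X.shape)                                    # sparse matrix: (1, number of n-grams)
```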

9. Evaluation Methods and Results

This section provides the results of the performance and efficiency of the proposed models, which were examined using a series of experiments and various evaluation parameters via GridSearchCV. We incorporated five-fold cross-validation and averaged the results. In this work, machine learning and ensemble algorithms were constructed, namely SVM, Multinomial NB, LR, DT, AdaBoost, XGBoost, and the Bagging classifier. The analysis was performed using standard assessment and evaluation metrics, including accuracy, the area under the receiver operating characteristic (ROC) curve (AUC), precision, recall, the precision-recall curve, and the F1-score. Accuracy (A) is defined as the proportion of correct predictions among the total number of predictions. It can be calculated as the ratio of correctly classified instances to the total number of instances, as given in Equation (2).
$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$ (2)
Precision (P) is the measure of the exactness of the model results. It is the ratio of the number of examples correctly labeled as positive to the total number of positively classified examples. It is given in Equation (3).
$\mathrm{Precision} = \frac{TP}{TP + FP}$ (3)
Recall (R) is the overall coverage of the model. It measures the completeness of the classifier results. Recall shows how many of the total instances are correctly classified as cyberbullying instances. It can be identified as in Equation (4).
$\mathrm{Recall} = \frac{TP}{TP + FN}$ (4)
The F1-score is the harmonic mean of precision and recall. It is usually more informative than accuracy when even a minor class distribution imbalance occurs. Mathematically, it can be computed as given in Equation (5).
$\mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$ (5)
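A hedged sketch of this evaluation protocol is shown below: five-fold cross-validation with accuracy, precision, recall, and F1 averaged across folds. Logistic Regression is used only as an example estimator, and the texts/labels variables are placeholders for the preprocessed corpus.

```python
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)), LogisticRegression(max_iter=1000))
scoring = ["accuracy", "precision", "recall", "f1"]

# texts, labels = ...   # preprocessed corpus and binary labels
# scores = cross_validate(model, texts, labels, cv=5, scoring=scoring)
# for metric in scoring:
#     print(metric, scores[f"test_{metric}"].mean(), scores[f"test_{metric}"].std())
```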
The results for mean accuracy scores (including standard deviation) of algorithms based on extracted feature vectors are depicted in Table 5.
All of the significant results are highlighted in bold. It can be observed that the SVM model outperforms the others, with the highest accuracy score of 83% on TF-IDF with unigrams and with combined N-grams (unigrams + trigrams), followed by the LR model. LR produces an accuracy of 80% on TF-IDF with unigrams, closely followed by combined N-grams at a 79% accuracy level. The voting-based ensemble models AdaBoost, XGBoost, and Bagging are unique to this study, and we validated the prediction of tweet labels with optimal accuracy levels of 70%, 79%, and 78%, respectively. These levels were typically reached with TF-IDF and unigrams and with the combination of unigrams and bigrams.
The performance evaluation of the models based on precision (P), recall (R), and F-score (F1), including standard deviation, is presented in Table 6, which shows the mean results of each technique for the highest accuracy levels and feature parameters. Considering the F1-score (for both the cyberbullying and non-cyberbullying scenarios), SVM, LR, and multinomial naïve Bayes were the best-performing machine learning algorithms, and XGBoost and Bagging were the best-performing ensemble techniques.
We use the precision–recall (PR) curve as the visual representation for all the experiments, as presented in Figure 12. For each potential cut-off, the connection between precision (positive predictive value) and recall (sensitivity) is defined by a precision–recall curve; it illustrates the trade-off between recall and precision for various thresholds. Among the machine learning methods, over the cyberbullying class, SVM with the combination of unigrams and the TF-IDF feature set exhibits the best precision–recall area.
We also visualized the performance of the proposed models using the AUC metric, the area under the receiver operating characteristics (ROC) curve. It is a metric used to assess a classification model's performance at different thresholds; in binary classification, the thresholds are various probability cutoffs that separate the two classes, and the metric uses probability to capture the model's capacity to distinguish between classes. Figure 13 presents the AUC curves of the proposed models for the Roman Urdu dataset with hybrid N-grams and TF-IDF unigrams. We can observe from the figure that the proposed approach shows the best results for SVM and LR. It also highlights better results for the ensemble Bagging classifier with TF-IDF and unigrams.
Bag-of-Words (BoW) features were extracted from the Roman Urdu data using the Count Vectorizer. The results for mean accuracy, precision, recall, and F1-score, including standard deviation, specifically over the cyberbullying class, are highlighted in Table 7, with the most significant results in bold. It can be observed that the LR algorithm produced an optimal accuracy of 82% and an F1-score of 79% over the cyberbullying class among the ML techniques, whereas, among the ensemble techniques, the Bagging ensemble algorithm provided better performance with an accuracy score of 78% and an F1-score of 72%.
To extract more significant conclusions, we plotted the classifier results when training and testing over the BoW approach. The summary of model comparison over various evaluation metrics is presented in Figure 14.

Time Complexity

The time complexity of the algorithms in terms of average execution time (training and testing time) is illustrated in Table 8. The results indicate that LR was the fastest algorithm overall, achieving the best training and prediction time of 0.06 s, closely followed by multinomial naïve Bayes and AdaBoost with 0.156 and 1.63 s, respectively, whereas SVM consumed the longest time of 35.1 s for training and prediction. However, SVM also outperforms all the other models implemented and examined in this research in terms of accuracy. The ensemble method XGBoost took the second-longest execution time, followed by the Bagging classifier.
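The execution-time comparison can be reproduced with a simple timing wrapper such as the sketch below; the data variables are placeholders, and wall-clock timing via time.perf_counter is an assumption about how training-plus-prediction time was measured.

```python
import time

def timed_fit_predict(model, X_train, y_train, X_test):
    # Measure combined training and prediction (wall-clock) time in seconds.
    start = time.perf_counter()
    model.fit(X_train, y_train)
    predictions = model.predict(X_test)
    elapsed = time.perf_counter() - start
    return predictions, elapsed

# preds, seconds = timed_fit_predict(model, X_train, y_train, X_test)
# print(f"Training + prediction time: {seconds:.3f} s")
```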

10. Prediction Probabilities

The estimation of prediction probabilities was computed and analyzed for both implicit and explicit Roman Urdu instances. Figure 15 presents an implicit cyberbullying instance ("naak to ajeeb surhai jaise hai tumhare"), an expression targeting the physical appearance of an addressee with the intent of insulting them. The instance comprises seven unique unigrams with three domain-specific stop words, i.e., 'to', 'jaise', and 'hai', which were eliminated during the data preprocessing phase. There are no explicit hate or cyberbullying tokens, as can be seen in the tree structure on the right-hand side of the diagram. The results contribute to overall prediction probabilities of 0.90 and 0.10 for the cyberbullying and non-cyberbullying classes, respectively, thus classifying the instance as cyberbullying. The experimentation was performed using SVM with the TF-IDF and unigram combination of feature sets.
Figure 16 presents the classification of an explicit cyberbullying instance using the same experimental settings, parameters, and model. The explicit cyberbullying instance ("banda 2 books parh le kamskam jutt aur jahil nahen hoga tumhare tarah") is an expression of hate and insult that uses explicit words to target the intelligence of an addressee. The distribution of cyberbullying and non-cyberbullying tokens in the example tweet is depicted in the tree structure on the right-hand side of the diagram. We can observe that swear words such as "jutt" and "jahil" are explicitly used to target the person and contribute to overall prediction probabilities of 0.8 and 0.2 for the cyberbullying and non-cyberbullying classes, respectively.
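A minimal sketch of how such per-class probabilities can be obtained is given below. Note that scikit-learn's SVC only exposes predict_proba when constructed with probability=True (Platt scaling); this detail, and the placeholder variables, are assumptions rather than the authors' exact configuration.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# TF-IDF unigrams feeding an SVM with probability estimates enabled.
pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)), SVC(probability=True))
# pipeline.fit(texts, labels)   # fit on the preprocessed corpus

implicit_example = ["naak ajeeb surhai tumhare"]   # stop words already removed
# proba = pipeline.predict_proba(implicit_example)[0]
# print(dict(zip(pipeline.classes_, proba)))       # e.g., {0: 0.10, 1: 0.90}
```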

11. Conclusions and Future Work

This research work makes a novel effort to address the cyberbullying detection problem for both implicit and explicit patterns on social media using Roman Urdu corpora comprising the colloquial natural writing styles of users. Our findings contribute to analyzing and understanding the colloquial patterns of the low-resource Roman Urdu language and the associated challenges. The study established advanced preprocessing techniques, developed specifically for Roman Urdu corpora, to enhance the efficiency and performance of the proposed techniques. We conducted extensive experiments using seven text categorization approaches based on machine learning models and ensemble techniques and compared their performance using standard assessment metrics and visualization methods. All the experiments were carried out using Python's PyCharm IDE, and the most suitable parameter values were identified using grid search. Among the ML-based text categorization approaches, SVM with a fusion of N-grams and TF-IDF produced the most satisfactory results. Among the ensemble techniques, extreme gradient boosting (XGBoost) yielded the optimal performance with an accuracy of 79%, closely followed by the Bagging technique with an accuracy score of 78%, though with relatively greater time complexity compared to adaptive boosting (AdaBoost). The time complexity of the algorithms, comprising training and testing durations, was also assessed. The time evaluations indicate that LR produced the optimal execution time of 0.06 s, followed by multinomial NB and the AdaBoost ensemble technique over a fusion of N-grams. Probability estimations were identified for different scenarios. The major contribution of this research work is the achievement of promising results in the identification of implicit and explicit cyberbullying patterns over both classes in colloquial Roman Urdu text via extensive experimentation with machine learning, voting-based ensemble methods, and hybrid features. The models proposed in this research can be implemented and embedded as social media filters to prevent, or at least reduce, harassment and bullying instances in the Roman Urdu language, which can cause anxiety, depression, and emotional changes, and in some cases may even end in deadly consequences. Additionally, it will help cybercrime investigative teams and centres to monitor social media content and make the internet a secure and safer place for all facets of society.
In the future, we aim to investigate the role of user meta-information-related features in detecting cyberbullying patterns. We also plan to develop a web application to demonstrate the applicability of the proposed models, allowing the classification of Roman Urdu tweets and comments in both implicit and explicit forms, evaluating the implemented models, and receiving users' feedback on the models' predictions.

Author Contributions

Conceptualization, A.D., M.A.M., S.B. and A.S. (Adel Sulaiman); methodology, A.D., M.A.M., S.B., M.H., H.A., A.A. and A.S. (Asadullah Shaikh); software, A.D., M.A.M., S.B. and A.S. (Adel Sulaiman); validation, A.D., M.H., H.A., A.A. and A.S. (Asadullah Shaikh); formal analysis, A.D. and M.A.M.; investigation, A.D., M.A.M., S.B. and A.S. (Adel Sulaiman); resources, M.H., H.A. and A.A.; data curation, A.D., M.H. and M.A.M.; writing—original draft preparation, A.D., M.A.M., S.B. and A.S. (Adel Sulaiman); writing—review and editing, M.H., H.A., A.A. and A.S. (Asadullah Shaikh); visualization, A.S. (Adel Sulaiman); supervision, M.A.M., S.B. and A.S. (Asadullah Shaikh); project administration, M.A.M., S.B., H.A. and A.A.; funding acquisition, A.S. (Adel Sulaiman). All authors have read and agreed to the published version of the manuscript.

Funding

The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Collaboration Funding program grant code (NU/RC/SERC/11/7).

Data Availability Statement

The data that support the findings of this research study are available on request from the corresponding author. The data are not publicly available due to privacy and ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Meske, C.; Bunde, E. Design principles for user interfaces in AI-Based decision support systems: The case of explainable hate speech detection. Inf. Syst. Front. 2022, 1–31. [Google Scholar] [CrossRef]
  2. Sharma, A.; Kabra, A.; Jain, M. Ceasing hate with MoH: Hate Speech Detection in Hindi–English code-switched language. Inf. Process. Manag. 2022, 59, 102760. [Google Scholar] [CrossRef]
  3. Vrysis, L.; Vryzas, N.; Kotsakis, R.; Saridou, T.; Matsiola, M.; Veglis, A.; Arcila-Calderón, C.; Dimoulas, C. A web interface for analyzing hate speech. Future Internet 2021, 13, 80. [Google Scholar] [CrossRef]
  4. Celik, S. Experiences of internet users regarding cyberhate. Inf. Technol. People 2019, 32, 1446–1471. [Google Scholar] [CrossRef]
  5. Giumetti, G.W.; Kowalski, R.M. Cyberbullying via social media and well-being. Curr. Opin. Psychol. 2022, 45, 101314. [Google Scholar] [CrossRef]
  6. Nations, U. United Nations: Understanding Hate Speech. 2021. Available online: https://www.un.org/en/hate-speech/understanding-hate-speech/what-is-hate-speech (accessed on 30 October 2022).
  7. Wang, S.; Kim, K.J. Effects of victimization experience, gender, and empathic distress on bystanders’ intervening behavior in cyberbullying. Soc. Sci. J. 2021, 1–10. [Google Scholar] [CrossRef]
  8. Nelatoori, K.B.; Kommanti, H.B. Multi-task learning for toxic comment classification and rationale extraction. J. Intell. Inf. Syst. 2022, 1–25. [Google Scholar] [CrossRef]
  9. Kee, D.M.H.; Al-Anesi, M.A.L.; Al-Anesi, S.A.L. Cyberbullying on social media under the influence of COVID-19. Glob. Bus. Organ. Excell. 2022, 41, 11–22. [Google Scholar] [CrossRef]
  10. Ahmed, I. Cyber Bullying Doubles during Pandemic. June 2020. Available online: https://www.thenews.com.pk/tns/detail/671918-cyber-bullying-doubles-during-pandemic (accessed on 30 October 2022).
  11. Rosa, H.; Pereira, N.; Ribeiro, R.; Ferreira, P.C.; Carvalho, J.P.; Oliveira, S.; Coheur, L.; Paulino, P.; Simão, A.V.; Trancoso, I. Automatic cyberbullying detection: A systematic review. Comput. Hum. Behav. 2019, 93, 333–345. [Google Scholar] [CrossRef]
  12. Xu, Y. The invisible aggressive fist: Features of cyberbullying language in China. Int. J. Semiot. Law Rev. Int. Sémiotique Jurid. 2021, 34, 1041–1064. [Google Scholar] [CrossRef]
  13. Ayo, F.E.; Folorunso, O.; Ibharalu, F.T.; Osinuga, I.A. Machine learning techniques for hate speech classification of twitter data: State-of-the-art, future challenges and research directions. Comput. Sci. Rev. 2020, 38, 100311. [Google Scholar] [CrossRef]
  14. Fortuna, P.; Nunes, S. A survey on automatic detection of hate speech in text. Acm Comput. Surv. CSUR 2018, 51, 1–30. [Google Scholar] [CrossRef]
  15. MacAvaney, S.; Yao, H.R.; Yang, E.; Russell, K.; Goharian, N.; Frieder, O. Hate speech detection: Challenges and solutions. PLoS ONE 2019, 14, e0221152. [Google Scholar] [CrossRef]
  16. Tahmasbi, N.; Fuchsberger, A. Challenges and future directions of automated cyberbullying detection. In Proceedings of the 24th Americas Conference on Information Systems 2018: Digital Disruption, AMCIS 2018, New Orleans, LA, USA, 16–18 August 2018. [Google Scholar]
  17. Dinakar, K.; Reichart, R.; Lieberman, H. Modeling the detection of textual cyberbullying. In Proceedings of the International AAAI Conference on Web and Social Media, Barcelona, Spain, 17–21 July 2011; Volume 5, pp. 11–17. [Google Scholar]
  18. Nobata, C.; Tetreault, J.; Thomas, A.; Mehdad, Y.; Chang, Y. Abusive language detection in online user content. In Proceedings of the 25th International Conference on World Wide Web, Montreal, QC, Canada, 11–15 April 2016; pp. 145–153. [Google Scholar]
  19. Gambäck, B.; Sikdar, U.K. Using convolutional neural networks to classify hate-speech. In Proceedings of the First Workshop on Abusive Language Online, Vancouver, BC, Canada, 4 August 2017; pp. 85–90. [Google Scholar]
  20. López-Vizcaíno, M.F.; Nóvoa, F.J.; Carneiro, V.; Cacheda, F. Early detection of cyberbullying on social media networks. Future Gener. Comput. Syst. 2021, 118, 219–229. [Google Scholar] [CrossRef]
  21. Dadvar, M.; Trieschnigg, D.; Jong, F.D. Experts and machines against bullies: A hybrid approach to detect cyberbullies. In Proceedings of the Canadian Conference on Artificial Intelligence, Montreal, QC, Canada, 6–9 May 2014; pp. 275–281. [Google Scholar]
  22. Pawar, R.; Agrawal, Y.; Joshi, A.; Gorrepati, R.; Raje, R.R. Cyberbullying Detection System with Multiple Server Configurations. In Proceedings of the 2018 IEEE International Conference on Electro/Information Technology (EIT), Rochester, MI, USA, 3–5 May 2018; pp. 90–95. [Google Scholar]
  23. Raisi, E.; Huang, B. Cyberbullying detection with weakly supervised machine learning. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, Sydney, NSW, Australia, 31 July–3 August 2017; pp. 409–416. [Google Scholar]
  24. Van Hee, C.; Jacobs, G.; Emmery, C.; Desmet, B.; Lefever, E.; Verhoeven, B.; De Pauw, G.; Daelemans, W.; Hoste, V. Automatic detection of cyberbullying in social media text. PLoS ONE 2018, 13, e0203794. [Google Scholar] [CrossRef]
  25. Yao, M.; Chelmis, C.; Zois, D.S. Cyberbullying detection on instagram with optimal online feature selection. In Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Barcelona, Spain, 28–31 August 2018; pp. 401–408. [Google Scholar]
  26. Akram, M.H.; Shahzad, K. Violent Views Detection in Urdu Tweets. In Proceedings of the 2021 15th International Conference on Open Source Systems and Technologies (ICOSST), Lahore, Pakistan, 15–16 December 2021; pp. 1–6. [Google Scholar]
  27. Haidar, B.; Chamoun, M.; Serhrouchni, A. A multilingual system for cyberbullying detection: Arabic content detection using machine learning. Adv. Sci. Technol. Eng. Syst. J. 2017, 2, 275–284. [Google Scholar] [CrossRef]
  28. Karayiğit, H.; Acı, Ç.İ.; Akdağlı, A. Detecting abusive Instagram comments in Turkish using convolutional Neural network and machine learning methods. Expert Syst. Appl. 2021, 174, 114802. [Google Scholar] [CrossRef]
  29. Sigurbergsson, G.I.; Derczynski, L. Offensive language and hate speech detection for Danish. arXiv 2019, arXiv:1908.04531. [Google Scholar]
  30. Risch, J.; Stoll, A.; Wilms, L.; Wiegand, M. Overview of the GermEval 2021 shared task on the identification of toxic, engaging, and fact-claiming comments. In Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments, Duesseldorf, Germany, 6 September 2021; pp. 1–12. [Google Scholar]
  31. Kumar, R.; Ojha, A.K.; Zampieri, M.; Malmasi, S. Proceedings of the first workshop on trolling, aggression and cyberbullying (TRAC-2018). In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), Santa Fe, NM, USA, 25 August 2018. [Google Scholar]
  32. Bosco, C.; Felice, D.; Poletto, F.; Sanguinetti, M.; Maurizio, T. Overview of the evalita 2018 hate speech detection task. In Proceedings of the EVALITA 2018-Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian, Turin, Italy, 12–13 December 2018; Volume 2263, pp. 1–9. [Google Scholar]
33. Talpur, K.; Yuhaniz, S.; Sjarif, N.; Ali, B. Cyberbullying detection in Roman Urdu language using lexicon based approach. J. Crit. Rev. 2020, 7, 834–848.
34. Rizwan, H.; Shakeel, M.H.; Karim, A. Hate-speech and offensive language detection in Roman Urdu. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; pp. 2512–2522.
35. Mehmood, K.; Essam, D.; Shafi, K.; Malik, M.K. An unsupervised lexical normalization for Roman Hindi and Urdu sentiment analysis. Inf. Process. Manag. 2020, 57, 102368.
36. Rana, T.A.; Shahzadi, K.; Rana, T.; Arshad, A.; Tubishat, M. An Unsupervised Approach for Sentiment Analysis on Social Media Short Text Classification in Roman Urdu. Trans. Asian Low Resour. Lang. Inf. Process. 2021, 21, 1–16.
37. Dewani, A.; Memon, M.A.; Bhatti, S. Cyberbullying detection: Advanced preprocessing techniques & deep learning architecture for Roman Urdu data. J. Big Data 2021, 8, 1–20.
38. Shahroz, M.; Mushtaq, M.F.; Mehmood, A.; Ullah, S.; Choi, G.S. RUTUT: Roman Urdu to Urdu translator based on character substitution rules and unicode mapping. IEEE Access 2020, 8, 189823–189841.
39. Velankar, A.; Patil, H.; Joshi, R. A review of challenges in machine learning based automated hate speech detection. arXiv 2022, arXiv:2209.05294.
40. Dewani, A.; Memon, M.A.; Bhatti, S. Development of computational linguistic resources for automated detection of textual cyberbullying threats in Roman Urdu language. 3C TIC Cuad. Desarro. Apl. Las TIC 2021, 10, 101–121.
41. Naseem, U.; Razzak, I.; Eklund, P.W. A survey of pre-processing techniques to improve short-text quality: A case study on hate speech detection on Twitter. Multimed. Tools Appl. 2021, 80, 35239–35266.
42. Rahimi, Z.; Homayounpour, M.M. The impact of preprocessing on word embedding quality: A comparative study. Lang. Resour. Eval. 2022, 1–35.
43. Alam, K.S.; Bhowmik, S.; Prosun, P.R.K. Cyberbullying detection: An ensemble based machine learning approach. In Proceedings of the 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India, 4–6 February 2021; pp. 710–715.
44. Mehta, H.; Passi, K. Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI). Algorithms 2022, 15, 291.
45. Muneer, A.; Fati, S.M. A comparative analysis of machine learning techniques for cyberbullying detection on Twitter. Future Internet 2020, 12, 187.
Figure 1. Proposed approach for detecting cyberbullying patterns in Colloquial Roman Urdu.
Figure 2. Text preprocessing steps on Colloquial Roman Urdu micro text.
Figure 3. Tokenization of Roman Urdu text.
Figure 4. Colloquial variant handling in the Roman Urdu language.
Figure 5. Domain-specific stop word handling in Roman Urdu.
Figure 6. Slang and contractions in Roman Urdu.
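For readers who want a concrete picture of the preprocessing stage summarized in Figures 2, 3, and 6, the short Python sketch below illustrates cleaning, tokenization, and slang/contraction expansion. It is only a minimal sketch: the `SLANG_MAP` lexicon and the regular expressions are illustrative assumptions, not the exact resources or rules used in this study.

```python
import re

# Hypothetical slang/contraction lexicon (illustrative entries only).
SLANG_MAP = {"plz": "please", "thnx": "thanks", "u": "you"}

def clean_and_tokenize(text: str) -> list[str]:
    """Lowercase, drop URLs and punctuation, whitespace-tokenize, expand slang."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # remove URLs
    text = re.sub(r"[^a-z0-9\s]", " ", text)    # strip punctuation and emojis
    tokens = text.split()
    return [SLANG_MAP.get(tok, tok) for tok in tokens]

print(clean_and_tokenize("Plz yaar, idhar aao... thnx!"))
# -> ['please', 'yaar', 'idhar', 'aao', 'thanks']
```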
Figure 7. Frequency distribution of the tokens for cyberbullying and non-cyberbullying comments by Roman Urdu users.
Figure 8. Word count distribution for cyberbullying tweets.
Figure 9. Word count distribution for non-cyberbullying tweets.
Figure 10. Average word length distribution for non-cyberbullying tweets.
Figure 11. Average word length distribution for cyberbullying tweets.
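The descriptive statistics visualized in Figures 7–11 (token frequencies, word counts, and average word lengths per class) can be computed along the following lines. The DataFrame columns `tweet` and `label` and the two example rows are hypothetical stand-ins for the actual corpus, not the study's data.

```python
import pandas as pd
from collections import Counter

# Hypothetical frame: one row per comment with a binary class label.
df = pd.DataFrame({
    "tweet": ["tum bahut achay ho", "shame on you bloddy bitch"],
    "label": ["non-cyberbullying", "cyberbullying"],
})

# Token frequency distribution per class (Figure 7).
tokens_per_class = {
    lbl: Counter(" ".join(grp["tweet"]).split())
    for lbl, grp in df.groupby("label")
}
# Word count per comment (Figures 8 and 9).
df["word_count"] = df["tweet"].str.split().str.len()
# Average word length per comment (Figures 10 and 11).
df["avg_word_len"] = df["tweet"].apply(
    lambda t: sum(len(w) for w in t.split()) / max(len(t.split()), 1)
)
print(tokens_per_class)
print(df[["label", "word_count", "avg_word_len"]])
```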
Figure 12. Precision–Recall curves for the proposed algorithms at the highest accuracy level in each case. (a) SVM (Tfidf + unigrams). (b) Naïve Bayes (Tfidf + unigrams). (c) Logistic regression (Tfidf + unigrams). (d) Decision tree (unigrams + trigrams). (e) AdaBoost (unigrams + trigrams). (f) XGBoost (Tfidf + unigrams). (g) Bagging classifier (Tfidf + unigrams).
Figure 13. Precision–Recall curves for the proposed algorithms at the highest accuracy level in each case. (a) SVM (Tfidf + unigrams). (b) Naïve Bayes (Tfidf + unigrams). (c) Logistic regression (Tfidf + unigrams). (d) Decision tree (unigrams + trigrams). (e) AdaBoost (unigrams + trigrams). (f) XGBoost (Tfidf + unigrams). (g) Bagging classifier (Tfidf + unigrams).
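Precision–recall curves such as those in Figures 12 and 13 can be produced with scikit-learn, as in the sketch below. The synthetic matrix from `make_classification` is only a stand-in for the TF-IDF + unigram feature matrix; the curve it yields will not match the reported ones.

```python
from sklearn.datasets import make_classification   # stand-in for TF-IDF features
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import PrecisionRecallDisplay
import matplotlib.pyplot as plt

# Synthetic stand-in for the TF-IDF + unigram feature matrix and labels.
X, y = make_classification(n_samples=1000, n_features=50, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

clf = SVC(kernel="linear", probability=True).fit(X_tr, y_tr)
PrecisionRecallDisplay.from_estimator(clf, X_te, y_te, name="SVM (Tfidf + unigrams)")
plt.show()
```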
Figure 14. Proposed approach for detecting cyberbullying patterns in colloquial Roman Urdu.
Figure 15. Prediction probability score for implicit Roman Urdu text.
Figure 16. Prediction probability score for explicit Roman Urdu text.
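Figures 15 and 16 report prediction probabilities for implicit and explicit instances, which the study maps to severity categories. The sketch below shows one plausible way to perform such a mapping; the probability cut-offs and the commented `clf`/`vectorizer` objects are illustrative assumptions, not the thresholds or models used by the authors.

```python
def severity_from_probability(p_bully: float) -> str:
    """Map the predicted cyberbullying probability to a severity band.
    The cut-off values here are illustrative, not those used in the paper."""
    if p_bully < 0.50:
        return "non-cyberbullying"
    if p_bully < 0.75:
        return "low severity (implicit)"
    return "high severity (explicit)"

# 'clf' and 'vectorizer' are assumed to be a fitted classifier and TF-IDF vectorizer:
# p_bully = clf.predict_proba(vectorizer.transform([comment]))[0, 1]
print(severity_from_probability(0.62))   # -> 'low severity (implicit)'
```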
Table 1. Example of Roman Urdu Script.

| Roman Urdu Text (LTR Pattern and Latin/Roman Script) | Urdu Text (RTL Pattern) | Transliteration in English |
|---|---|---|
| Ye khobsorat shehar hai, roshniyon ka shehar | یہ خوبصورت شہر ہے روشنیوں کا شہر | This is a beautiful city, city of lights. |
| kuch pata chalay to mujhay bhi inform karna | کچھ پتا چلے تو مجھے بھی انفارم کرنا | If you come to know anything, then please do inform me as well. |
| Ye dress kafi acha hai aaj kal aisay colors to chal rahay hain | یہ ڈریس کافی اچھا ہے آج کل ایسے کلرس تو چل رہے ہیں | This dress is too good, and these kinds of colors are in trend nowadays. |
| Tum ne bahut acha likha hai, lagta hai jaisay pehlay he writer ho | تم نے بہت اچھا لکھا ہے لگتا ہے جیسے پہلے ہے رائیٹر ہو | You have written well, seems like you are already a writer. |
Table 2. Example of Roman Urdu instances from Roman Urdu corpora.

| Tweet (Roman Urdu Text) | Profanity | Elaboration |
|---|---|---|
| Sahi hai ab tumhare himat hai to akaile main mujhse milnay aana samjhata hon phir. | Hateful | An utterance indicating blackmail and involving a psychological and physical threat. |
| Apni khair manao aur bas dekhtay jao, main tum ko choron gi nahen. | Hateful | An utterance indicating blackmail and involving a psychological and physical threat. |
| Na shakal na akal kis ko pari hai tumhain galay bandhnay ki. | Hateful | An utterance using degrading, abusive, and offensive language meant to insult the addressee. |
| Drama acha kar leti ho tum shame on you bloddy bitch chullu bhar pani me doob maro. | Hateful | An utterance using degrading, abusive, and offensive language meant to insult the addressee. |
| jutt zameendaar aadmi hai ye to akal bhi to utnna chalae ga na. | Hateful | An utterance that insults disadvantaged people because of their color, caste, culture, race, or ethnic origin. |
| chal oye idea acha hey wese. | Non-hate | An expression of warm approval of someone's thoughts and ideas. |
| Khuda karay ga tum hamesha takleef aur pareshani main raho gay kabhi khushi na milay. | Hateful | An expression indicating exclusion and wishing that some form of misfortune or adversity will befall the victim. |
| dono sisters twins hain kitni piyari hain na dikhnay main. | Non-hate | An utterance expressing admiration for a person's physical appearance. |
Table 3. Colloquial Roman Urdu variants.

| Roman Urdu Language | Urdu Transliteration |
|---|---|
| “Idhar” janaa tha sath chaltay hain phir | ادھر جانا تھا ساتھ چلتے ہیں پھر |
| “Idharr” janaa tha sath chaltay hain phir | ادھر جانا تھا ساتھ چلتے ہیں پھر |
| “Iddhar” janaa tha sath chaltay hain phir | ادھر جانا تھا ساتھ چلتے ہیں پھر |
| “Idhhar” janaa tha sath chaltay hain phir | ادھر جانا تھا ساتھ چلتے ہیں پھر |
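One simple way to handle spelling variants such as those in Table 3 is to collapse repeated characters before dictionary lookup, as in the sketch below. This heuristic is only an approximation (it also shortens legitimate double letters, e.g., "accha" becomes "acha") and is not necessarily the normalization rule applied in this study.

```python
import re

def collapse_repeats(token: str) -> str:
    """Collapse runs of the same character so that 'Idharr', 'Iddhar',
    and 'Idhhar' all normalize to the canonical form 'Idhar'."""
    return re.sub(r"(.)\1+", r"\1", token)

for variant in ["Idhar", "Idharr", "Iddhar", "Idhhar"]:
    print(variant, "->", collapse_repeats(variant))
```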
Table 4. Roman Urdu stop words.

Sample stop words in Roman Urdu: ab, abhi, aese, aur, aye, ayi, bhi, bas, chal, dain, de, phir, ga, di, diya, dono, gai, ge, hui, hum, in, ise, hun, isko, yon, waisa
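Filtering these domain-specific stop words amounts to a plain set lookup over the token stream. The sketch below uses the sample list from Table 4 and is intended only as an illustration of the step, not as the study's full stop word resource.

```python
# Sample domain-specific Roman Urdu stop words (subset from Table 4).
ROMAN_URDU_STOPWORDS = {
    "ab", "abhi", "aese", "aur", "aye", "ayi", "bhi", "bas", "chal",
    "dain", "de", "phir", "ga", "di", "diya", "dono", "gai", "ge",
    "hui", "hum", "in", "ise", "hun", "isko", "yon", "waisa",
}

def remove_stopwords(tokens: list[str]) -> list[str]:
    """Drop domain-specific Roman Urdu stop words from a token list."""
    return [t for t in tokens if t not in ROMAN_URDU_STOPWORDS]

print(remove_stopwords(["ab", "tum", "bhi", "idhar", "aao"]))  # -> ['tum', 'idhar', 'aao']
```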
Table 5. Accuracy comparison of proposed models. Most significant results are in bold.

| Feature Engineering | SVM | NB | LR | DT | AdaBoost | XGBoost | Bagging |
|---|---|---|---|---|---|---|---|
| Tfidf + Unigrams | **0.83 ± 0.01** | **0.78 ± 0.04** | **0.80 ± 0.03** | **0.76 ± 0.02** | **0.70 ± 0.03** | **0.79 ± 0.04** | **0.78 ± 0.03** |
| Tfidf + Bigrams | 0.69 ± 0.03 | 0.68 ± 0.05 | 0.71 ± 0.04 | 0.68 ± 0.02 | 0.58 ± 0.04 | 0.61 ± 0.02 | 0.68 ± 0.03 |
| Tfidf + Trigrams | 0.64 ± 0.01 | 0.65 ± 0.02 | 0.67 ± 0.03 | 0.64 ± 0.03 | 0.58 ± 0.03 | 0.57 ± 0.05 | 0.65 ± 0.02 |
| Unigram + Bigrams | **0.83 ± 0.02** | 0.77 ± 0.04 | 0.79 ± 0.05 | **0.76 ± 0.02** | **0.70 ± 0.05** | 0.76 ± 0.03 | 0.77 ± 0.04 |
| Unigram + Trigrams | 0.68 ± 0.04 | **0.78 ± 0.05** | 0.79 ± 0.03 | **0.76 ± 0.01** | **0.70 ± 0.02** | 0.76 ± 0.04 | 0.76 ± 0.02 |
| Bigrams + Trigrams | 0.68 ± 0.02 | 0.71 ± 0.03 | 0.70 ± 0.04 | 0.67 ± 0.02 | 0.59 ± 0.01 | 0.61 ± 0.03 | 0.68 ± 0.01 |
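The feature/classifier combinations in Table 5 correspond to TF-IDF n-gram representations tuned with GridSearchCV and cross-validation. The sketch below shows the general shape of such an experiment for an SVM; the four example comments, the parameter grid, and `cv=2` are toy assumptions chosen only to keep the snippet self-contained, not the study's actual configuration.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

# Toy stand-ins for the preprocessed Roman Urdu comments and their labels.
texts = ["tum bahut achay ho", "shame on you bloddy bitch",
         "idhar aao phir milte hain", "akal nahi hai tum main"]
labels = [0, 1, 0, 1]   # 0 = non-cyberbullying, 1 = cyberbullying

pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("svm", LinearSVC()),
])
param_grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2), (2, 2)],   # unigrams, uni+bigrams, bigrams
    "svm__C": [0.1, 1, 10],
}
# A real experiment would use a larger cv (e.g., 5 or 10) on the full corpus.
search = GridSearchCV(pipe, param_grid, cv=2, scoring="accuracy")
search.fit(texts, labels)
print(search.best_params_, round(search.best_score_, 2))
```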
Table 6. Performance evaluation of proposed models based on precision, recall, and F1 measure.

| Technique | Parameters | Precision (Cyberbullying) | Precision (Non-CB) | Recall (Cyberbullying) | Recall (Non-CB) | F1 (Cyberbullying) | F1 (Non-CB) |
|---|---|---|---|---|---|---|---|
| SVM | Tfidf + Uni | 0.83 ± 0.02 | 0.83 ± 0.01 | 0.78 ± 0.03 | 0.88 ± 0.02 | 0.80 ± 0.02 | 0.85 ± 0.02 |
| NB | Tfidf + Uni | 0.81 ± 0.04 | 0.76 ± 0.03 | 0.64 ± 0.02 | 0.88 ± 0.03 | 0.71 ± 0.03 | 0.81 ± 0.03 |
| LR | Tfidf + Uni | 0.80 ± 0.05 | 0.80 ± 0.06 | 0.73 ± 0.04 | 0.86 ± 0.02 | 0.76 ± 0.04 | 0.83 ± 0.02 |
| DT | Uni + Tri | 0.74 ± 0.03 | 0.78 ± 0.02 | 0.72 ± 0.04 | 0.80 ± 0.03 | 0.73 ± 0.03 | 0.79 ± 0.02 |
| AdaBoost | Uni + Tri | 0.65 ± 0.05 | 0.75 ± 0.06 | 0.71 ± 0.04 | 0.70 ± 0.03 | 0.68 ± 0.04 | 0.72 ± 0.04 |
| XGBoost | Tfidf + Uni | 0.76 ± 0.02 | 0.80 ± 0.03 | 0.74 ± 0.01 | 0.82 ± 0.02 | 0.75 ± 0.02 | 0.81 ± 0.02 |
| Bagging | Tfidf + Uni | 0.77 ± 0.03 | 0.78 ± 0.02 | 0.70 ± 0.02 | 0.84 ± 0.03 | 0.73 ± 0.02 | 0.81 ± 0.02 |
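Per-class precision, recall, and F1 values such as those in Table 6 can be obtained directly from scikit-learn's classification report. The label vectors in the sketch below are illustrative placeholders for the held-out test split.

```python
from sklearn.metrics import classification_report

# Placeholder ground-truth and predicted labels (1 = cyberbullying, 0 = non-cyberbullying).
y_true = [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]

# target_names follow the sorted label order: index 0 -> label 0, index 1 -> label 1.
print(classification_report(y_true, y_pred,
                            target_names=["non-cyberbullying", "cyberbullying"]))
```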
Table 7. Evaluation metric comparison of proposed models using the BoW approach. Values are reported as evaluation metric ± standard deviation.

| Model | Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|
| SVM | 0.80 ± 0.02 | 0.78 ± 0.01 | 0.74 ± 0.2 | 0.76 ± 0.01 |
| Multinomial NB | 0.80 ± 0.03 | 0.79 ± 0.02 | 0.76 ± 0.03 | 0.77 ± 0.02 |
| Logistic Regression | 0.82 ± 0.04 | 0.79 ± 0.03 | 0.78 ± 0.04 | 0.79 ± 0.03 |
| Decision Tree | 0.76 ± 0.03 | 0.75 ± 0.03 | 0.69 ± 0.02 | 0.72 ± 0.03 |
| AdaBoost | 0.71 ± 0.02 | 0.66 ± 0.04 | 0.74 ± 0.02 | 0.69 ± 0.02 |
| XGBoost | 0.75 ± 0.06 | 0.71 ± 0.04 | 0.73 ± 0.03 | 0.72 ± 0.03 |
| Bagging Classifier | 0.78 ± 0.02 | 0.80 ± 0.04 | 0.66 ± 0.01 | 0.72 ± 0.02 |
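A BoW baseline comparable to Table 7 can be evaluated with a CountVectorizer pipeline and cross-validation over several metrics at once. As before, the toy corpus, the logistic regression classifier, and `cv=2` are assumptions made only to keep the sketch runnable; they do not reproduce the reported figures.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Toy stand-ins for the preprocessed comments and labels.
texts = ["tum bahut achay ho", "shame on you bloddy bitch",
         "idhar aao phir milte hain", "akal nahi hai tum main"]
labels = [0, 1, 0, 1]

bow_lr = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_validate(bow_lr, texts, labels, cv=2,
                        scoring=["accuracy", "precision", "recall", "f1"])
print({k: v.mean().round(2) for k, v in scores.items() if k.startswith("test_")})
```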
Table 8. Time complexity of algorithms. Most significant results are in bold.

| No. | Algorithm | Training and Testing Time (s) | Parameters |
|---|---|---|---|
| 1 | SVM | 35.1 | Tfidf + Uni |
| 2 | Multinomial NB | 0.156 | Tfidf + Uni |
| 3 | LR | **0.06** | Tfidf + Uni |
| 4 | DT | 2.250 | Uni + Tri |
| 5 | AdaBoost | 1.63 | Uni + Tri |
| 6 | XGBoost | 29.77 | Tfidf + Uni |
| 7 | Bagging | 20.89 | Tfidf + Uni |
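Execution times such as those in Table 8 can be measured by wrapping training and prediction in a wall-clock timer. The sketch below uses `time.perf_counter` around a toy TF-IDF + logistic regression pipeline; it illustrates the measurement procedure only and will not reproduce the reported timings.

```python
import time
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the preprocessed comments and labels.
texts = ["tum bahut achay ho", "shame on you bloddy bitch",
         "idhar aao phir milte hain", "akal nahi hai tum main"]
labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
start = time.perf_counter()
model.fit(texts, labels)      # training time
model.predict(texts)          # prediction time (on the same toy data)
elapsed = time.perf_counter() - start
print(f"training + testing time: {elapsed:.3f} s")
```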
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
