Arabic Text Clustering Using Self-Organizing Maps and Grey Wolf Optimization

Abstract: Arabic text clustering is an essential topic in Arabic Natural Language Processing (ANLP). Its significance resides in various applications, such as document indexing, categorization, user review analysis, and others. After inspecting the current work on clustering Arabic text, we observed that most researchers focus on applying K-Means clustering while overlooking other clustering techniques. Our evaluation shows that K-Means suffers from inconsistent clustering results and weak clustering performance when the data dimensionality increases. Unlike K-Means clustering, Artificial Neural Network (ANN) models such as Self-Organizing Maps (SOM) demonstrate higher accuracy and efficiency in clustering even with high-dimensional datasets. In this paper, we introduce a new clustering model based on an optimization technique called Grey Wolf Optimization (GWO), used conjointly with SOM clustering to enhance its clustering performance and accuracy. The evaluation results of our proposed technique show an improvement in effectiveness and efficiency in comparison with state-of-the-art approaches.


Introduction
Clustering text documents is an important field in the area of Natural Language Processing (NLP) as it simplifies the tedious process of categorizing specific documents among millions of resources, especially when metadata such as key phrases, titles, and labels are not available. Text clustering is valuable for different applications, including topic extraction, spam filtering, automatic document categorization, user reviews analysis, and fast information retrieval.
The process of clustering text written in natural languages is complicated, especially for the Arabic language. One of the complications in Arabic is the language's morphological complexity. For instance, a word in Arabic can be written in several forms that might exceed ten forms [1]. Ambiguity is also another major complication in the Arabic language, which is caused by the richness and complexity of Arabic morphology [1,2]. There are various other factors in the Arabic language causing difficulty in text clustering. Among these factors are the different dialects for different regions. Texts from different regions may exhibit significant linguistic variations. Moreover, in the Arabic language, the ordering of words in a sentence provides quite different interpretations for that sentence [3,4].
Several Arabic text clustering techniques have been proposed by researchers to address these challenges. Among the various techniques, it has been concluded that the K-Means clustering algorithm is the most widely applied, due to its simplicity and efficiency in comparison with other clustering algorithms [2,[5][6][7]. However, the initialization process of K-Means weakens its accuracy. The initialization starts with plotting the centers of the clusters randomly and then assigning documents to the nearest center. If the initialization is inaccurate, then the clustering will be imprecise [8]. Researchers proposed the use of K-Means++, which improves the initialization process of K-Means [9]. However, our experiments show that even with this smart initialization process, the accuracy of the clustering is low compared to other techniques. Researchers also proposed the use of other clustering techniques, such as Suffix Tree clustering [10] and SOM [11]. Suffix Tree clustering has a limitation of overlapping documents in different clusters [12], while SOM clustering techniques demonstrated high effectiveness in clustering text even with high-dimensional datasets [13][14][15][16][17].
In this paper, we introduce a new optimized SOM clustering approach that utilizes Grey Wolf Optimization (GWO) [18] to enhance the clustering performance and accuracy of the traditional SOM clustering. To the best of our knowledge, the integration of SOM and the GWO algorithm is the first of its kind. Hence, we also investigate its efficiency and effectiveness. We evaluate our proposed approach using different clustering metrics, such as the F1-score, precision, recall, and accuracy. More specifically, the contributions of this paper are as follows:
• A novel Arabic text clustering approach that is based on Self-Organizing Maps (SOM) and Grey Wolf Optimization (GWO).
• An extensive overview of the research that is related to our approach.
• An evaluation of our proposed approach that demonstrates its effectiveness and efficiency in comparison with other clustering techniques.
• A publicly available implementation of our proposed approach.
The remainder of this paper is organized as follows. In Section 2, we give an overview of the research related to our approach. Section 3 provides background information on the components used in our approach. In Section 4, we describe the details of our proposed Arabic text clustering approach. We provide a detailed experimental evaluation in Section 5, and we conclude in Section 6.

Related Work
A wide range of papers have been published aiming to enhance the clustering of Arabic text. In the next three subsections, we present an overview of the recent work related to our paper. In Section 2.1, we provide an overview of recent Arabic text clustering techniques. In Section 2.2, we discuss related work that applies SOM in clustering, and in Section 2.3, we provide an overview of the research related to GWO.

Arabic Text Clustering
Alharwat and Hegazi demonstrated the issue of data mining and data with high dimensions [19]. To overcome the addressed problem, the authors applied modeling techniques to the documents before clustering them. The authors used the Modern Standard Arabic (MSA) dataset [20], which has several versions with different preprocessed articles. The outcome of this study showed that normalized data provided better clustering quality than unnormalized data. With normalization, the purity of their clusters was 0.933, and the F1-score was 0.8732. Similar to Alharwat and Hegazi, Al-Azzawy et al. used K-Means to cluster an Arabic dataset corpus which contains 20 documents related to news and short anecdotes [21]. The highest clustering scores for the precision, recall, and F1-measure were 98%, 88%, and 93%, respectively. Mahmood and Al-Rufaye also addressed the problem of the high dimensionality of documents by minimizing the dimensionality of documents using the Term Frequency (TF), Inverse Document Frequency (IDF), and Term Frequency-Inverse Document Frequency (TF-IDF) feature selection approaches [22]. Following that, K-Means and K-Medoids were used for the clustering. The authors conducted their experiment on a 300-document corpus they built. The authors reported that K-Medoids provided more accurate results than K-Means: K-Means scored 60%, 78%, and 67% for the precision, recall, and F1-measure, respectively, while K-Medoids scored 80%, 83%, and 81%. Another group of researchers used K-Means clustering along with the TF-IDF and Binary Term Occurrence (BTO) feature selection approaches [23]. The authors used a dataset that contains 1121 Arabic tweets. The outcome of their work showed that the BTO feature selection approach outperformed the TF-IDF. The literature for clustering Arabic text using K-Means shows high variation in the performance scores, which could be attributed to the instability and inconsistency of the K-Means clustering algorithm.
To overcome the limitations of the K-Means random initialization of cluster centroids, researchers used PSO-optimized K-Means to cluster Arabic text [24][25][26]. The use of Particle Swarm Optimization (PSO) contributes to selecting the initial seeds of K-Means. A group of researchers applied their algorithm to cluster Quran verses by theme [24], whereas another group [25,26] used three different datasets, named BBC, CNN, and OSAC [27]. The outcome of these research papers demonstrated the effectiveness of applying optimization methods for enhancing the accuracy of the clustering models used.
Another work on clustering Arabic documents was based on the sentiment orientation and context of words in the data corpus [5]. The authors used the Brown clustering algorithm on user reviews of several topics, such as news, movies, and restaurants. The data in this research were collected from several sources [28][29][30][31]. The evaluation results of this approach showed that the subjectivity and polarity of the clustered documents provided rates of 96% and 85%, respectively. The evaluation results indicated that the number of clusters also affects the accuracy rates, showing that fewer clusters provide better results.
In another work [2], the authors used a combination of Markov Clustering, Fuzzy-C-Means, and Deep Belief Neural Networks (DBN) in an attempt to cluster Arabic documents. Two datasets were used in this study; the first was acquired from the Al-Jazeera news website with 10,000 documents and the second from the Saudi Press Agency [32] with 6000 documents. The clustering precision, recall, and F1-measure were 91.2%, 90.9%, and 91.02%, respectively. The model that was used was highly impacted by the feature selection of the root words, leading to imprecise clustering results.
Al-Anzi and Abuzeina [11] used the Expectation-Maximization (EM), SOM, and K-Means algorithms to cluster Arabic documents. They built a corpus of 1000 documents extracted from a Kuwaiti newspaper website called Alanba [33]. The documents cover different topics, such as health, technology, sports, politics, and others. The authors then compared the evaluation results of the three clustering algorithms. They reported that SOM obtained the highest accuracy among the three algorithms with a rate of 93.4%. This study suggests that the use of SOM for clustering Arabic text is promising.
The Bond Energy Algorithm (BEA) was also used by researchers to cluster Arabic text [34]. The results of this study showed that the BEA algorithm outperforms K-Means clustering in terms of precision, recall, and the F1-score.
In the broader field of text clustering, researchers also proposed the use of prototype-based models for text clustering [35]. The results of this work showed that it outperforms K-Means clustering.
To conclude, most of the current work on Arabic text clustering used K-Means clustering because it is a simple model that can be applied easily. However, the mechanism that K-Means follows has limitations. For instance, K-Means first initializes cluster centers and then assigns documents to these clusters. If the initialization of K-Means is not well formulated, then the risk of incorrect clustering arises. Moreover, techniques that integrate K-Means clustering with Particle Swarm Optimization [26] have promising results, which shows that optimization contributes positively to clustering models. In addition, previous work showed that the use of SOM provided better clustering results than K-Means for Arabic text [11]. We hypothesize that integrating SOM with an optimization method would result in better clustering, as we present in this paper. Table 1 presents a summary of the recent work on Arabic text clustering.

Self-Organizing Maps
Researchers have applied the Self-Organizing Maps (SOM) clustering algorithm in several domains, such as speech recognition [36], medical imaging and analysis [37], classification of satellite images [38], and others. The following presents some applications of the SOM algorithm.
He et al. attempted to resolve the issue of the sudden failure of electric car batteries [39]. The authors used SOM to assess the battery's performance by clustering the battery's characteristics, including its capacity, temperature, voltage, lifespan, internal resistance, and self-discharge rate. The clustering results then provide the driver with information about the battery and its usage and alert the driver when the battery should be replaced. The authors did not report the accuracy of their clustering; however, they compared the K-Means clustering algorithm with SOM and concluded that the latter outperformed K-Means. Bara et al. applied the SOM algorithm to analyze students' E-Learning activities [40]. SOM are applied to cluster the E-Learning activities to investigate the relation between the students' activities on the E-Learning portal and their academic performance. The authors obtained a dataset from the Universiti Teknologi Malaysia (UTM) Moodle LMS log records. Then, SOM were applied to cluster students according to their E-Learning activities. In their evaluation, the authors observed a correlation between the students' performance and their E-Learning activities, showing that E-Learning activity positively affects student performance.
SOM were also used by Simon and Elias to detect fake followers on Twitter [41]. The authors applied their model to a dataset of fake followers provided by the Institute of Informatics and Telematics of the Italian National Research Council. The dataset uses account-related features to categorize the type of user, whether fake or real. The features used include the following count, follower count, favorites count, and others. The use of SOM in this study showed its effectiveness in detecting fake accounts.
Mei et al. used SOM to detect damaged lesions in the brain from Magnetic Resonance Imaging (MRI) scans of Relapsing Remitting Multiple Sclerosis (RRMS) patients [42]. SOM are applied to cluster lesions based on the damage they exhibit. The authors used a dataset of 10 patients to perform their study and demonstrated the effectiveness of automatically analyzing MRI scans through SOM.
Similarly, in the research by Sarmiento et al. [43], the authors used SOM clustering as an early disease diagnosis tool. The authors used SOM to locate significant gene pathways in the human body. Gene pathways can aid in the early diagnosis of different diseases, such as diabetes, heart disease, and others. The authors used a dataset provided by the Kyoto Encyclopedia of Genes and Genomes (KEGG). They also applied the K-Means clustering algorithm but concluded that SOM performed better than K-Means.
To conclude, the discussed related work of SOM shows its applicability to various research areas. In addition, many researchers performed a comparison between the SOM and K-Means clustering algorithms [26,39,43]. These researchers mainly found that SOM are more effective than K-Means in clustering.

Grey Wolf Optimization Algorithm
The GWO algorithm has been used in several research areas, including engineering design, economy, astronomy, and others. Since its introduction [18], GWO has gained widespread attention from researchers due to its effectiveness in solving complex optimization problems. The following presents some of its applications.
Guo et al. used an optimized learning model to solve mathematical nonlinear equation problems [44]. The authors compared the use of two optimization algorithms, GWO and PSO, and their results showed that GWO outperformed PSO.
In another work, the authors compared the use of several optimization algorithms, including GWO, the Genetic Algorithm (GA), Ant-Lion Optimization (ALO), the Krill Herd Algorithm (KHA), and others, for solving the problem of Combined Economic and Emission Dispatch (CEED) [45]. The result of this study showed that the use of GWO achieved higher performance and better solutions. Xiao et al. also compared the use of GWO with Particle Swarm Optimization (PSO) along with a machine learning classifier to detect far orbits through the extraction of image features [46]. The PSO-based classifier's performance was low due to its computational complexity, whereas the GWO-based classifier outperformed the PSO-based classifier by 8%. The discussed research shows the effectiveness of using GWO for enhancing the performance of machine learning models.
Researchers have also employed GWO-based models to discover optimal solutions for various engineering problems. For instance, a group of researchers applied GWO to a civil engineering problem involving the design of water distribution networks. The objective was to minimize financial costs and reduce the number of network components, including pipe sizes, pump ratings, and other elements. This approach met the established expectations from both the performance and cost perspectives [47]. On the other hand, Majeed and Rao [48] built a GWO-based model to automate the process of designing analog circuits. Through this application, they effectively showcased the utility of GWO by producing enhanced circuit designs in a minimal amount of time.
In general, several researchers used optimization techniques to improve the results of different clustering approaches [49][50][51][52]. The results of these studies support our hypothesis that using optimization with clustering techniques can help in improving their effectiveness.
To conclude, the use of the GWO algorithm spans several research areas, including medical, engineering, astronomy, and others. The discussed studies show that the use of GWO enhances the performance and the accuracy of the results for the defined problems.

Background
This section provides brief background on the components of the clustering model proposed in this paper.

Self-Organizing Maps
SOM are an unsupervised type of Artificial Neural Network (ANN) [53]. They use an unsupervised learning model to create a map of different groups [54]. As illustrated in Figure 1, SOM consist of two main layers: the input layer, where inputs or neurons are inserted, and the output layer, also called the competitive layer, where groups of similar inputs are formed. The outputs of the SOM are generated by multiplying the inputs with the SOM weights. These weights are randomly initialized when training a new network, and then at each iteration, the weights are updated in accordance with Equation (1). Correspondingly, Equation (2) is used to update the neighborhood function θ(t), which defines the SOM network topology, where:
• w_ij: the weights.
• BMU: the Best Matching Unit, i.e., the unit whose weight vector is closest to the input instance.
Two crucial attributes underlie the effectiveness of the SOM algorithm. The first is its capability to reduce the input space by moving similar inputs close together, eventually forming clusters of outputs. The second is the topological ordering formed by the location of the neurons in the SOM grid; this ordering is correlated with the input space features [38].
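To make the update rule concrete, the following minimal sketch (plain NumPy, with an illustrative 3×3 map and made-up decay schedules, not the MiniSom implementation used later in this paper) finds the Best Matching Unit for one input and pulls nearby units toward it using a Gaussian neighborhood:

```python
import numpy as np

def som_update(weights, x, t, max_iter, lr0=0.5, sigma0=1.0):
    """One SOM training step: locate the BMU for input x and move
    nearby units toward x, weighted by a Gaussian neighborhood."""
    rows, cols, _ = weights.shape
    # Best Matching Unit: the grid cell whose weight vector is closest to x
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Decay the learning rate and neighborhood radius over time (illustrative schedule)
    lr = lr0 * np.exp(-t / max_iter)
    sigma = sigma0 * np.exp(-t / max_iter)
    for i in range(rows):
        for j in range(cols):
            grid_dist2 = (i - bmu[0]) ** 2 + (j - bmu[1]) ** 2
            theta = np.exp(-grid_dist2 / (2 * sigma ** 2))  # neighborhood function
            weights[i, j] += lr * theta * (x - weights[i, j])
    return weights

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, size=(3, 3, 4))  # 3x3 map, 4 input features
x = rng.uniform(-1, 1, size=4)
w = som_update(w, x, t=0, max_iter=100)
```

Every unit moves some fraction of the way toward the input, with the BMU moving the most; this is what produces the topological ordering discussed above.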

Grey Wolf Optimization Algorithm
Grey Wolf Optimization (GWO) is a nature-inspired meta-heuristic algorithm that mimics the social hunting behavior of grey wolves to solve optimization problems [18]. In particular, it reflects the leadership hierarchy and hunting behavior of grey wolf packs. The hierarchy can be represented as a pyramid: the leader wolf, the alpha, which makes all the decisions, is at the top. The second level holds the beta, which assists the alpha in its decisions and can take the alpha's place when required. The next level is the delta, which has the responsibility of protecting the pack. The lowest level holds several omega wolves, which are dominated by all the other types in the pyramid. The GWO algorithm consists of these steps: initialize a population of search agents with random positions; evaluate the fitness of each agent and designate the three best solutions as the alpha, beta, and delta wolves; update the position of every agent toward these three leaders; and repeat. Once the termination criterion is met, the alpha wolf represents the best solution found by the GWO algorithm, which can be used as the optimal solution to the given optimization problem.
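The steps above can be sketched as follows. This is a minimal illustration of the canonical GWO position-update equations; the function name, parameter defaults, and toy objective are our own and not from the original algorithm description:

```python
import numpy as np

def gwo_minimize(fitness, dim, n_agents=5, lb=-1.0, ub=1.0, max_iter=50, seed=0):
    """Minimal GWO sketch: each agent moves toward the three best
    solutions in the current population (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_agents, dim))  # wolf positions
    for t in range(max_iter):
        order = np.argsort([fitness(x) for x in X])
        alpha, beta, delta = (X[order[k]].copy() for k in range(3))
        a = 2 - 2 * t / max_iter  # control parameter decays linearly from 2 to 0
        for i in range(n_agents):
            moves = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                moves.append(leader - A * np.abs(C * leader - X[i]))
            X[i] = np.clip(np.mean(moves, axis=0), lb, ub)
    scores = np.array([fitness(x) for x in X])
    return X[np.argmin(scores)], float(scores.min())

# Toy usage: minimize the sphere function, whose minimum is at the origin.
best, val = gwo_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
```

As the control parameter `a` shrinks, the agents shift from exploring the search space to converging around the three leaders.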

Proposed Approach
In this section, we present our proposed approach for clustering Arabic text, which leverages the GWO algorithm to enhance the clustering results of SOM. Instead of relying on the default random initialization process, our approach optimizes the clustering process by fine-tuning the SOM's initial weights. Our proposed approach has two main phases. First, we run GWO to find optimized SOM initial weights. Then, we use the output of the previous phase to set the SOM's initial weights and run it. The following are the details of these two phases.

Phase1: Grey Wolf Optimization Algorithm
In order to run the GWO algorithm, we have to set the six parameters that are required to execute it:
• Fitness function: Defining the fitness function is critical, as it must be aligned with the algorithm's purpose, which, in this paper, is minimizing the clustering Quantization Error (QE). We compute the fitness function by running the SOM clustering for only ten epochs and computing the clustering QE.
• Dimension: This parameter represents the shape of the SOM's weights. It is calculated by multiplying the number of features in the dataset by the SOM's dimension. Both values, the number of features and the SOM dimension, are obtained from the dataset metadata [18,55].
• Lower and upper bounds: These two values represent the range of possible SOM weights, which is [−1, 1].
• Dataset: This parameter represents the actual dataset that we are using for the clustering.
• Number of search agents: This parameter is chosen empirically and varies depending on the problem being solved (see Section 5 for details).
After setting the parameters, we run the GWO algorithm until the stopping criterion is met, which we define in our approach as having five consecutive iterations without improvement in the fitness value. By the end, the GWO algorithm returns the position of the alpha wolf, which has the best fitness value. The position of the alpha wolf reflects the initial weights to be used for the SOM clustering algorithm.
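A sketch of this fitness evaluation is shown below. The `quantization_error` helper mirrors the quantity MiniSom reports as the quantization error (mean distance from each sample to its nearest map unit); for brevity, the sketch scores the decoded initial weights directly instead of first training the SOM for ten epochs as our actual fitness function does, and all names and shapes are illustrative assumptions:

```python
import numpy as np

def quantization_error(weights, data):
    """Mean distance from each sample to its nearest map unit
    (the quantity MiniSom reports as the quantization error)."""
    flat = weights.reshape(-1, weights.shape[-1])
    d = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def fitness(position, data, dim_x, dim_y, n_features):
    """GWO fitness: decode a wolf position (a flat vector in [-1, 1])
    into SOM initial weights and score them by quantization error."""
    weights = position.reshape(dim_x, dim_y, n_features)
    return quantization_error(weights, data)

rng = np.random.default_rng(1)
data = rng.uniform(-1, 1, size=(20, 4))   # toy dataset: 20 samples, 4 features
pos = rng.uniform(-1, 1, size=3 * 3 * 4)  # dimension = dim_x * dim_y * n_features
qe = fitness(pos, data, dim_x=3, dim_y=3, n_features=4)
```

Lower values indicate that the candidate weights already sit close to the data, which is exactly what GWO minimizes here.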

Phase2: Self-Organizing Maps Optimization
Figure 2 illustrates the abstract idea of optimizing the SOM algorithm. In the second phase, we use the best solution from the first phase as the initial weights for the SOM algorithm. Our hypothesis is that using the values obtained from the GWO algorithm will cause the SOM clustering algorithm to converge faster and provide better clustering results. To implement the optimized version of SOM, we used a Python library called MiniSom [55]. The clustering process starts by setting the initial weights of the SOM to the weights computed in the first phase. Then, we run the SOM clustering using different parameter values, which we empirically evaluate in Section 5.

Experimental Evaluation
To assess the effectiveness of our proposed model for clustering Arabic text, we implemented it and evaluated its accuracy and efficiency. The full source code of our implementation is publicly available to ensure the reproducibility of our results [56]. In this section, we address several research questions regarding the effectiveness and efficiency of our approach. To answer these research questions, we carried out an empirical evaluation on two Arabic datasets, the MSA dataset [25] and the NADA dataset [57], and compared our results with two other models for clustering Arabic text, K-Means and standard non-optimized SOM. Figure 3 illustrates the experiment process used in this paper. The experiment starts with collecting data sources for Arabic text. Then, each of these sources is preprocessed and represented using two different representation techniques. After that, the outcome of the representation is clustered using three techniques: K-Means, SOM, and GWO-optimized SOM (our model). The final step is to evaluate the clustering in terms of the accuracy, F1-score, precision, recall, and training time for each experiment. All experiments are executed on a MacBook Pro with a 2.3 GHz Intel Core i5 processor and 8 GB of 2133 MHz LPDDR3 RAM. The following subsections show the details of each of these steps.

Data Collection
For the purpose of evaluating our proposed model, we carried out our experiments on two datasets, the MSA corpus [20] and the NADA corpus [57]. Note that both datasets are processed in the same way.
The MSA corpus includes nine subjects related to technology, sports, religion, politics, literature, law, health, economy, and art. There are five versions of the corpus: The first contains the original dataset, and the second contains the data after removing stop words and punctuation. The third and fourth versions contain the data after applying a stemmer. The last version contains the data after extracting the roots. Each category contains 300 documents, which results in 2700 documents for each version. In our experiments, we used the first version and applied our own preprocessing procedure (described in Section 5.2). Table 2 presents the subject distribution for the MSA corpus. The second corpus is NADA. It is a new corpus built by integrating topics from other Arabic corpus sources, such as OSAC [27] and the Diab Dataset [58]. NADA includes ten categories of text files about Arabic literature, economical social sciences, political social sciences, law, sports, art, Islamic religion, computer science, health, and astronomy, with a total of 7310 text files. The distribution of the text files is illustrated in Table 3.

Data Preprocessing
After collecting the data, the second step is preprocessing the datasets. The preprocessing of Arabic text poses significant challenges due to the language's complex morphological structure and syntactical rules. In Arabic, a single word can comprise multiple independent tokens, and morphological knowledge of the language needs to be incorporated into the tokenizer [59]. Stemming is also challenging given the diverse range of morphological configurations and diacritics in the language. To address these challenges, our preprocessing has two stages:
• Text tokenization.
• Text stemming and morphological analysis.
In our experiments, cleaning and preprocessing the datasets is executed using PyArabic [60] and ISRIStemmer [61]. Figure 4 presents a sample text file before and after the preprocessing phase.

Text Tokenization
This step is used to separate and identify words in the text by breaking down the words that comprise multiple tokens and eliminating white spaces, punctuation marks, and mark-ups. Text tokenization defines the word boundaries as described in [59].
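For illustration, a bare-bones tokenizer in the spirit of this step might look as follows. This regex-based sketch only splits on non-letter characters and is far simpler than the PyArabic tokenizer we actually use:

```python
import re

# Arabic letters (U+0621-U+064A) and diacritics (U+064B-U+0652);
# everything else is treated as a word boundary in this sketch.
ARABIC = r"[\u0621-\u0652]+"

def simple_tokenize(text):
    """Return the maximal runs of Arabic characters in the text,
    dropping white space, punctuation, and mark-up characters."""
    return re.findall(ARABIC, text)

tokens = simple_tokenize("النص، العربي: تجربة")
# tokens == ["النص", "العربي", "تجربة"]
```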

Stemming and Morphological Analysis
Stemming is used to eliminate the suffixes and prefixes from the words in the text files. The steps to perform the stemming are as follows:
1. Remove the diacritics from the Arabic word.
2. Remove the prefixes from the Arabic word.
3. Remove the suffixes from the Arabic word.
4. Remove the connective letters from the Arabic word.
5. Normalize the initial Hamza to bare Alif.
Following the stemming is the morphological analysis, which returns the root of the words. The stem function from the ISRIStemmer library [61] combines the stemming and the morphological analysis by directly returning the root of the words.
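The steps above can be sketched as follows. This is an illustrative light stemmer with a small, hypothetical affix list, not the ISRIStemmer used in our experiments (whose stem function additionally performs the morphological analysis and returns the root); connective-letter removal (step 4) is omitted for brevity:

```python
import re

DIACRITICS = re.compile(r"[\u064B-\u0652]")          # fathatan .. sukun
PREFIXES = ("ال", "و", "ف", "ب", "ك", "ل")            # illustrative subset only
SUFFIXES = ("ها", "ان", "ات", "ون", "ين", "ه", "ة")    # illustrative subset only

def light_stem(word):
    """Illustrative light stemmer following steps 1-3 and 5 above."""
    word = DIACRITICS.sub("", word)                   # 1. strip diacritics
    for p in PREFIXES:                                # 2. strip one known prefix
        if word.startswith(p) and len(word) - len(p) >= 3:
            word = word[len(p):]
            break
    for s in SUFFIXES:                                # 3. strip one known suffix
        if word.endswith(s) and len(word) - len(s) >= 3:
            word = word[:-len(s)]
            break
    # 5. normalize hamza/madda forms to bare alif
    return re.sub(r"[\u0622\u0623\u0625]", "\u0627", word)

stem = light_stem("الكتابة")  # -> "كتاب"
```

The length guards keep the stemmer from reducing a word below a plausible stem size, a common safeguard in Arabic light stemming.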

Data Representation
The outcome of the preprocessing phase is used for data representation. Data representation is an essential phase because datasets cannot be processed directly by the clustering algorithms. In this phase, the words in the text files are mapped into predefined vectors of real numbers using a term frequency model. Two representation methods are used: the CountVectorizer (CV) and the Term Frequency-Inverse Document Frequency (TF-IDF) Vectorizer.

CV
CountVectorizer (CV) is a simple representation technique that provides satisfactory results. It counts the number of times each word appears in a document and uses this count as the word's weight. The final output of this representation method is a matrix of the words in each document along with their numbers of occurrences [62].
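A minimal sketch of what this representation produces (a stand-in for scikit-learn's CountVectorizer, using whitespace splitting for simplicity) is:

```python
from collections import Counter

def count_vectorize(docs):
    """Minimal CountVectorizer sketch: rows are documents, columns are
    vocabulary terms, and entries are raw occurrence counts."""
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: j for j, w in enumerate(vocab)}
    matrix = []
    for d in docs:
        row = [0] * len(vocab)
        for w, c in Counter(d.split()).items():
            row[index[w]] = c
        matrix.append(row)
    return vocab, matrix

vocab, m = count_vectorize(["a b a", "b c"])
# vocab == ["a", "b", "c"]; m == [[2, 1, 0], [0, 1, 1]]
```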

TF-IDF Vectorizer
Term Frequency-Inverse Document Frequency (TF-IDF) [63] is a representation that measures how relevant a word is to a document in a collection of documents. The term has two parts, which are multiplied together. The first is the Term Frequency (TF), also called the Term Score, which considers all the files in the dataset as a bag of words. The TF is calculated by dividing the number of occurrences of a word in a document by the total number of words in the same document. The second part is the Inverse Document Frequency (IDF), which is calculated by taking the logarithm of the total number of documents in the dataset divided by the number of documents containing a specified word w.
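The two parts above can be computed directly. The sketch below follows the plain TF and IDF definitions just given (production vectorizers such as scikit-learn's apply additional smoothing):

```python
import math

def tf_idf(docs):
    """TF-IDF per the definitions above: TF is a word's share of its
    document; IDF is log(total documents / documents containing the word)."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = {}                                   # document frequency per word
    for toks in tokenized:
        for w in set(toks):
            df[w] = df.get(w, 0) + 1
    scores = []
    for toks in tokenized:
        scores.append({w: (toks.count(w) / len(toks)) * math.log(n / df[w])
                       for w in set(toks)})
    return scores

s = tf_idf(["a b a", "b c"])
# "b" occurs in every document, so its IDF, and hence its TF-IDF, is 0.
```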

Clustering Techniques
The output of the data representation phase is the input for the next phase, which is the clustering. Clustering is performed to identify the cluster each document belongs to. Because there are two representation techniques, CV and TF-IDF, as discussed in Section 5.3, we apply the clustering techniques to both representations for the purpose of our evaluation. The clustering techniques we used in our evaluation are as follows:
1. The K-Means clustering technique.
2. The Self-Organizing Maps clustering technique.
3. The Grey Wolf-Optimized Self-Organizing Maps clustering technique (our proposed model).
In the following, we show the details of the implementation for each clustering technique.

K-Means Clustering Technique
K-Means is a clustering algorithm that partitions the data into k clusters. Its mechanism is based on placing center points, where each center represents a cluster. Data are passed to the K-Means algorithm, and then the distance between the centers and each data sample is calculated. The sample is then assigned to the center with the shortest distance. Applying K-Means to the two datasets is straightforward because the number of clusters for each dataset is known. For the purpose of our experiment, we use the implementation of K-Means that is available in the scikit-learn Python library [64]. This implementation uses K-Means++ initialization, which is an enhanced initialization process for K-Means. Experiments have shown that K-Means++ initialization improves performance compared to standard K-Means [9].
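The mechanism just described can be sketched in a few lines. This is a minimal Lloyd-style illustration, not the scikit-learn implementation we use (whose k-means++ routine replaces the random initialization below):

```python
import numpy as np

def kmeans(data, k, n_iter=20, seed=0):
    """Minimal K-Means sketch: random initial centers, then alternating
    assignment to the nearest center and center recomputation."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # leave empty clusters unchanged
                centers[j] = data[labels == j].mean(axis=0)
    # final assignment against the final centers
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1), centers

# Toy usage: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
labels, centers = kmeans(data, k=2)
```

If the random initial centers land poorly, the final clusters suffer, which is precisely the weakness that k-means++ initialization (and our GWO-based weight initialization for SOM) addresses.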

Self-Organizing Maps Clustering Technique
The mechanism by which Self-Organizing Maps (SOM) work is discussed in Section 3.1. For the purpose of our experiment, we use the implementation of SOM that is available in the MiniSom Python library [55]. Clustering using SOM requires setting six parameters: dim_x, dim_y, the number of features (input length), sigma, the learning rate, and the number of training epochs. For the purpose of tuning these parameters, the dataset was divided into three parts, training, validation, and testing, with 60%, 20%, and 20%, respectively. The first two parts were used to tune the parameters, while the last part was used to generate the final clustering results.
The first two parameters, dim_x and dim_y, define the dimension of the map. There are two methods to set the SOM's dimension. The first method is considered a rule of thumb, where the dimension is obtained from the number of features in the dataset using Equation (3).
On the other hand, the second method requires tuning the SOM to obtain the lowest consistent QE value. We used both methods and concluded that the first method, where the dimension is obtained from Equation (3), provided better clustering results. The third parameter is the number of features in the dataset, which is equivalent to the number of columns in the dataset's representation version. The sigma and learning rate have default values of 1 and 0.5, respectively. The final parameter represents the number of epochs for training the clustering model. For each dataset, the number of training epochs was tuned while fixing the other five parameters. The tested epoch counts were 10, 50, 100, 150, 200, 300, 400, 500, 1000, 2000, 5000, 7000, 8000, 9000, and 10,000. For each epoch count, we observed the relation between the SOM's dimension and the QE. Eventually, the number of epochs is selected when the QE reaches its lowest consistent value. Figure 5 shows the results of fine-tuning the number of epochs parameter.
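The epoch-tuning loop described above can be sketched as follows; `train_fn` is a placeholder for the actual SOM training routine (MiniSom's training in our experiments), and the dummy trainer below exists only to make the sketch runnable:

```python
import numpy as np

def quantization_error(weights, data):
    """Mean distance from each sample to its nearest map unit."""
    flat = weights.reshape(-1, weights.shape[-1])
    return float(np.linalg.norm(data[:, None, :] - flat[None, :, :],
                                axis=2).min(axis=1).mean())

def tune_epochs(train_fn, data, candidates=(10, 50, 100, 150, 200)):
    """Train a SOM for each candidate epoch count and keep the count
    yielding the lowest quantization error."""
    results = {n: quantization_error(train_fn(data, n), data) for n in candidates}
    return min(results, key=results.get), results

rng = np.random.default_rng(0)
data = rng.uniform(-1, 1, size=(30, 4))

def dummy_train(data, n_epochs):
    # Stand-in trainer: returns a random 3x3 map (a real run would train a SOM).
    return np.random.default_rng(n_epochs).uniform(-1, 1, size=(3, 3, 4))

best_epochs, results = tune_epochs(dummy_train, data)
```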

Grey Wolf-Optimized Self-Organizing Maps Clustering Technique
The mechanism we used to optimize the SOM is described in Section 4. The following are the steps we used for evaluating the optimized SOM. Note that these steps are executed on all datasets (MSA and NADA) with both representation methods (CV and TF-IDF), and 10-fold cross-validation is performed to validate the obtained results:
1. Set the parameters of the GWO as follows:
• Fitness function: we used the QE function provided by the MiniSom Python library as the fitness function for the Grey Wolf Optimization algorithm.
• Dimension: we set the dimension to be the product of the total number of corpus features and the selected SOM dimensions, dim_x and dim_y.
• Number of search agents: we chose the number of search agents to be five. Initial experiments with different numbers of search agents were performed, and the difference between their results was negligible.
2. Initiate the positions of the most powerful wolves: alpha, beta, and delta. The initial positions are set as a matrix of zeros.
3. Initiate random values for the search agents with the same GWO dimension.
4. Calculate the fitness value, on the validation data portion, using the QE provided by the MiniSom Python library.
5. With every fitness calculation, compare the result against the alpha, beta, and delta wolves. If the new fitness is less than the value of one of these wolves, update that wolf's value with the new fitness.
6. When a stopping criterion is met, whether completing the maximum number of iterations or having no improvement in fitness for five consecutive iterations, the GWO function terminates, and the optimized weights are passed to MiniSom to train the model.
7. Test the model using the testing data portion and report the results as shown in Table 4.
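These steps can be sketched in a compact, self-contained form. Note the hedges: the fitness below is a simple sphere function standing in for MiniSom's quantization error on the validation split; the dimension, bounds, and 50-iteration cap are illustrative, and the five-stagnant-iterations stopping rule is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Stand-in fitness: the paper uses MiniSom's quantization error
    # (QE) on the validation portion; a sphere function keeps this
    # sketch self-contained and quick to run.
    return float(np.sum(x ** 2))

def gwo(dim=4, n_agents=5, max_iter=50, lb=-1.0, ub=1.0):
    # Random initial positions for the search agents.
    wolves = rng.uniform(lb, ub, size=(n_agents, dim))
    # Alpha, beta, delta start as zero vectors with infinite fitness.
    leaders = [np.zeros(dim) for _ in range(3)]
    scores = [np.inf, np.inf, np.inf]
    for it in range(max_iter):
        for w in wolves:
            f = fitness(w)
            # If the new fitness beats alpha, beta, or delta, promote it.
            for i in range(3):
                if f < scores[i]:
                    scores.insert(i, f); scores.pop()
                    leaders.insert(i, w.copy()); leaders.pop()
                    break
        a = 2.0 - it * (2.0 / max_iter)  # exploration factor decays to 0
        for k in range(n_agents):
            new_pos = np.zeros(dim)
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[k])
                new_pos += leader - A * D
            # Move toward the average of the three leaders, clipped to bounds.
            wolves[k] = np.clip(new_pos / 3.0, lb, ub)
    return leaders[0], scores[0]  # alpha's position and fitness

best_pos, best_qe = gwo()
```

In the full pipeline, `best_pos` would be reshaped into the SOM weight matrix (dim_x x dim_y x features) and used to initialize MiniSom before training.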

Results and Discussion
In this section, we present the results of our experiments and discuss our research questions. Table 4 shows the overall results of the experiments executed as described in the previous subsections; note that we performed 10-fold cross-validation on the training and validation data. Each research question is discussed in more detail below.
To answer RQ1, we used four different metrics to evaluate the effectiveness of GWO-optimized SOM in comparison with K-Means clustering and traditional SOM. These metrics are the F1-score, precision, recall, and accuracy. As can be seen in Table 4, each of the clustering techniques is evaluated on both datasets, MSA and NADA, and with both representation methods. We also show our results in comparison with the K-Means clustering implementation in [19]. As highlighted in the table, optimizing the SOM using the GWO algorithm improved the clustering for both datasets with both representation methods. The results show a consistent improvement in all the effectiveness metrics used in our evaluation, indicating the effectiveness of the introduced model. Using GWO to initialize the SOM weights allows the model to explore a larger space of possible initializations, increasing the chance of finding a more suitable starting point that leads to better final clustering results.
To answer RQ2, we used the training time as a metric to evaluate the efficiency of GWO-optimized SOM in comparison with K-Means clustering and traditional SOM. As can be seen in Table 4, GWO-optimized SOM is faster to train than traditional SOM for both the MSA and NADA datasets and in both the CV and TF-IDF data representations. We also noticed that the training time for all the clustering approaches is significantly higher on the NADA corpus than on the MSA corpus, mainly due to the larger size of the NADA corpus. After further investigation of the training time of the SOM and optimized SOM techniques, we found that although the overhead of finding the best weights in the optimized SOM is high, the SOM training time after initializing the weights is significantly reduced, resulting in an overall shorter training time compared to traditional SOM. As can be seen in Table 5, initializing the weights accounts for about 30% of the overall running time of the GWO-optimized SOM.

To answer RQ3, we compared the two representations, TF-IDF and CV, using the same metrics as in RQ1 and RQ2. As can be seen in Table 4, the TF-IDF representation reported better effectiveness for both datasets in all the evaluated metrics. However, with the GWO-optimized SOM clustering technique, the difference in effectiveness between the two representations was negligible. When evaluating the efficiency of the two representation techniques, we found that TF-IDF representation is overall more efficient than CV representation.
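The two representations can be produced with scikit-learn as follows. A toy English corpus is used purely to show the mechanics; the paper's experiments use the Arabic MSA and NADA corpora:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Toy corpus (illustrative only).
docs = ["grey wolf optimization",
        "wolf pack hunting",
        "text clustering with som"]

cv = CountVectorizer().fit_transform(docs)      # CV: raw term counts
tfidf = TfidfVectorizer().fit_transform(docs)   # TF-IDF: rarity-weighted counts
# Both produce a documents-by-vocabulary sparse matrix of the same shape.
```

The matrices share the same vocabulary dimension; only the weighting differs, which is why either representation can feed the same clustering pipeline.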

Conclusions
In this paper, we introduced a new model for clustering Arabic text. The model combines SOM with the GWO algorithm, where GWO feeds the SOM with optimized weights during the initialization of the network. In our experiments, the model was tested on two datasets, MSA and NADA. To obtain an equitable clustering evaluation, several experiments were executed on the same datasets, including using SOM independently, SOM with optimization, and a different clustering technique (i.e., K-Means). After evaluating all the results, we concluded that our introduced model yields a significant improvement in clustering accuracy and training time. We hope these results advance the broader Arabic Natural Language Processing (ANLP) field by giving researchers insights into how to improve the clustering they might apply in their research. In the future, we plan to evaluate optimization techniques other than GWO and measure their impact on clustering Arabic texts. We also plan to gather new Arabic datasets encompassing a wider range of topics and subtopics. These additions will allow us to conduct more comprehensive evaluations, ultimately enriching the field of ANLP.

• Initialization: initialize a population of wolves representing potential solutions to the optimization problem. The number of wolves and their positions are generated randomly.
• Fitness evaluation: evaluate the fitness of each wolf in the population by applying the fitness function of the optimization problem.
• Alpha, beta, delta, and omega determination: identify the alpha, beta, and delta wolves based on the population's fitness values.
• Position updates: the beta and delta wolves adjust their positions based on their current position and the position of the alpha wolf, while the omega wolves explore new positions through more random movement in the solution space.
• Repeat the fitness evaluation: evaluate the fitness of the population again after the position updates and determine the new alpha, beta, delta, and omega wolves based on their new fitness values.
• Termination criteria: check whether the termination criteria are met. These might include reaching a maximum number of iterations or obtaining a sufficiently fit solution.
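For reference, the position-update step in the list above follows the standard GWO formulation. These equations come from the original GWO literature, not from this paper's text:

```latex
% Encircling coefficients (\vec{r}_1, \vec{r}_2 are random vectors in [0,1]^d;
% a decreases linearly from 2 to 0 over the iterations):
\vec{A} = 2a\,\vec{r}_1 - a, \qquad \vec{C} = 2\,\vec{r}_2
% Distance to a leader wolf p \in \{\alpha, \beta, \delta\}:
\vec{D}_p = \lvert \vec{C}\cdot\vec{X}_p(t) - \vec{X}(t) \rvert
% Candidate positions toward each leader, then their average:
\vec{X}_p' = \vec{X}_p(t) - \vec{A}\cdot\vec{D}_p, \qquad
\vec{X}(t+1) = \tfrac{1}{3}\left(\vec{X}_\alpha' + \vec{X}_\beta' + \vec{X}_\delta'\right)
```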

Figure 2. Using Grey Wolf Optimization to optimize Self-Organizing Maps.

• RQ1: What is the effectiveness of GWO-optimized SOM in clustering Arabic text compared to other models?
• RQ2: How efficient is GWO-optimized SOM in clustering Arabic text compared to other models?
• RQ3: What is the impact of data representation techniques on the effectiveness and efficiency of clustering Arabic text?

Figure 4. A sample Arabic text preprocessed using the tokenization and stemming steps.

• dim_x: the x dimension of the map.
• dim_y: the y dimension of the map.
• D: the number of features in the dataset.
• sig: the sigma value, representing the radius of the neighborhoods in the SOM that represent the different categories.
• n(t): the learning rate value.
• t: the number of iterations (epochs).
(Figure 5 panels: MSA with CV representation; NADA with TF-IDF representation.)

Figure 5. Results of fine-tuning the number of epochs parameter for Self-Organizing Maps.

Table 1. Arabic text clustering related work comparison.

Table 5. Breakdown of the GWO-optimized SOM running time (in minutes).