Robust Representation and Efficient Feature Selection Allows for Effective Clustering of SARS-CoV-2 Variants

The widespread availability of large amounts of genomic data on the SARS-CoV-2 virus, as a result of the COVID-19 pandemic, has created an opportunity for researchers to analyze the disease at a level of detail unlike any virus before it. On the one hand, this will help biologists, policy makers and other authorities to make timely and appropriate decisions to control the spread of the coronavirus. On the other hand, such studies will help to more effectively deal with any possible future pandemic. Since the SARS-CoV-2 virus contains different variants, each of them having different mutations, performing any analysis on such data becomes a difficult task. It is well known that much of the variation in the SARS-CoV-2 genome happens disproportionately in the spike region of the genome sequence -- the relatively short region which codes for the spike protein(s). Hence, in this paper, we propose an approach to cluster spike protein sequences in order to study the behavior of the different known variants, which are spreading at a very high rate throughout the world. We use a k-mers based approach to first generate a fixed-length feature vector representation for the spike sequences. We then show that with the appropriate feature selection, we can efficiently and effectively cluster the spike sequences based on the different variants. Using a publicly available set of SARS-CoV-2 spike sequences, we perform clustering of these sequences using both hard and soft clustering methods and show that with our feature selection methods, we can achieve higher F1 scores for the clusters.


Introduction
The virus that causes the COVID-19 disease is called the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) -- a virus whose genomic sequence is being replicated and dispersed across the globe at an extraordinary rate. The genomic sequences of a virus can be helpful to investigate outbreak dynamics such as spatiotemporal spread, the size variations of the epidemic over time, and transmission routes. Furthermore, genomic sequences can help design investigative analyses, drugs and vaccines, and monitor whether theoretical changes in their effectiveness over time might be due to changes in the viral genome. Analysis of SARS-CoV-2 genomes can therefore complement, enhance and support strategies to reduce the burden of COVID-19 [1].
SARS-CoV-2 is a single-stranded RNA-enveloped virus [2]. Its entire genome has been characterized by applying an RNA-based metagenomic next-generation sequencing method. The length of the genome is 29,881 bp (GenBank no. MN908947), encoding 9860 amino acids [3]. The gene fragments encode structural and nonstructural proteins. Structural proteins are encoded by the S, E, M and N genes, while the ORF region encodes nonstructural proteins [4] (see Figure 1).
A key factor involved in infection is the S protein on the surface of the virus [5]. The S protein of SARS-CoV-2 is similar to other coronaviruses and arbitrates receptor recognition, fusion, and cell attachment through viral infection [6][7][8]. The S protein has an essential role in viral infection that makes it a potential target for vaccine development, antibody-blocking therapy, and small molecule inhibitors [9]. Also, the spike region of the SARS-CoV-2 genome is involved in a disproportionate amount of the genomic variation, for its length [10] (see, e.g., Table 1). Therefore, mutations that affect the antigenicity of the S protein are of certain importance [11].
Generally, the genetic variations of a virus are grouped into clades, which can also be called subtypes, genotypes, or groups. To study the evolutionary dynamics of viruses, building phylogenetic trees out of sequences is common [12]. On the other hand, the number of available SARS-CoV-2 sequences is huge and still increasing [13] -- building trees on the millions of SARS-CoV-2 sequences would be very expensive and seems impractical. In such cases, machine learning approaches, which offer flexibility and scalability, could be useful [14]. Since natural clusters of the sequences are formed by the major clades, clustering methods can help us understand the complexity behind the spread of COVID-19 in terms of its variation. Also, considering the particular importance of the S protein, we focus on the amino acid (protein) sequences encoded by the spike region. In this way, we reduce the dimensionality of the data without losing too much information, reducing the time and storage space required and making visualization of the data easier [15].
To make use of machine learning approaches, we need to prepare the appropriate input -- numerical (real-valued) vectors -- that is compatible with these methods. This gives us the ability to perform meaningful analytics. As a result, these amino acid sequences should be converted into numerical representations in a way that preserves some of the sequential order information of the amino acids within each sequence. The most prevalent strategy in this area is one-hot encoding, due to its simplicity [10]. However, since we need to compute pairwise distances (e.g., Euclidean distance), the order preservation of one-hot encoding is not operational in this setting [16]. To preserve the order information of each sequence while remaining amenable to pairwise distance computation, k-mers (length k substrings of each sequence) are computed and given as input to the downstream classification/clustering tasks [16,17] (see Figure 2).
The proposed methods in this study are fast and efficient clustering methods for the spike amino acid sequences of SARS-CoV-2. We demonstrate that our method performs considerably better than the baseline methods, and that the variants are successfully separated into unique clusters with high F1 scores. The contributions of this paper are as follows: 1. For efficient sequence clustering, we propose a method based on k-mers, and show that the downstream clustering methods successfully cluster the variants with high F1 scores. 2. We perform experiments using different clustering algorithms and feature selection approaches and show the trade-off between clustering quality and runtime for these methods. 3. We use both hard and soft clustering approaches to study the behavior of the different coronavirus variants in detail.
The rest of the paper is organized as follows: Section 2 contains related work of our approach. Our proposed approach is detailed in Section 3. A description of the datasets used are given in Section 4. We provide a detailed discussion about the results in Section 5, and then we conclude our paper in Section 6.

Literature Review
Performing different data analytics tasks on sequences has been done successfully by different researchers previously [16,18]. However, most studies require the sequences to be aligned [10,19,20]. The aligned sequences are used to generate fixed-length numerical embeddings, which can then be used for tasks such as classification and clustering [16,21,22]. Since the dimensionality of the data is another problem when dealing with longer sequences, using approximate methods to compute the similarity between two sequences is a popular approach [17,23,24]. Fixed-length numerical embedding methods have been successfully used in the literature for other applications such as predicting missing values in graphs [25], text analytics [26][27][28], biology [17,23,29], graph analytics [30,31], classification of electroencephalography and electromyography sequences [32,33], detecting security attacks in networks [34], and electricity consumption in smart grids [35,36]. The conditional dependencies between variables are also important to study so that their importance can be analyzed in detail [37].
Due to the availability of large-scale sequence data for the SARS-CoV-2 virus, an accurate and effective clustering method is needed to further analyze this disease, so as to better understand the dynamics and diversity of this virus. To classify different coronavirus hosts, the authors in [10] suggest a one-hot encoding-based method that uses spike sequences alone. Their study reveals that excellent prediction accuracy can be achieved by considering just the spike portion of the genome sequence instead of the entire sequence. Using this idea and a kernel method, Ali et al. [16] accomplish higher accuracy than [10] in classifying different variants of SARS-CoV-2 in humans. Successful analysis of the different variants can lead to the design of efficient strategies for vaccine distribution [38][39][40][41].

Proposed Approach
In this section, we discuss our proposed algorithm in detail. We start with the description of k-mers generation from the spike sequences. We then describe how we generated the feature vector representation from the k-mers information. After that, we discuss different feature selection methods in detail. Finally, we detail how we applied clustering approaches on the final feature vector representation.

k-mers Generation
Given a spike sequence, the first step is to compute all possible k-mers. The total number of k-mers that we can generate for a spike sequence is N - k + 1, where N is the length of the spike sequence (N = 1274 for our dataset). The variable k is a user-defined parameter (we took k = 3 using the standard validation set approach [42]). For an example of how to generate k-mers, see Figure 2.
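The k-mer computation above can be sketched in a few lines of Python (an illustrative sketch, not the authors' implementation; the toy sequence below is hypothetical):

```python
def generate_kmers(sequence, k=3):
    """Return all overlapping k-mers of a sequence.

    A sequence of length N yields N - k + 1 k-mers.
    """
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

# Toy fragment (not a real spike sequence): N = 12, k = 3 -> 10 k-mers.
seq = "MFVFLVLLPLVS"
kmers = generate_kmers(seq, k=3)
print(len(kmers))   # 10
print(kmers[:3])    # ['MFV', 'FVF', 'VFL']
```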

Fixed-Length Feature Vector Generation
Since most Machine Learning (ML) models work with a fixed-length feature vector representation, we need to convert the k-mers information into such vectors. For this purpose, we generate a feature vector Φ_k for a given spike sequence a (i.e., Φ_k(a)). Given an alphabet Σ (the characters representing amino acids in the spike sequence), Φ_k(a) contains the count in a of each possible k-mer over Σ; more formally, the length of Φ_k(a) is |Σ|^k. Since we have 21 unique characters in Σ (namely ACDEFGHIKLMNPQRSTVWXY), the length of each frequency vector is 21^3 = 9261.
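As a sketch (our own illustrative code, assuming the 21-character alphabet above), the frequency vector can be built by indexing all |Σ|^k possible k-mers:

```python
from itertools import product

import numpy as np

SIGMA = "ACDEFGHIKLMNPQRSTVWXY"  # 21-character amino acid alphabet

def kmer_frequency_vector(sequence, k=3):
    """Map a sequence to a fixed-length vector of k-mer counts over Sigma^k."""
    index = {"".join(p): i for i, p in enumerate(product(SIGMA, repeat=k))}
    phi = np.zeros(len(SIGMA) ** k)
    for i in range(len(sequence) - k + 1):
        phi[index[sequence[i:i + k]]] += 1
    return phi

phi = kmer_frequency_vector("MFVFLVLLPLVS")  # toy fragment, 10 k-mers
print(phi.shape)       # (9261,) = 21^3
print(int(phi.sum()))  # 10, one count per k-mer of the sequence
```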

Low Dimensional Representation
Since the dimensionality of data is high after getting the fixed length feature vector representation, we apply different supervised and unsupervised methods to obtain a low dimensional representation of data to avoid the problem of the curse of dimensionality [35,43]. Each of the methods for obtaining a low dimensional representation of data is discussed below:

Random Fourier Features
The first method that we use is an approximate kernel method called Random Fourier Features (RFF) [44]. It is an unsupervised approach, which maps the input data to a randomized low dimensional feature space (a Euclidean inner product space) to get an approximate representation of the data in D dimensions rather than the original d dimensions. More formally, we construct a map z : R^d -> R^D. In this way, we approximate the inner product between a pair of transformed points: φ(x)^T φ(y) ≈ z(x)^T z(y) (4). In Equation (4), z is low dimensional (unlike the lifting φ). Now, z acts as the approximate low dimensional embedding for the original data. We can use z as an input for different ML tasks like clustering and classification.
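A minimal numpy sketch of the RFF map of [44] for the Gaussian (RBF) kernel follows (our own illustrative code; the dimensions and the gamma value are arbitrary choices for the demo):

```python
import numpy as np

def rff_transform(X, D, gamma, rng):
    """Random Fourier Features: build z(x) such that z(x) . z(y)
    approximates the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=D)                  # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 50))                 # 5 points in d = 50 dimensions
Z = rff_transform(X, D=2000, gamma=0.01, rng=rng)
exact = np.exp(-0.01 * np.sum((X[0] - X[1]) ** 2))
approx = Z[0] @ Z[1]
print(abs(exact - approx))                   # small approximation error
```

The approximation error shrinks as D grows, which is what makes z a usable low dimensional stand-in for the exact kernel.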

Least Absolute Shrinkage and Selection Operator (Lasso) Regression
Lasso regression is a supervised method that can be used for efficient feature selection. It is a regularized variant of linear regression -- a specific case of penalized least squares regression with an L1 penalty function. By combining the good qualities of ridge regression [45,46] and subset selection, Lasso can improve both model interpretability and prediction accuracy [47]. Lasso regression tries to minimize the following objective function: min(sum of squared residuals + α × |slope|) (5), where α × |slope| is the penalty term. In Lasso regression, we take the absolute value of the slope in the penalty term rather than its square (as in ridge regression [46]). This can shrink the coefficients of useless variables to exactly zero.
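A toy sketch of Lasso-based selection with scikit-learn (synthetic data; not the authors' exact setup, where the supervision would come from the variant labels):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# 100 samples, 50 features; only features 0 and 1 influence the response.
X = rng.normal(size=(100, 50))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=100)

lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)  # features with nonzero coefficients
print(selected)                         # should include 0 and 1
```

The L1 penalty drives the coefficients of the 48 noise features to (or very near) zero, so `selected` acts as the chosen feature subset.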

Boruta
The last feature selection method that we use is Boruta. It is a supervised method that is built around the random forest (RF) classification algorithm. It works by creating shadow features so that the features do not compete among themselves but rather compete with randomized versions of themselves [48]. It captures non-linear relationships and interactions using the RF algorithm. It then extracts the importance of each feature (with respect to the class label) and only keeps the features that are above a specific threshold of importance. The threshold is defined as the highest feature importance recorded among the shadow features.

Clustering Methods
In this paper, we use five different clustering methods (both hard and soft clustering approaches), namely k-means [49], k-modes [50], fuzzy c-means [51,52], agglomerative hierarchical clustering, and hierarchical density-based spatial clustering of applications with noise (HDBSCAN) [53,54] (note that this last one is a soft clustering approach). For k-means and k-modes, default parameters are used. For fuzzy c-means, the clustering criterion used to aggregate subsets is a generalized least-squares objective function. For hierarchical clustering, a bottom-up approach is applied, which is known as the agglomerative method. Since the bottom-up procedure can start from a point in the middle of the hierarchy, and the lower part of the hierarchy can be built by a less expensive method such as partitional clustering, it can reduce the computational cost [55].
HDBSCAN takes density-based spatial clustering of applications with noise (DBSCAN), converts it into a hierarchical clustering algorithm, and then extracts a flat clustering based on the stability of clusters. HDBSCAN is robust to parameter choice and can discover clusters of differing densities (unlike DBSCAN) [54].
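For illustration, a hard-clustering run with scikit-learn's k-means on toy stand-in data (our own sketch; in the paper the input would be the k-mer feature vectors):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Two well-separated synthetic groups standing in for two variants.
X = np.vstack([rng.normal(0.0, 0.5, size=(50, 20)),
               rng.normal(5.0, 0.5, size=(50, 20))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
# Each synthetic group should map to a single cluster ID.
print(len(set(labels[:50])), len(set(labels[50:])))  # 1 1
```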

Optimal number of Clusters
We determined the optimal number of clusters using the elbow method [56]. We fit the model with the number of clusters K ranging from 2 to 14. As a quality measure, 'distortion' is used, which measures the sum of squared distances from each point to its assigned center. Figure 3 shows the distortion score for several values of K. We also plot the training runtime (in seconds) to see the trade-off between distortion score and runtime. We use the "knee point detection algorithm (KPDA)" [56] to determine the optimal number of clusters. Note that based on the results shown in Figure 3, the optimal number of clusters is 4. However, we choose K = 5 for all hard clustering approaches because we have five different variants in our data (see Table 1). The KPDA chose four as the best initial number of clusters because the Beta variant is not well-represented in the data (see Table 1). To give the Beta variant a fair chance to form its own cluster, we choose 5 as the number of clusters.
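The elbow computation can be sketched as follows (a toy example with four well-separated 2D clusters; the knee heuristic below is a simplified stand-in for the KPDA of [56]):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Four clearly separated synthetic clusters in 2D.
centers = np.array([[0, 0], [10, 0], [0, 10], [10, 10]])
X = np.vstack([c + rng.normal(0, 0.5, size=(50, 2)) for c in centers])

Ks = list(range(2, 10))
distortions = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
               for k in Ks]

# Simplified knee heuristic: pick the K where the distortion curve
# bends the most (largest second difference).
knee = Ks[int(np.argmax(np.diff(distortions, n=2))) + 1]
print(knee)  # 4
```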

Experimental Setup
In this section, we first provide information associated with the dataset. Then, with the help of t-distributed stochastic neighbor embedding (t-SNE) [57], we reduce the dimensionality of the data (capturing non-linear relationships) to find any natural hidden clustering in the data. This data analysis step helps us obtain basic knowledge about the different variants. As a baseline, we use the k-mers based frequency vectors without applying any feature selection to perform clustering using the k-means, k-modes, fuzzy c-means, hierarchical, and HDBSCAN algorithms. The weighted F1 score is used to measure the quality of the clustering algorithms in the different experimental settings. All experiments are performed on a Core i5 system with a 2.4 GHz processor and 32GB of memory, running the Windows operating system. The algorithms are implemented in Python, and the code is available online 1 . Our pre-processed data is also available online 2 , which can be used after agreeing to the terms and conditions of GISAID 3 . The code for HDBSCAN is taken from [54]. The code for fuzzy c-means is also available online 4 .

Dataset Statistics
Our dataset consists of the (aligned) amino acid sequences (spike region only) of the SARS-CoV-2 proteome. The dataset is publicly available on the GISAID website 5 , which is the largest known database of SARS-CoV-2 sequences. Table 1 shows more information related to the dataset. It contains the five most common variants, namely Alpha, Beta, Delta, Gamma, and Epsilon. The fourth column of Table 1 shows the number of mutations that occurred in the spike protein over the total number of mutations (in the whole genome) for each variant; e.g., for the Alpha variant, there are 17 mutations in the whole genome, and 8 of those 17 are in the spike region.

Data Visualization
By using the t-SNE approach, we plotted the data as 2D real vectors to find any hidden clustering in the data. Figure 4 (a) shows the t-SNE plot for the GISAID dataset (before applying any feature selection). We can clearly see that the different variants are scattered everywhere. Even though we cannot see clear separate clusters for each of the variants, small clusters are visible for the different variants. This evaluation reveals that applying any clustering algorithm directly will not give us good results, and that some data preprocessing is crucial for clustering the variants efficiently.
Visualizing the GISAID dataset using t-SNE after applying the three different feature selection methods reveals clearer clusters. In Figure 4 (b)(c)(d), we apply the different feature selection methods, namely Boruta, Lasso, and RFF, respectively. We can observe that the clustering is purer for Boruta and Lasso regression but not for RFF. This behavior shows that the supervised methods (Lasso regression and Boruta) are able to preserve the patterns in the data more effectively than the unsupervised RFF.
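A minimal t-SNE projection call looks as follows (a sketch with random stand-in data; in the paper the input is the k-mer feature vectors, before or after feature selection):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 30))  # stand-in for the feature vectors

# Non-linear projection to 2D for visual inspection of cluster structure.
emb = TSNE(n_components=2, perplexity=10.0, random_state=0).fit_transform(X)
print(emb.shape)  # (60, 2)
```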

Results and Discussion
In this section, we report the results for all clustering approaches without and with the feature selection methods. We use the weighted F1 score to compute the goodness of a clustering. Since we do not have labels available for the clusters, we label each cluster using the variant that contributes most of the sequences in that cluster (e.g., we give the label 'Alpha' to a cluster if most of its sequences belong to the Alpha variant). We then calculate the weighted F1 score for each cluster individually using these assigned labels.
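The majority-label assignment and weighted F1 computation can be sketched as follows (a toy example using scikit-learn's `f1_score`; the labels are made up):

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy true variant labels and hard-cluster assignments.
variants = np.array(["Alpha", "Alpha", "Alpha", "Delta", "Delta", "Gamma"])
clusters = np.array([0, 0, 1, 1, 1, 2])

# Label each cluster with the majority variant among its members.
cluster_label = {}
for c in np.unique(clusters):
    members, counts = np.unique(variants[clusters == c], return_counts=True)
    cluster_label[c] = members[np.argmax(counts)]

predicted = np.array([cluster_label[c] for c in clusters])
score = f1_score(variants, predicted, average="weighted")
print(round(score, 4))  # 0.8333
```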
For the different methods, the weighted F1 scores are provided in Table 2. Note that we do not report F1 scores for HDBSCAN since it is an overlapping clustering approach. From the results, we can observe that Lasso regression is more consistent than Boruta at efficiently clustering all variants. One interesting pattern is the pure clusters for some variants in the case of RFF. It shows that RFF is able to cluster some variants very efficiently; however, it fails to generalize over all variants. In terms of the different clustering methods, k-means and k-modes perform better and generalize more over all variants than the other clustering methods.

Contingency Tables
After evaluating the clustering methods using weighted F1 scores, we compute the contingency tables of variants versus clusters for the different clustering approaches. The contingency tables for the different clustering methods and feature selection approaches are given in Table 3 to Table 10. In Table 3, we can observe that k-modes without any feature selection outperforms k-means and also the other two clustering algorithms from Table 4. In Table 5 and Table 6, we can observe that RFF gives pure clusters for some of the variants but performs poorly on the others. For Lasso regression, in Table 7 and Table 8, we can observe that the clusters start to become pure immediately when we apply it. This shows the effectiveness of this feature selection method for the clustering of spike sequences. Similarly, in Table 9 and Table 10, we can see that Boruta does not give many pure clusters (apart from k-modes). This shows that Boruta fails to generalize over different clustering approaches and different variants.
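Such contingency tables can be produced directly with pandas (a toy sketch with made-up counts, not the paper's actual tables):

```python
import pandas as pd

# Toy variant labels and hard-cluster assignments.
df = pd.DataFrame({
    "variant": ["Alpha", "Alpha", "Delta", "Delta", "Gamma", "Gamma"],
    "cluster": [0, 0, 1, 1, 1, 2],
})

# Rows: variants, columns: cluster IDs, cells: sequence counts.
table = pd.crosstab(df["variant"], df["cluster"])
print(table)
```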

HDBSCAN Clustering
After analyzing the hard clustering algorithms, we evaluate the performance of the soft clustering approach (HDBSCAN) in this section. To evaluate HDBSCAN, we use the t-SNE approach to plot the original variants from our data and compare them with the clusters we obtained after applying HDBSCAN.

              K-means (Cluster IDs)                K-modes (Cluster IDs)
Variant    0      1     2     3     4          0      1      2      3     4
Alpha      8762   86    2925  680   1513       11403  7      184    1823  549
Beta       601    33    626   172   295        6      6      640    1060  15
Epsilon    7848   187   3155  638   956        1      0      11170  947   666
Delta      2605   30    1342  868   2706       0      0      2894   690   3967
Gamma      22140  50    3016  741   682        6      25428  6      1128  61

Since HDBSCAN is a soft clustering approach (overlapping allowed), a large number of clusters were inferred for the different feature selection methods. Therefore, we use t-SNE to plot the clusters and visually observe the patterns before and after clustering. Figure 5 shows the comparison between the t-SNE plot on the original data and the t-SNE plots for the clustering results after applying HDBSCAN. Since overlapping is allowed in this setting, we cannot see any pure clusters as compared to the original t-SNE plot. An interesting finding from this result is that not all sequences corresponding to a specific variant are similar to each other. This means that a small cluster of sequences that initially belong to a certain variant can form another subgroup, which could eventually lead to the development of a new variant. Therefore, using such an overlapping clustering approach, we can visually observe whether a group of sequences is diverging from its parent variant. Biologists and other decision-making authorities can then take relevant measures to deal with such scenarios. The t-SNE plots for the different feature selection methods are given in Figure 6.

Runtime Comparison
After applying the different clustering methods and feature selection algorithms on the spike sequences, we observe that k-means and k-modes perform better than the other clustering methods in terms of weighted F1 score. However, it is also important to study the runtime of these clustering approaches so that we can evaluate the trade-off between F1 score and runtime. For this purpose, we compute the runtime of the different clustering algorithms without and with the feature selection methods. Figure 7 shows the runtime for all five clustering methods without applying any feature selection on the data. We can observe that k-modes is very expensive in terms of runtime, while k-means takes the least amount of time to execute. Similar behavior is observed in Figure 8, Figure 9, and Figure 10 for RFF, Boruta, and Lasso regression, respectively. This shows that although k-modes performs better in terms of F1 score, it is an outlier in terms of runtime. It also shows the effectiveness of the k-means algorithm both in terms of F1 score and runtime.

Conclusion
We propose a feature vector representation and a set of feature selection methods to eliminate the less important features, allowing many different clustering methods to successfully cluster SARS-CoV-2 spike protein sequences with high F1 scores. We show that runtime is also an important factor when clustering the coronavirus spike sequences. The k-means algorithm is able to generalize over all variants in terms of producing pure clusters while also consuming the least amount of runtime. One possible future direction is to use more data for the analysis. Testing additional clustering methods could be another direction. Using deep learning on even bigger data could give us some interesting insights. Another interesting extension is to compute other feature vector representations, e.g., based on minimizers, which can be computed without the need for aligning the sequences. This would allow us to use all of this clustering machinery to study unaligned (even unassembled) sequencing reads of intra-host viral populations, to unveil the interesting dynamics at this scale.