AC2: An Efficient Protein Sequence Compression Tool Using Artificial Neural Networks and Cache-Hash Models

Recently, the scientific community has witnessed a substantial increase in the generation of protein sequence data, triggering emergent challenges of increasing importance, namely efficient storage and improved data analysis. For both applications, data compression is a straightforward solution. However, in the literature, the number of specific protein sequence compressors is relatively low. Moreover, these specialized compressors marginally improve the compression ratio over the best general-purpose compressors. In this paper, we present AC2, a new lossless data compressor for protein (or amino acid) sequences. AC2 uses a neural network to mix experts with a stacked generalization approach and individual cache-hash memory models for the highest context orders. Compared to the previous compressor (AC), we show gains of 2–9% and 6–7% in reference-free and reference-based modes, respectively. These gains come at the cost of three times slower computations. AC2 also improves memory usage over AC, with requirements about seven times lower that are unaffected by the input sequence size. As an analysis application, we use AC2 to measure the similarity between each SARS-CoV-2 protein sequence and each viral protein sequence from the whole UniProt database. The results consistently show higher similarity to the pangolin coronavirus, followed by the bat and human coronaviruses, contributing critical results to a currently controversial subject. AC2 is available for free download under the GPLv3 license.


Introduction
One of the most demanding challenges in data compression is the lossless compression of protein (or amino acid) sequences. These sequences originate from the gene expression process, which goes from DNA to RNA to make a functional product: a protein.
The first phase is transcription, where the information in every cell's DNA, possibly noncontiguous, is converted into small, portable RNA messages. Symbolically, of the 4-symbol DNA alphabet (A, C, G, T), only the T symbol is transcribed into U. The second phase is translation, where each RNA triplet is encoded into one of the twenty possible amino acids. Here, it is essential to remember that different triplets can encode the same amino acid and, hence, translation is a lossy encoding process. Finally, a specific chain or set of chains of amino acids establishes a protein.
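As a sketch of the two phases just described, the following heavily simplified Python snippet (using only a handful of the 64 real codons, chosen here for illustration) shows why translation is many-to-one and therefore lossy:

```python
# Hypothetical, simplified illustration of transcription and translation.
# The real genetic code maps 64 codons to 20 amino acids plus stop signals,
# so several codons collapse into the same amino acid (a lossy mapping).
CODON_TABLE = {
    "AUG": "M",                                    # methionine (start)
    "UUU": "F", "UUC": "F",                        # two codons, same amino acid
    "GGU": "G", "GGC": "G", "GGA": "G", "GGG": "G",
    "UAA": "*", "UAG": "*", "UGA": "*",            # stop codons
}

def transcribe(dna: str) -> str:
    """Phase 1: DNA -> RNA (symbolically, T becomes U)."""
    return dna.replace("T", "U")

def translate(rna: str) -> str:
    """Phase 2: RNA triplets -> amino acids (stops at a stop codon)."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODON_TABLE.get(rna[i:i + 3], "X")    # X marks an unknown codon
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate(transcribe("ATGTTTGGATAA")))       # -> "MFG"
```

Because the codon-to-amino-acid mapping cannot be inverted uniquely, a protein sequence carries strictly less information than the DNA that produced it.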
Although proteins have a three-dimensional (3D) structure that reshapes over time, they are usually represented in FASTA files as a (static) 1D string of characters. Therefore, structural correlations between similar parts need to be modeled to extend compression gains. These involve the beta-pleated sheets, alpha-helices, side-chain interactions, and possible combinations of multiple amino acid chains [1]. Recent developments in protein folding have shown that highly accurate structure prediction can be achieved from the DNA source [2].
However, for coding applications, time constraints and the use of side information provide additional challenges. Extra challenges also arise when modeling protein sequences without information about the DNA sequence (which is the case in this article) and, hence, relying on the product of a process that may contain errors and imprecision at several phases. Therefore, data compressors need to consider protein sequences containing extra symbols that represent ambiguity, error, indecision, or incompleteness [3].
The main purpose of data compression is to reduce storage and increase transmission efficiency, specifically requiring high speed and efficient data compressors [4]. A growing application for data compression is data analysis, for example in Bioinformatics [5]. In this case, the main focus is on the maximum compression ratio, although the improvement in data compressors' speed is valuable, particularly for large-scale data analysis [6]. As an analysis tool in the genomics and proteomics fields, data compression has been used in many applications [7], for example, to estimate the Kolmogorov complexity of sequences [8,9], classification [10,11], phylogenomic and phylogenetic analysis [12,13], information retrieval [14], variation and rearrangement detection [15,16], structural analysis [17,18], pan-genome analysis [19], metagenomics [20], detection of DNA-binding proteins [21], and domain composition studies [22,23].
In the literature, the use of specialized protein sequence compressors is low compared with that of DNA sequence compressors, mainly because specialized programs, similar to those that model the inverted repeats in DNA, are much harder to design, given the substantially higher uncertainty and fewer specific sequence patterns [24]. Therefore, high-ratio general-purpose data compressors perform very close to the specialized category.
In the specialized category, ProtComp [25] exploits approximate repeats and uses a hybrid method combining a substitution approach using Huffman coding and a first-order Markov model with arithmetic coding. ProtCompSecS [26] adds to ProtComp a dictionary-based method to encode the secondary information related to proteins. The algorithm presented in [27] uses the Burrows-Wheeler transform and the sorted common prefix combined with substitutions to exploit long-range sequence correlations.
In 2007, Benedetto et al. showed that models considering short- and medium-range correlations were more likely to achieve higher compression rates [28]. This characteristic was applied in XM through the combination of short- and medium-range expert models, namely repeat and context models [29]. A fusion of dictionary and sequence alignment methods for the compression of protein databases was proposed in CaBLASTP [30]. This algorithm searches for solid sequence alignments and, when one exists, stores an index instead of the sequence. In [22], a heuristic approach was proposed to transform a hypergraph representing the proteome into a minimum spanning hypertree. In 2017, CAD [31] was proposed, relying on an adaptive dictionary with Huffman coding.
More recently, the challenge of protein sequence compression has been revisited, namely with the proposals of AC, NAF, and CoMSA. Specifically, the AC tool [9,32] uses an ensemble of Markov models (finite context and substitution tolerant context models) with adaptive weights per model and an arithmetic encoder. The NAF tool [33] uses a 4-bit encoding followed by compression with the general-purpose compressor zstd (https://github.com/facebook/zstd, accessed on 23 April 2021). The challenge of compressing aligned data gained momentum with CoMSA [34], a compression tool using a generalization of the positional Burrows-Wheeler transform for non-binary alphabets. In the natural sequence domain, interesting approaches using prediction-based compression through the decoupling of the context model from the frequency model have been proposed [35].
In this article, we describe AC2, an evolution of the AC compressor. In contrast to AC, AC2 uses a neural network to mix the experts and memory caches for the models with high context orders. Specifically, AC2 takes a meta-learning approach to the mixture of experts [36]. We use a neural network whose inputs are the probabilities of each amino acid given by each Markov model. As additional inputs, we derive other features to improve the accuracy of the network. As outputs, the network uses one node per amino acid.
Each output node gives the final probability for the corresponding amino acid. We use a small multilayer perceptron for the neural network, which is trained online for each new symbol in the sequence. Moreover, to reduce the model's memory, specifically in the compression of protein sequence collections, we develop cache-hashes for the highest-order context models.
The main contributions of this paper can be summarized by the following points:

The next sections of this article present the implementation details of AC2; comprehensive benchmark results, including several protein sequences with different characteristics and state-of-the-art compressors; and compression-based analysis examples employing the AC2 compressor.

Methods
This section presents the details of the AC2 compressor methodology. AC2 uses the same models as AC, namely a combination of context models and substitution tolerant context models of several order depths. The usage of substitution tolerant models in biological sequences is crucial because they provide a solid improvement factor over high-ratio general-purpose data compressors and, hence, can be considered models of a specific biological nature [37,38].
The most significant developments of AC2 are a neural network to augment the expert mixing and an individual memory cache-hash for the models with the highest context orders. These implementations allow AC2 to improve the compression ratio while reducing the memory usage substantially. In the following subsections, we provide details on the neural network, cache-hash and counter precision, and parameters used.

Selecting a Neural Network Type
In this subsection, we review the literature on selecting suitable artificial neural networks for sequence prediction, namely, to incorporate an appropriate network into the proposed data compressor. The computational resources required and the feasibility of integrating the network into the data compressor are essential considerations because we are concerned with both the network's accuracy and its efficiency.
Although we searched explicitly for literature comparing different network architectures in protein sequence compression, we were unable to find a single work. Nevertheless, we found an analogous work in DNA sequence compression [39]. That work describes a compressor based on neural networks and compares two types of recurrent neural networks (RNNs) and a multilayer perceptron (MLP). Overall, the RNNs provide the best compression rate, although there is a dependency on the dataset. Moreover, this method uses a top-of-the-line GPU and consumes several hours to compress 10 MB sequences. Because performing compression-based analysis on an extensive protein sequence dataset (gigabytes in length) is one of the applications of our work, speed is critical, which limits the usage of this methodology. We also found benchmarks of neural networks applied to time series prediction problems; these seem to fit the stochastic nature of the issue we are analyzing [40].
In terms of computational resources, we intuitively deduce that the multilayer perceptron (MLP) would have the best performance because of its elementary nature; this is supported by [41], where we can notice that even with large networks, the MLP is the fastest network by approximately 50%, and the convolutional is the second fastest, followed by the recurrent networks.
In terms of accuracy, the performance appears to be very dependent on the dataset. In [42], a multilayer perceptron (MLP), a convolutional neural network (CNN), a recurrent neural network (RNN), and a long short-term memory (LSTM) network are used to predict stock market values. The results favor the CNN followed by the MLP, with the LSTM and the RNN trailing behind. In diverse datasets, the MLP is superior to the CNN. In [41], an MLP, a CNN, and an LSTM are compared using several datasets. Overall, the CNN performs best, followed by the LSTM and then the MLP. As in the previous comparison, no single network achieves the best result in all datasets. In [43], the authors compare an LSTM to an MLP and conclude that the MLP performs equal to or better than the LSTM. In [44], several neural networks and datasets are compared, among them various types of CNNs and an MLP. Two types of CNNs, the residual neural network (ResNet) [45] and the fully convolutional neural network (FCN) [46], provide the best overall accuracy, with the MLP placing fourth out of the nine evaluated networks. In [47], a hybrid network combining a CNN and an LSTM is used to predict power consumption, stock values, and gas concentrations. In some datasets, the CNN has better predictions than the LSTM; the proposed hybrid approach presents better predictions in all three datasets. In [48], two neuro-fuzzy networks are compared with an MLP to model reference evapotranspiration. The neuro-fuzzy networks displayed higher accuracy than the MLP. Finally, in [49], we see a comparison of a neuro-like structure with a sequential geometric transformations model (SGTM) [50] and an MLP. The SGTM has better accuracy and spends less time during the training phase.
No network appears to be the best in all datasets, as anticipated by the results of the no free lunch theorem [51].

Neural Network Architecture
Based on the above literature review, we elected to use the MLP, which offers accurate and efficient predictions [42]. Specifically, this network is one of the most resource-efficient neural networks in execution time and memory usage. Furthermore, it has demonstrated high performance in other tasks also involving biological sequences [52] and is straightforward to implement and validate.
Specifically, the network has a single hidden layer, where all nodes have the sigmoid activation function. The output layer uses the softmax activation, ensuring that all output nodes sum to one, and the loss function is the cross-entropy. The network has two bias nodes, one in the input layer and one in the hidden layer. The weights are set according to the Xavier initialization [53]. Figure 1 depicts a high-level overview of the mixer architecture, including its inputs. The inputs to the network consist of the outputs of the context models and substitution tolerant context models and the output of the mixing done in the AC compressor. Therefore, we do not substitute the mixing done in AC; instead, we augment it. In other words, the mixing performed in AC is considered as another model. We transform these probabilities by subtracting 1/n, where n is the number of unique symbols in the sequence. After the subtraction, we multiply the result by five in the case of a model and by ten in the case of the AC mixer output. These types of transformations and their motivation are explained in [54].
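To make the input construction and forward pass concrete, the following sketch applies the transformation described above and runs a single-hidden-layer network with sigmoid activations and a softmax output. All names and layer sizes are illustrative; the actual AC2 implementation differs in detail:

```python
import numpy as np

def transform_inputs(model_probs, mixer_probs, n_symbols):
    """Center each probability around zero and scale it, as described above:
    subtract 1/n, then multiply by 5 (model) or 10 (AC mixer output)."""
    scaled = [(np.asarray(p, dtype=float) - 1.0 / n_symbols) * 5.0
              for p in model_probs]
    scaled.append((np.asarray(mixer_probs, dtype=float) - 1.0 / n_symbols) * 10.0)
    return np.concatenate(scaled)

def mlp_forward(x, W1, b1, W2, b2):
    """Single hidden layer (sigmoid) followed by a softmax output layer."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))   # hidden activations
    z = W2 @ h + b2
    e = np.exp(z - z.max())                    # numerically stable softmax
    return e / e.sum(), h
```

Note that with uniform model probabilities every transformed input is exactly zero, so the network's initial prediction depends only on its biases.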

Neural Network Inputs
Moreover, we use derived features with the mean symbol frequencies for the last 8, 16, and 64 symbols; these are also multiplied by five. Finally, we use an exponential moving average for all symbols, such that when symbol i occurs, its average is updated according to avg_i ← γ · avg_i + (1 − γ). If symbol i does not occur, then the update rule is avg_i ← γ · avg_i, where γ ∈ (0, 1) is the decay coefficient of the moving average.
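A minimal sketch of these derived features could look as follows; the decay coefficient and the scaling shown are illustrative stand-ins for AC2's empirically set parameters:

```python
from collections import deque

class DerivedFeatures:
    """Illustrative derived inputs: mean symbol frequencies over the last
    8, 16, and 64 symbols, plus a per-symbol exponential moving average
    with decay coefficient gamma."""
    def __init__(self, n_symbols, gamma=0.95, windows=(8, 16, 64)):
        self.n = n_symbols
        self.gamma = gamma
        self.histories = {w: deque(maxlen=w) for w in windows}
        self.ema = [1.0 / n_symbols] * n_symbols   # start from a uniform average

    def update(self, symbol):
        for hist in self.histories.values():
            hist.append(symbol)
        for i in range(self.n):
            if i == symbol:                        # symbol i occurred
                self.ema[i] = self.gamma * self.ema[i] + (1.0 - self.gamma)
            else:                                  # symbol i did not occur
                self.ema[i] = self.gamma * self.ema[i]

    def window_freqs(self, w):
        """Mean symbol frequencies over the last w symbols, scaled by five."""
        hist = self.histories[w]
        if not hist:
            return [0.0] * self.n
        return [5.0 * list(hist).count(i) / len(hist) for i in range(self.n)]
```

Since the two update rules together redistribute exactly (1 − γ) of the mass toward the observed symbol, the averages always sum to one, so they behave like a smoothed probability estimate.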

Neural Network Outputs and Training
For the outputs, we use one node per amino acid, each giving the probability for that symbol. These are the values passed to the arithmetic encoder. The network is trained online for each new symbol: the training vector is filled with zeroes except for the position corresponding to the symbol that occurred, which has a value of one. The training algorithm is stochastic gradient descent without momentum [55].
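The online training step can be sketched as follows; this is a generic softmax/cross-entropy SGD update rather than the exact AC2 code, and the learning rate shown is illustrative:

```python
import numpy as np

def train_step(x, target_idx, W1, b1, W2, b2, lr=0.05):
    """One online SGD step (no momentum) on the cross-entropy loss.
    The target vector is all zeros except a one at the observed symbol."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))   # forward: sigmoid hidden layer
    z = W2 @ h + b2
    e = np.exp(z - z.max())
    p = e / e.sum()                            # forward: softmax probabilities
    y = np.zeros_like(p)
    y[target_idx] = 1.0                        # one-hot training vector
    dz = p - y                                 # softmax + cross-entropy gradient
    dh = (W2.T @ dz) * h * (1.0 - h)           # backprop through the sigmoid
    W2 -= lr * np.outer(dz, h); b2 -= lr * dz  # update output layer
    W1 -= lr * np.outer(dh, x); b1 -= lr * dh  # update hidden layer
    return p                                   # probabilities before the update
```

Because the probabilities are emitted before each update, compressor and decompressor stay synchronized: both see the same prediction, then both apply the same training step on the symbol that occurred.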

Neural Network Pre-Training and Selection Heuristic
We noticed that the neural network initialized with random values had less accurate predictions at the beginning of the sequence. We improved the network by pre-training it: we activate the same symbol in all models with a value of one and use that same symbol for training. Essentially, this forces the bias that if all models agree on the same symbol with absolute certainty, then the output is forced to be that symbol.
Additionally, we added a heuristic that selects between the AC mixer and the neural network output; the best-performing mixer is used. This choice is determined by an exponential moving average of the number of estimated bits produced by the two mixers.
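A possible form of this selection heuristic, assuming the moving average is taken over the estimated code lengths −log2(p) of each mixer (the exact bookkeeping in AC2 may differ), is:

```python
import math

class MixerSelector:
    """Track an exponential moving average of the estimated bits (-log2 p)
    produced by each mixer and route output through the cheaper one."""
    def __init__(self, gamma=0.99):
        self.gamma = gamma
        self.bits = {"ac": 0.0, "nn": 0.0}

    def update(self, p_ac, p_nn):
        # p_ac / p_nn: probability each mixer assigned to the symbol that occurred
        for name, p in (("ac", p_ac), ("nn", p_nn)):
            est = -math.log2(p)                     # estimated code length in bits
            self.bits[name] = self.gamma * self.bits[name] + (1 - self.gamma) * est

    def best(self):
        """The mixer currently producing fewer estimated bits per symbol."""
        return "ac" if self.bits["ac"] <= self.bits["nn"] else "nn"
```

The decay coefficient controls the lag discussed in the Results section: a smaller value reacts faster (helping small sequences) at the cost of noisier switching on large ones.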

Cache-Hash and Counter Precision
One of the significant limitations of AC is the substantial increase in RAM required by the combination of high context orders and large sequence sizes. For example, when the sequences to compress are larger than, say, 200 MB and AC uses context orders higher than 7, the RAM increases to values that regular laptops cannot support. To resolve this issue, AC2 uses cache-hash memory models.
A cache-hash [56] stores in memory only the latest entries, up to a certain number of hash collisions. This model enables the use of deep context orders with very sparse representations. If AC2 stored its entries in a table, this would require |A|^(k+1) entries, where |A| is the size of the protein sequence alphabet and k the context order of the model; assuming counters of 8 bits, k = 10, and an |A| of 20, this would require 186 TB of RAM. For small sequences, a linear hash would be feasible, depending on the available RAM. However, for large sequences, this becomes unfeasible.
Therefore, AC2 uses a cache-hash for each high context order model to remove space constraints. AC2 uses a parameter that represents the maximum number of collisions, enabling a constant maximum peak of RAM. Moreover, we reduced the size of the model counters: AC2 now uses two bits per symbol, unlike the eight bits in AC. The practical outcomes are speed and RAM improvements, enabling the compression of extensive sequences, specifically large collections of protein sequences. The disadvantage is the slightly higher code complexity of AC2.
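The eviction behavior of a cache-hash with a bounded number of collisions per bucket can be sketched as follows; this is a simplified illustration, whereas AC2's actual implementation is in C and also packs the counters into two bits per symbol:

```python
class CacheHash:
    """Hash table whose buckets keep only the most recent entries, up to
    max_collisions, so peak RAM stays bounded regardless of how many
    distinct contexts appear in the sequence."""
    def __init__(self, n_buckets, max_collisions):
        self.n_buckets = n_buckets
        self.max_collisions = max_collisions
        self.buckets = [[] for _ in range(n_buckets)]  # each holds (key, counts)

    def _bucket(self, context):
        return self.buckets[hash(context) % self.n_buckets]

    def get(self, context):
        """Return the counters for this context, or None if evicted/unseen."""
        for key, counts in self._bucket(context):
            if key == context:
                return counts
        return None

    def put(self, context, counts):
        """Insert or refresh an entry, evicting the oldest one on overflow."""
        bucket = self._bucket(context)
        for i, (key, _) in enumerate(bucket):
            if key == context:
                del bucket[i]                  # refresh: re-append as newest
                break
        bucket.append((context, counts))
        if len(bucket) > self.max_collisions:  # bound reached: drop the oldest
            bucket.pop(0)
```

Evicting the oldest colliding entry trades a small amount of statistical memory for a hard RAM ceiling, which is exactly what allows the deep-context models to scale to gigabyte-sized collections.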

Parameters and Optimization
In addition to the AC parameters, AC2 includes parameters to control the learning rate, the number of nodes in the hidden layer, and the cache's size per model. AC2 also adds a more powerful compression level.
All the internal and external parameters were determined empirically. These include the coefficients for the exponential moving averages, the window size for the moving averages, the input transformations, the learning rates, the hidden layer, and the number of models and their parameters. The internal parameters are fixed for all experiments, while the external parameters were adjusted for each sequence; the parameters used are available in the same repository as the source code.

Results
In this section, we evaluate the performance and accuracy of AC2 as a protein sequence compressor in two benchmarks, namely reference-free and reference-based compression. AC2 is available for free download (GPLv3 license) at https://github.com/cobilab/ac2, accessed on 23 April 2021.
• DS2: a comprehensive dataset (proposed in [9]), containing the following sequences:

For benchmarking AC2 as a reference-based compressor, we used the complete proteomes of four primates (human, gorilla, chimpanzee, and orangutan) with pairwise chromosomal compression. For each chromosomal pair, the following compression was performed:
• Chimpanzee (PT) using human (HS) as a reference;
• Gorilla (GG) using human (HS) as a reference;
• Orangutan (PA) using human (HS) as a reference.
Unless otherwise stated, the benchmarks were performed on an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz running Linux 5.4.0 with the scaling governor set to performance and 32 GB of RAM.

Reference-Free Compression Benchmark
For the comparison with AC2, we selected two specialized protein compressors: AC and NAF. These are the only available and working specialized compressors we could find; therefore, we added general-purpose compressors to make the comparison comprehensive. The general-purpose compressors added are Big Block BWT (BBB, based on the Burrows-Wheeler transform) [61], LZMA [62], and CMIX [63].
It should be noted that the bits per symbol (bps) are for the resulting archive file, which in the case of AC2 includes a header with the parameters describing the models and the network. For small sequences, this header is a significant portion of the final size. If we ignore the header size, which makes sense when analysis is the primary goal, AC2 achieves even more significant gains over AC. Even with this disadvantage, AC2 shows a better compression ratio than AC for all sequences tested.
In terms of memory usage, AC2 uses substantially less RAM than AC, and the cache size parameters limit the increase in RAM. In challenging cases, such as the compression of the UniProt sequence, the memory usage of AC2 is approximately 86% less than that of AC (111 GB to 16 GB). With the present configuration, AC2 uses less memory than the two closest higher compression ratio compressors, CMIX and AC. For the UniProt sequence, the RAM usage is 660 MB, 740 MB, 3 GB, 16 GB, 19 GB, and 111 GB for LZMA, BBB, NAF, AC2, CMIX, and AC, respectively. Parameters control the memory requirements of AC2; thus, the needed memory can be decreased at the cost of a small precision penalty, making AC2 useful on computers with fewer computational resources.
The computational time of AC2 is ≈3× slower than that of AC, but it is ≈7–19× faster than CMIX. Compared to the other compressors (BBB, LZMA, and NAF), AC2 is ≈15–49× slower. Compressing the UniProt sequence with AC had to be done on a different machine with more RAM but a slower processor; this is why that execution was slower. While AC2 is slower than AC, this gap should decrease soon due to the inclusion of specialized instructions and data types in general-purpose consumer CPUs [64,65]. The parameters used for this benchmark focus on the maximum compression ratio, but we can decrease the execution time while maintaining the best ratios. For example, for the PDBaa sequence, we can reduce the number of hidden nodes from 128 to 40, which gives us 1.725 bps at half the execution time (10 min). Figure 2 depicts the difference between the AC and AC2 compression performance, described as gain complexity profiles. Gain complexity profiles are numerical representations of the gain in terms of bits per symbol for each sequence element. Using the GTO toolkit [66], we applied a low-pass filter to the gain complexity profiles to smooth the peaks and valleys and better perceive the trends.

Table 1. The bits per symbol (bps) and time needed to represent an amino acid sequence for BBB, LZMA, CMIX, NAF, AC, and AC2. NAF uses the highest compression level (22) for all sequences. BBB uses the parameters 'cfm100q' for all sequences. LZMA uses the highest level (-9 -e) for all sequences. For DS2, AC and AC2 use the same levels as in [9]. For DS1, the models used by AC are '-tm 1:1:0.

We can see the effect of the heuristic switching between the new and the old mixer for small sequences, such as XV and FV. Flat regions, with a gain of zero bits per symbol, correspond to regions where the old mixer is used. It is also visible for smaller sequences that, even with the heuristic and the pre-training, the AC2 mixer sporadically produces worse results than the AC mixer; these appear in the plot as negative values. This situation is due to the lag associated with the exponential moving average that controls the mixer switching. The lag can be reduced, and for small sequences the compression does benefit; the reverse is true for large sequences. For extensive sequences, such as HS (Homo sapiens), the plot is always positive at this smoothness level. In this case, AC2 appears to compress consistently better.

Finally, all plots show small peaks of at most 0.4 bps. On the one hand, this is due to the smoothing function applied. On the other hand, it shows that there are no large regions (as a percentage of the total sequence) where AC2 is vastly superior to AC. Even so, there appear to be new regions of interest that could provide new insights into the sequences' nature.

Reference-Based Compression Benchmark
In this section, we compare AC with AC2 for the compression of proteins using a reference because, as far as we know, AC is the only (currently working) data compression tool for reference-based protein sequence compression. AC has some reference-based compression errors related to its inability to deal with different alphabets between the reference and target sequences. Therefore, we improved AC2 to output the AC compression estimates, using the AC mixer's probabilities to calculate the expected number of bits as −log2(p_sym). The results in Table 2 show that AC2 improves the compression ratio by 6–7% compared to AC. Chromosomes 5 and 17 of the gorilla show the least improvement and also the worst bits per symbol. This performance is due to a hypothetical rearrangement in the gorilla that diverges from the other three primates [67]. In practice, a similar part of one of the chromosomes is present in the other, decreasing the capability of using the reference sequences as an auxiliary input [68]. A way to improve these particular sequences' compression would be to use as references both human proteomic sequences from chromosomes 5 and 17.

The plots in Figure 3 show a similar trend to the plots from the reference-free compression. For the mitochondrion sequence, we can see the effects of the heuristic switching between the AC and neural network mixers. For the larger sequences, we can see more consistently positive values.

Application: SARS-CoV-2 Protein Sequence Similarity to Other Viral Proteins
As an example of identifying similar protein sequences in terms of quantity of information, we studied the most similar protein sequences, in the whole UniProt database, to the proteins of the human Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [69], respecting important Bioinformatics guidelines [70].
SARS-CoV-2 is a positive-sense single-strand RNA virus, with an origin traced to a food market in Wuhan, China, in December 2019, that can cause COVID-19 disease [71]. SARS-CoV-2 is transmitted by inhalation or contact with infected droplets, with an incubation period from 2 to 14 days [72]. According to the World Health Organization (WHO), SARS-CoV-2 has already caused more than 146 million infections and 3.1 million deaths, with the variation of the latter seemingly related to seasonality [73]. New developments of fast diagnostic methods are emerging, for example, a 10-min antibody assay [74], enabling much faster reaction and prediction of infection and vaccine responses.
Although several therapeutics have already been proposed [75,76], the emergence of multiple variants brings additional complexity to the challenge, both in diagnostics and therapeutics [76,77]. Much has been learned during the current pandemic; however, much progress is still required. One of the inconclusive themes is SARS-CoV-2 protein sequence similarity to other viral protein sequences. Despite several studies addressing this topic at both the genomic and proteomic levels [78,79], different interpretations have been provided regarding the animal host origins [80][81][82]. Perhaps the reason is related to the characteristics of the measures used, namely the use of normalized alignment scores that do not rigorously consider quantities of information, overestimation issues, and the respect of distance properties such as symmetry and idempotency [83]. Without respecting the theoretical foundations that characterize an information distance (or dissimilarity distance), a problematic question arises: where should one draw the threshold or splitting line? Defining thresholds in these cases can therefore lead to substantially contradictory conclusions, not because one of the measures is incorrect but because the starting points or assumptions are fundamentally different.
Accordingly, in this paper, we present the SARS-CoV-2 protein sequence similarity to other viral protein sequences relying on AC2, the data compressor benchmarked with the best-known compression ratios (shown in this paper above) and through the computation of the Normalized Compression Distance (NCD) [84].
For this purpose, we separated all protein sequences in the UniProt database and all proteins of SARS-CoV-2 into different classes. Then, we measured the (dis)similarity across each pair of elements of the classes using the NCD through the approximation of the conditional complexity [85] as

NCD(x, y) = max{C(x|y), C(y|x)} / max{C(x), C(y)}.

For approximating the complexities (C(x) and C(y)) and conditional complexities (C(x|y) and C(y|x)), we used AC2 with optimized parameters for each type of compression (the parameters used are available in the same repository as the source code). Figure 4 depicts the results with the lowest NCD and, hence, the most similar sequences according to a reference SARS-CoV-2. As presented in the architecture of Figure 4d,g, several protein sequences can be localized, namely ORF1ab (Open Reading Frame 1ab), spike (S), envelope (E), membrane (M), and nucleocapsid (N). ORF1ab includes ORF1a and ORF1b, which characterize a non-structural polyprotein involved in the transcription and replication of viral RNAs, containing the proteinases responsible for the protein's cleavages. ORF3 is an accessory protein specialized in changing the environment inside the infected cell through membrane rupture, increasing virus replication. The membrane (M) protein is a structural protein that forms part of the virus's outer coat, playing a crucial role in virus morphogenesis and assembly via its interactions with other viral proteins. The nucleocapsid (N) protein is a structural protein that packages the RNA into a helical ribonucleocapsid (RNP) and is essential during assembly through its interactions with the viral genome and the membrane protein (M). It also magnifies the efficiency of subgenomic viral RNA transcription and replication [86].
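Under the NCD with conditional complexities, a toy approximation can be built with any compressor standing in for AC2; here zlib is used purely for illustration, together with the standard approximation C(x|y) ≈ C(yx) − C(y):

```python
import zlib

def c(data: bytes) -> int:
    """Approximate C(.) by a compressed size in bits (zlib is a weak
    stand-in; the paper computes these quantities with AC2 itself)."""
    return 8 * len(zlib.compress(data, 9))

def c_cond(x: bytes, y: bytes) -> int:
    """Approximate the conditional complexity C(x|y) by C(y x) - C(y)."""
    return max(c(y + x) - c(y), 0)

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = max{C(x|y), C(y|x)} / max{C(x), C(y)}."""
    return max(c_cond(x, y), c_cond(y, x)) / max(c(x), c(y))
```

Identical or highly redundant pairs yield values near zero, while unrelated pairs approach one; with a compressor as weak as zlib the values are only indicative, which is precisely why a high-ratio compressor such as AC2 is needed for a reliable analysis.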
According to the remainder of Figure 4, the NCD for the flagged proteins is consistently lower for the pangolin coronavirus, followed by a ranked alternation between multiple bat and human coronaviruses. In humans, the highest similarities are with the MERS [87] and SARS [88] coronaviruses, naturally showing the evolution under the host. The higher similarities of the pangolin and bat coronaviruses to SARS-CoV-2 are in accordance with some studies of both origin species of SARS-CoV-2 [89], measured at the genomic [78] and proteomic [79] levels. Moreover, the results show that the pangolin coronaviruses are the most similar in terms of information (Kolmogorov complexity [90]). Furthermore, the last protein sequence (marked with an X in Figure 4), also known as ORF10, shows relevant similarity only to the pangolin coronavirus. Despite the consistency of the results provided at the proteomic level, the discovery of new proteomes with higher similarity (lower NCD) to SARS-CoV-2 may change the conclusions.

Conclusions
This paper describes AC2, a new protein sequence compressor that uses a neural network to mix experts with a stacked generalization approach and individual cache-hash memory models for the highest context orders. We show gains over the previous compressor (AC) between 2 and 9%, depending on the dataset characteristics. These gains come at the cost of ≈3× slower execution times. AC2 substantially improves memory usage compared to AC, with requirements about 7× lower. Compared to the previous best available state-of-the-art compressors, AC2 achieves an overall compression ratio improvement of approximately 2% and 6% in reference-free and reference-based modes, respectively. Nevertheless, we think that AC2 can still be improved in single-organism proteome compression. For example, to address this challenge, we can derive other experts that model the secondary information of the proteins, similar to the algorithm in [26]. Another crucial area of improvement concerns computational resources, as these may limit the efficiency of analysis. To improve the execution speed, we can offload computations to a GPU, with the neural network as the most likely candidate to benefit. Furthermore, different caching strategies applied directly to the models may reduce the memory requirements while bringing some improvements.
Additionally, we provided an application of AC2 for comparative proteomic analysis, namely measuring the similarity between each SARS-CoV-2 viral protein sequence and each viral protein sequence from the whole UniProt database. This straightforward alignment-free solution infers the most similar proteomic sequences using flexible, balanced, and consistent measures. Depending on the redundancy in the sequences, alignment-based measures may provide overestimated results, given the sequences' small size and ambiguous alignment choices. In contrast, our approach quantifies the similarity using information without overestimation (a property of using data compression through the NCD). Moreover, it uses multiple experts of different natures in an unsupervised learning approach. This characteristic means that the data compressor can use models of another nature, for example, energy, structural, or algorithmic models [91], to combine the predictions beyond simple vertical amino acid comparison. The results consistently show higher similarity to the pangolin coronavirus in the provided application, followed by the bat and other human coronaviruses. However, as with any other known comparative method, this approach has a drawback: discovering new proteomes with higher similarity to SARS-CoV-2 may change the conclusions.