Prediction of Protein–Protein Interaction with Pairwise Kernel Support Vector Machine

Protein–protein interactions (PPIs) play a key role in many cellular processes. Unfortunately, the experimental methods currently used to identify PPIs are both time-consuming and expensive. These obstacles could be overcome by developing computational approaches to predict PPIs. Here, we report two methods of amino acid feature extraction: (i) distance frequency with PCA reducing the dimension (DFPCA) and (ii) amino acid index distribution (AAID) representing the protein sequences. In order to obtain the most robust and reliable results for PPI prediction, a pairwise kernel function was combined with support vector machines (SVM) to avoid dependence on the concatenation order of the two feature vectors generated from the two proteins. The highest prediction accuracies of AAID and DFPCA were 94% and 93.96%, respectively, in the 10-fold cross-validation (10 CV) test, and the results of the pairwise radial basis kernel function are considerably improved over those based on the radial basis kernel function. Overall, the PPI prediction tool, termed PPI-PKSVM, which is freely available at http://159.226.118.31/PPI/index.html, promises to become useful in such areas as bio-analysis and drug development.


Introduction
Protein-protein interactions (PPIs) play an important role in such biological processes as host immune response, the regulation of enzymes, signal transduction and the mediation of cell adhesion. Understanding PPIs will bring more insight into disease etiology at the molecular level and potentially simplify the discovery of novel drug targets [1]. Information about protein-protein interactions has also been used to address many biologically important problems [2][3][4][5], such as the prediction of protein function [2], regulatory pathways [3], signal propagation during colorectal cancer progression [4], and the identification of colorectal cancer related genes [5]. Experimental methods of identifying PPIs can be roughly categorized into low- and high-throughput methods [6]. However, PPI data obtained from low-throughput methods cover only a small fraction of the complete PPI network, and high-throughput methods often produce a high frequency of false PPI information [7]. Moreover, experimental methods are expensive, time-consuming and labor-intensive. The development of reliable computational methods to facilitate the identification of PPIs could overcome these obstacles.
Thus far, a number of computational approaches have been developed for the large-scale prediction of PPIs based on protein sequence, structure and evolutionary relationships in complete genomes. These methods can be roughly categorized into those that are genomic-based [8,9], structure-based [10], and sequence-based [11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26]. Genomic- and structure-based methods cannot be implemented if prior information about the proteins is not available. Sequence-based methods are more universal, but they concatenate the two feature vectors of proteins P_a and P_b to represent the protein pair P_a-P_b, and the concatenation order of the two feature vectors will affect the prediction results. For example, if we use feature vectors x_a and x_b to represent proteins P_a and P_b, respectively, then the P_a-P_b protein pair can be represented either as x_ab = x_a ⊕ x_b or as x_ba = x_b ⊕ x_a, and the two representations generally lead to different predictions. Furthermore, PPIs have a symmetrical character; that is, the interaction of protein P_a with protein P_b equals the interaction of protein P_b with protein P_a. Under these circumstances, concatenating the two feature vectors of proteins P_a and P_b to represent the protein pair P_a-P_b and then applying a traditional kernel function cannot guarantee this symmetry.

The Results of DFPCA and AAID with K_II Pairwise Kernel Function SVM
In statistical prediction, the following three cross-validation methods are often used to examine a predictor's effectiveness in practical application: the independent dataset test, the K-fold crossover or subsampling test, and the jackknife test [28]. Of the three test methods, the jackknife test is deemed the least arbitrary and always yields a unique result for a given benchmark dataset, as demonstrated by Equations (28)-(30) in [29]. Accordingly, the jackknife test has been increasingly and widely used by investigators to examine the quality of various predictors (see, e.g., [30][31][32][33][34][35][36][37][38][39][40][41]). However, to reduce the computational time, we adopted the 10-fold cross-validation (10 CV) test in this study, as done by many investigators using SVM as the prediction engine.
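As a concrete illustration, the fold construction behind a K-fold test can be sketched in a few lines; this is a generic sketch of 10-fold splitting, not the authors' code:

```python
# Minimal sketch of K-fold (here 10-fold) cross-validation splitting: the
# samples are divided into k roughly equal folds, and each fold serves as the
# held-out test set exactly once while the rest are used for training.
def kfold_indices(n_samples, k=10):
    """Yield (train_idx, test_idx) index lists for k roughly equal folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train_idx, test_idx
        start += size

# Every sample appears in exactly one test fold:
tested = [i for _, test in kfold_indices(100, 10) for i in test]
assert sorted(tested) == list(range(100))
```

In the 10 CV protocol, a classifier is trained on each train split and scored on the corresponding test split, and the ten test accuracies are averaged.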
The four feature vector sets Hf, Vf, Pf and Zf extracted with DFPCA, and the five feature vector sets LEWP710101, QIAN880138, NADH010104, NAGK730103 and AURR980116 extracted with AAID, were employed as input feature vectors for the K_II pairwise radial basis kernel function (PRBF) SVM. The results of DFPCA and AAID are summarized in Table 1, which shows that the performances of the two feature extraction approaches, i.e., amino acid distance frequency with PCA (DFPCA) and amino acid index distribution (AAID), are nearly equal when using the K_II pairwise kernel SVM; the total prediction accuracies are 93.69%~94%. As previously noted, we used just five amino acid indices, namely LEWP710101, QIAN880138, NADH010104, NAGK730103 and AURR980116, to produce the feature vector sets. When we tested AAID with the remaining 480 amino acid indices from AAindex, we found that the choice of amino acid index does affect the predictive results: the total prediction accuracies across those indices ranged from 79.4% to 94%, and our original five indices performed best. To account for the better performance of our five indices, we point to the physicochemical and biochemical properties of amino acids. By single-linkage clustering, one of the agglomerative hierarchical clustering methods, Tomii and Kanehisa [42] divided the minimum spanning tree of these amino acid indices into six regions: α and turn propensities, β propensity, amino acid composition, hydrophobicity, physicochemical properties, and other properties. The indices LEWP710101, QIAN880138, NAGK730103 and AURR980116 fall into the region of α and turn propensities, while NADH010104 falls into the hydrophobicity region, indicating that the properties of α and turn propensities, and of hydrophobicity, carry more distinguishing information for predicting PPIs.

The Comparison of Pairwise Kernel Function with Traditional Kernel Function
In order to evaluate the performance of the pairwise kernel function, we compared the results of the pairwise radial basis kernel function (PRBF) and the radial basis function kernel (RBF) with the same feature vector sets. For RBF, we concatenated the two feature vectors of proteins P_a and P_b to represent the protein pair P_a-P_b; that is, the feature vector x_ab = x_a ⊕ x_b was used as the input feature vector of RBF. The results of RBF and PRBF with DFPCA in the 10 CV test are listed in Table 2, which shows that the performance of PRBF is superior to that of RBF for predicting PPI: the total prediction accuracies of PRBF are 3.9%~4.48% higher than those of RBF.
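The order-dependence that motivates the pairwise kernel is easy to demonstrate numerically; the vectors below are toy values, not real DFPCA features:

```python
# Illustration of the concatenation-order problem: the RBF kernel value between
# a concatenated pair and a reference pair changes when the order of the pair's
# two feature vectors is swapped, i.e. K(x_ab, z) != K(x_ba, z) in general.
import math

def rbf(u, v, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

xa, xb = [1.0, 0.0], [0.0, 2.0]   # toy feature vectors of proteins Pa and Pb
z = [1.0, 0.0, 0.0, 1.0]          # toy feature vector of some training pair
x_ab = xa + xb                    # concatenation representing Pa-Pb
x_ba = xb + xa                    # concatenation representing Pb-Pa
print(rbf(x_ab, z), rbf(x_ba, z))  # two different similarities for one pair
```

A pairwise kernel removes this ambiguity by construction, since it compares pairs rather than concatenations.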

The Comparison of DF and DFPCA Feature Extraction Approaches
For the feature extraction approach based on the distance frequency of amino acids grouped by their physicochemical properties, we compared the results of DF and DFPCA with PRBF SVM to test the validity of adopting PCA. The reduced feature matrix was set to retain 99.9% of the information in the original feature matrix. The results of DF and DFPCA with PRBF SVM in the 10 CV test are listed in Table 3, from which we can see that the performance of DFPCA is superior to that of DF: the total prediction accuracies and MCC (see Equation (16) below) of DFPCA are 15.79%~24.43% and 0.2705~0.4067 higher than those of DF, respectively. Although the sensitivities of DF are slightly higher (by 1.43%~1.59%) than those of DFPCA for the Hf, Vf, Pf and Zf feature sets, its positive predictive values are much lower than those of DFPCA (by 21%~29%), which means that the DFPCA approach can greatly reduce false positives. These results show that the performance of DFPCA is superior to that of DF for predicting PPI. It should be noted that feature vectors generated with either DF or DFPCA contain statistical information about the amino acids in protein sequences, as well as information about amino acid position and physicochemical properties.
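The 99.9%-variance criterion can be sketched with a plain SVD-based PCA; this is an illustrative reimplementation on random data, not the authors' code:

```python
# Sketch of the PCA dimension-reduction step used in DFPCA: keep the smallest
# number of leading principal components whose cumulative variance reaches
# 99.9% of the total, and project the centered feature matrix onto them.
import numpy as np

def pca_reduce(X, retain=0.999):
    Xc = X - X.mean(axis=0)                        # center the feature matrix
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S ** 2                                   # variance carried by each component
    ratio = np.cumsum(var) / var.sum()
    k = int(np.searchsorted(ratio, retain) + 1)    # components needed for 99.9%
    return Xc @ Vt[:k].T                           # lower-dimensional features

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))                     # 50 samples, 200 sparse-ish dims
X_red = pca_reduce(X)
print(X_red.shape)                                 # far fewer than 200 columns
```

Because the number of samples bounds the rank, the 200-dimensional toy features collapse to at most 50 components while retaining 99.9% of the variance.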

The Performance of the Predictive System Influenced by Randomly Sampling the Noninteracting Protein Subchain Pairs
To investigate the influence of randomly sampling the noninteracting protein subchain pairs, we randomly sampled 2510 noninteracting protein subchain pairs five times to construct five negative sets, and we used the DFPCA approach with the hydrophobicity property to predict PPI in the 10 CV test. The results, shown in Table 4, indicate that the random sampling of noninteracting protein subchain pairs used to construct the negative sets has little influence on the performance of PPI-PKSVM.

Comparison of Different Prediction Methods
To demonstrate the prediction performance of our method, we compared it with other methods [25] on a nonredundant dataset constructed by Pan and Shen [25], in which no protein pair has sequence identity higher than 25%. The number of positive links, i.e., interacting protein pairs, is 3899, composed of 2502 proteins, and the number of negative links, i.e., noninteracting protein pairs, is 4262, composed of 661 proteins. Among the prediction results of the different methods shown in Table 5, the performance of PPI-PKSVM stands out as the best. Compared with Shen's LDA-RF, the accuracies (see Equation (15) below) of LEWP710101/QIAN880138 and Hf-DFPCA are higher by 1.9% and 2%, respectively, and the MCC values are higher by 0.038 and 0.039, respectively. These results indicate that our method is a very promising computational strategy for predicting protein-protein interactions from protein sequences.

Dataset
To construct the PPI dataset, we first obtained the subchain pair names of PPIs from the PRISM (Protein Interactions by Structural Matching) server (http://prism.ccbb.ku.edu.tr/prism/), which was used to explore protein interfaces, and we downloaded the corresponding sequences of these protein subchain pairs from the Protein Data Bank (PDB) database (http://www.rcsb.org/pdb/). According to PRISM [43], a subchain pair is defined as an interacting subchain pair if the two protein subchains share more than 10 interface residues; otherwise, the subchain pair is defined as a noninteracting subchain pair. For example, suppose a protein complex has A, B, C and D subchains. If the AB, AC and BD subchain pairs each have more than 10 interface residues, while the AD, BC and CD subchain pairs each have 10 or fewer, then the AB, AC and BD subchain pairs are treated as interacting subchain pairs, while the AD, BC and CD subchain pairs are treated as noninteracting subchain pairs. All interacting protein subchain pairs were used in preparing the positive dataset, and all noninteracting subchain pairs were used in preparing the negative dataset. To reduce redundancy and homology bias for methodology development, all protein subchain pairs were screened according to the following procedures [15]: (i) protein subchain pairs containing a protein subchain with fewer than 50 amino acids were removed; (ii) for subchain pairs having ≥40% sequence identity, only one subchain pair was kept. The ≥40% criterion may be understood as follows. Suppose protein subchain pair A is formed by protein subchains A1 and A2, and protein subchain pair B by protein subchains B1 and B2. If the sequence identity between A1 and B1 and between A2 and B2 is ≥40%, or the sequence identity between A1 and B2 and between A2 and B1 is ≥40%, then the two protein subchain pairs are defined as having ≥40% sequence identity. In our method, we retained only those subchain pairs having <40% sequence identity. After these screening procedures, the resultant positive set comprised 2510 interacting protein subchain pairs, while the resultant negative set contained many more noninteracting protein subchain pairs. To avoid unbalanced data between the positive and negative sets, we randomly sampled 2510 noninteracting protein subchain pairs to construct the negative set. Finally, a PPI dataset consisting of 2510 interacting and 2510 noninteracting protein subchain pairs was constructed.
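The two screening rules can be sketched as a filter over candidate pairs; `seq_identity` below is a hypothetical stand-in for a real alignment-based identity computation (e.g., from BLAST output), not part of the original method:

```python
# Sketch of the dataset screening: (i) drop pairs containing a subchain shorter
# than 50 residues; (ii) keep one representative among pairs whose subchains
# match an already-kept pair at >=40% identity in either pairing order.
def screen_pairs(pairs, seq_identity, min_len=50, id_cut=0.40):
    kept = []
    for a1, a2 in pairs:
        if len(a1) < min_len or len(a2) < min_len:
            continue                                # rule (i): short subchain
        redundant = any(
            (seq_identity(a1, b1) >= id_cut and seq_identity(a2, b2) >= id_cut) or
            (seq_identity(a1, b2) >= id_cut and seq_identity(a2, b1) >= id_cut)
            for b1, b2 in kept                      # rule (ii): >=40% identity
        )
        if not redundant:
            kept.append((a1, a2))
    return kept

ident = lambda s, t: 1.0 if s == t else 0.0         # toy identity function
pairs = [("A" * 60, "C" * 60), ("A" * 60, "C" * 60), ("A" * 10, "C" * 60)]
print(len(screen_pairs(pairs, ident)))              # duplicate and short pair removed
```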

Distance Frequency of Amino Acids Grouped with Their Physicochemical Properties
The frequency of the distance between two successive amino acids, or distance frequency, was used by Matsuda et al. [44] to predict subcellular location, and can be described as follows. For a protein sequence P, the distance set d_A between two successive occurrences of a letter (e.g., A) in P can be represented as:

d_A = {d_1, d_2, ..., d_i, ..., d_{n_A - 1}}, i = 1, ..., n_A - 1

where n_A is the number of letter As appearing in protein sequence P, d_i is the distance from the ith letter A to the (i + 1)th letter A, and d_i is calculated in a left-to-right fashion. The distance frequency vector for letter A can then be defined as:

V_A = [N_1, N_2, ..., N_j, ...]
where N_j represents the number of times that the jth distance appears in the d_A set. For example, consider the protein sequence AACDAMMADA. The letter A occurs at positions 1, 2, 5, 8 and 10, so d_A = {1, 3, 3, 2}; similarly, d_C is empty (C occurs only once), d_D = {5} and d_M = {1}. The corresponding distance frequency vectors are V_A = [1, 1, 2, 0, 0], V_C = [0, 0, 0, 0, 0], V_D = [0, 0, 0, 0, 1] and V_M = [1, 0, 0, 0, 0]. The distance frequency vectors of the other 16 basic amino acids are the zero vector, V = [0, 0, 0, 0, 0]. Thus, we can concatenate these vectors into a feature vector x that encodes the protein sequence P. In this work, we used the concept of distance frequency [44] and borrowed Dubchak's idea of representing the amino acid sequence with four physicochemical properties [45] to encode the protein subchain sequence. First, according to the amino acid values of such physicochemical properties as hydrophobicity [46], normalized van der Waals volume [47], polarity [48] and polarizability [49], the 20 natural amino acids can be divided into three groups [45], as listed in Table 6. For hydrophobicity, normalized van der Waals volume, polarity and polarizability, the amino acids in Groups 1, 2 and 3 are denoted H_1, H_2, H_3; V_1, V_2, V_3; P_1, P_2, P_3; and Z_1, Z_2, Z_3, respectively. Second, each protein subchain sequence was translated into the appropriate three-symbol sequence, depending on the particular physicochemical property, be it H_1-3, V_1-3, P_1-3 or Z_1-3. For example, the protein sequence MKEKEFQSKP would be translated, using the hydrophobicity symbols, into the corresponding sequence over H_1, H_2 and H_3, and likewise for V_1-3, P_1-3 and Z_1-3. Third, the distance frequency of every symbol in the translated sequence was computed; in the above example, the H_1, H_2 and H_3 distance frequencies would each be computed for the translated sequence. Finally, every protein subchain sequence was encoded by the feature vector formed by concatenating these distance frequency vectors.

Table 6. Amino acid groups classified according to their physicochemical values.

(Table 6 columns: Physicochemical property | Group 1 | Group 2 | Group 3; rows: hydrophobicity, normalized van der Waals volume, polarity, polarizability. Group memberships not reproduced here.)
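The distance-frequency encoding defined above can be sketched as follows; the fixed dimension of 5 matches the small AACDAMMADA example and would be chosen larger in practice:

```python
# Distance frequency of one letter in a sequence: collect the positions of the
# letter left to right, take gaps between successive occurrences, and histogram
# those gaps into a fixed-length vector (entry j counts occurrences of gap j).
def distance_frequency(seq, letter, dim=5):
    positions = [i + 1 for i, c in enumerate(seq) if c == letter]
    dists = [b - a for a, b in zip(positions, positions[1:])]
    vec = [0] * dim
    for d in dists:
        if 1 <= d <= dim:
            vec[d - 1] += 1                 # N_j: count of distance j
    return vec

seq = "AACDAMMADA"
print(distance_frequency(seq, "A"))         # A at 1, 2, 5, 8, 10 -> gaps 1, 3, 3, 2
```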
Conveniently, the feature sets based on hydrophobicity, normalized van der Waals volume, polarity and polarizability are written as Hf, Vf, Pf and Zf, respectively. In general, the dimensions of two feature vectors generated separately from two protein subchains are unequal. To solve this issue, we enlarge the feature vector of one protein subchain so that its dimension equals that of the other subchain. For example, consider the protein subchain pair P_a-P_b with subchain P_a amino acid sequence MKEKEFQSKP and subchain P_b amino acid sequence QNSLALHKVIMVGSG. Adopting the hydrophobicity property, the P_a and P_b sequences are each translated into the corresponding H_1-3 symbol sequences, from which the distance sets and distance frequency vectors of subchains P_a and P_b are computed as described above, with the shorter vector enlarged to match the longer one. Hereinafter we will use "DF" to denote this distance frequency method with amino acids grouped by their physicochemical properties.
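A minimal sketch of the grouping-and-padding step; the GROUPS mapping here is illustrative only (see Table 6 for the actual assignments), and zero-padding is one plausible way to equalize dimensions, not necessarily the paper's exact scheme:

```python
# Sketch of the DF pipeline for one property: translate a sequence into three
# group symbols (1/2/3 standing for H1/H2/H3), then zero-pad the shorter of the
# two pair members' feature vectors so both have equal dimension.
# NOTE: GROUPS is a made-up placeholder grouping, NOT the Table 6 grouping.
GROUPS = {c: 1 for c in "RKEDQN"} | {c: 2 for c in "GASTPHY"} | {c: 3 for c in "CLVIMFW"}

def translate(seq):
    return [GROUPS[c] for c in seq]         # symbol sequence over {1, 2, 3}

def pad_to_match(u, v):
    n = max(len(u), len(v))
    return u + [0] * (n - len(u)), v + [0] * (n - len(v))

u, v = pad_to_match([1, 1, 2], [0, 2, 0, 1, 3])
print(u, v)                                 # both vectors are now 5-dimensional
```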
By using DF to represent the protein subchain pair, the feature vector becomes sparse, and its dimension grows large when the subchain sequence is long. To further extract the features, principal component analysis (PCA) was therefore used to reduce the dimension; amino acid distance frequency combined with PCA dimension reduction is hereafter termed DFPCA.

Amino Acid Index Distribution (AAID)

An amino acid index is a set of 20 numerical values representing one of the various physicochemical and biochemical properties of amino acids. These indices can be downloaded from the AAindex database (http://www.genome.jp/aaindex/) and accessed through the DBGET/LinkDB system by entering an amino acid index name (e.g., LEWP710101). For a given protein sequence P of length L, we replace each residue in the primary sequence by its amino acid physicochemical value, which results in a numerical sequence (I_1, I_2, ..., I_L). We can then define the following feature w_i of amino acid α_i, where α_i runs over the 20 natural amino acids (A, C, D, E, F, G, H, I, K, L, M, N, P, Q, R, S, T, V, W and Y), to represent the protein sequence:

w_i = f_i · I_i

where f_i is the frequency with which amino acid α_i occurs in protein sequence P, I_i is the physicochemical value of amino acid α_i, and the symbol · indicates the simple product; f_i and I_i are mutually independent. Obviously, w_i includes the physicochemical information and statistical information of amino acid α_i, but it loses the sequence-order information. Therefore, to let the feature vectors contain more sequence-order information, we introduced the 2-order center distance d_i, which considers the positions of amino acid α_i:

d_i = (w_i / N_{α_i}) · Σ_{j=1}^{N_{α_i}} (k_j − k̄_i)²

where N_{α_i} is the total number of occurrences of amino acid α_i in protein sequence P, k_j is the position of the jth occurrence of α_i, and k̄_i is the mean position of α_i. Feature d_i now contains the physicochemical, statistical and sequence-order information of amino acid α_i, but it still does not distinguish protein pairs in some cases. For example, consider two protein pairs P_a-P_b and P_c-P_d whose sequences are: P_a: MPPRNKPNRR; P_b: MPNPRNNKPPGRKTR; P_c: MPRRNPPNRK; P_d: MGTRPPRNNKPNPRK. Here P_a and P_c, as well as P_b and P_d, have the same w_i and d_i, so a feature vector built from w_i and d_i alone cannot distinguish between the P_a-P_b and P_c-P_d protein pairs. To solve this problem, the 3-order center distance t_i of amino acid α_i was introduced:

t_i = (w_i / N_{α_i}) · Σ_{j=1}^{N_{α_i}} (k_j − k̄_i)³

Finally, we represent protein sequence P by serializing the above three features into a combined feature vector:

x = [w_1, ..., w_20, d_1, ..., d_20, t_1, ..., t_20]

The protein pair P_a-P_b can then be represented by x_ab = x_a ⊕ x_b or x_ba = x_b ⊕ x_a. Generally, vector x_ab is not equal to vector x_ba; as such, if a query protein pair P_a-P_b is represented by x_ab and x_ba respectively, the prediction results may differ. In this paper, we choose the pairwise kernel function to resolve this dilemma.
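A sketch of the AAID features, reading d_i and t_i as w_i-scaled second and third central moments of each amino acid's occurrence positions; the paper's exact normalization may differ, and `toy_index` is a placeholder for a real AAindex entry such as LEWP710101:

```python
# AAID sketch: for each of the 20 amino acids compute w_i = f_i * I_i, plus
# 2nd- and 3rd-order central moments of its positions (scaled by w_i) to add
# sequence-order information, and serialize as [w_1..w_20, d_1..d_20, t_1..t_20].
AA = "ACDEFGHIKLMNPQRSTVWY"

def aaid_features(seq, index):                    # index: dict aa -> physicochemical value
    ws, ds, ts = [], [], []
    L = len(seq)
    for aa in AA:
        pos = [j + 1 for j, c in enumerate(seq) if c == aa]
        n = len(pos)
        w = (n / L) * index.get(aa, 0.0)          # w_i = f_i * I_i
        if n:
            mean = sum(pos) / n
            d = w * sum((k - mean) ** 2 for k in pos) / n   # 2-order center distance
            t = w * sum((k - mean) ** 3 for k in pos) / n   # 3-order center distance
        else:
            d = t = 0.0
        ws.append(w); ds.append(d); ts.append(t)
    return ws + ds + ts                           # 60-dimensional feature vector

toy_index = {aa: 1.0 for aa in AA}                # placeholder index values
# P_a and P_c from the example share every w_i and d_i but differ in t_i:
fa = aaid_features("MPPRNKPNRR", toy_index)
fc = aaid_features("MPRRNPPNRK", toy_index)
```

Under this reading, the worked example behaves as the text claims: the second moments of P_a and P_c coincide while the third moments break the tie.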

Pairwise Kernel Function
Ben-Hur and Noble [13] first introduced a tensor product pairwise kernel function K_I to measure the similarity between two protein pairs. The comparison between a pair (x_1, x_2) and another pair (x_3, x_4) under K_I is done by comparing x_1 with x_3 and x_2 with x_4, on the one hand, and x_1 with x_4 and x_2 with x_3, on the other hand:

K_I((x_1, x_2), (x_3, x_4)) = K(x_1, x_3)·K(x_2, x_4) + K(x_1, x_4)·K(x_2, x_3)

where K is a base kernel on individual proteins. However, the K_I kernel does not consider differences between the elements of the compared pairs in the feature space; therefore, Vert [50] proposed the following metric learning pairwise kernel K_II:

K_II((x_1, x_2), (x_3, x_4)) = (K(x_1, x_3) − K(x_1, x_4) − K(x_2, x_3) + K(x_2, x_4))²

In particular, two protein pairs might be very similar under the K_II kernel even if the patterns of the first protein pair are very different from those of the second, whereas the K_I kernel could assign a large dissimilarity to the same two pairs. It is easy to prove that the K_II kernel satisfies both Mercer's condition and the pairwise kernel function condition. In this paper, we use the K_II kernel function to predict PPI.
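Both pairwise kernels can be written directly on top of a base RBF kernel; as expected, swapping the order within a pair leaves either kernel value unchanged, which is exactly the symmetry the concatenation-based representation lacks:

```python
# The tensor-product pairwise kernel K_I (Ben-Hur and Noble) and the metric
# learning pairwise kernel K_II (Vert), built from a base RBF kernel K.
import math

def K(u, v, gamma=0.5):                            # base RBF kernel on proteins
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def K_I(p, q):
    (x1, x2), (x3, x4) = p, q
    return K(x1, x3) * K(x2, x4) + K(x1, x4) * K(x2, x3)

def K_II(p, q):
    (x1, x2), (x3, x4) = p, q
    return (K(x1, x3) - K(x1, x4) - K(x2, x3) + K(x2, x4)) ** 2

# Both kernels are invariant to the order of a pair's members:
pa, pb = [1.0, 0.0], [0.0, 1.0]
pc, pd = [0.5, 0.5], [1.0, 1.0]
assert abs(K_I((pa, pb), (pc, pd)) - K_I((pb, pa), (pc, pd))) < 1e-12
assert abs(K_II((pa, pb), (pc, pd)) - K_II((pb, pa), (pc, pd))) < 1e-12
```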
The prediction quality was evaluated with the standard measures of accuracy (Equation (15)), sensitivity, positive predictive value, and the Matthews correlation coefficient (Equation (16)):

ACC = (TP + TN) / (TP + TN + FP + FN)
SN = TP / (TP + FN)
PPV = TP / (TP + FP)
MCC = (TP·TN − FP·FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))

where TP and TN are the numbers of correctly predicted subchain pairs of interacting proteins and noninteracting proteins, respectively, and FP and FN are the numbers of incorrectly predicted subchain pairs of noninteracting proteins and interacting proteins, respectively.
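These measures compute directly from the confusion-matrix counts; the sketch below uses an arbitrary balanced example:

```python
# Standard confusion-matrix measures used in the result tables: accuracy,
# sensitivity, positive predictive value, and Matthews correlation coefficient.
import math

def measures(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)
    ppv = tp / (tp + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, sens, ppv, mcc

print(measures(90, 90, 10, 10))   # balanced toy counts
```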

Conclusions
In this work, we introduced two feature extraction approaches to represent the protein sequence. One is amino acid distance frequency with PCA dimension reduction, termed DFPCA; the other is amino acid index distribution based on the physicochemical values of amino acids, termed AAID. A pairwise kernel function SVM was employed as the classifier to predict PPIs. From the results, we can conclude that (i) the performance of DFPCA is better than that of DF; (ii) the prediction power of PRBF is superior to that of RBF, suggesting that designing a rational pairwise kernel function is important for predicting PPIs; and (iii) DFPCA and AAID with a pairwise kernel function SVM are effective and promising approaches for predicting PPIs and may complement existing methods. Since user-friendly and publicly accessible web servers represent the future direction in the development of predictors, we have provided a web server for PPI-PKSVM at http://159.226.118.31/PPI/index.html. In its present version, PPI-PKSVM can be used to evaluate one protein pair at a time; we will soon develop a newer online version able to predict large numbers of PPIs.

Table 1 .
Results of DFPCA and AAID with PRBF SVM in 10 CV test.

Table 2 .
Results of RBF and PRBF with DFPCA in the 10 CV test.

Table 3 .
Results of DF and DFPCA with PRBF SVM in the 10 CV test.

Table 4 .
Effect of random sampling of the noninteracting protein subchain pairs on the performance of PPI-PKSVM with DFPCA and PRBF SVM in the 10 CV test.