Article

Prediction of Protein–Protein Interaction with Pairwise Kernel Support Vector Machine

1 College of Automation, Northwestern Polytechnical University, Xi'an 710072, China
2 Key Laboratory of Information Fusion Technology, Ministry of Education, Xi'an 710072, China
* Author to whom correspondence should be addressed.
Int. J. Mol. Sci. 2014, 15(2), 3220-3233; https://doi.org/10.3390/ijms15023220
Submission received: 1 January 2014 / Revised: 27 January 2014 / Accepted: 29 January 2014 / Published: 21 February 2014
(This article belongs to the Special Issue Molecular Science for Drug Development and Biomedicine)

Abstract: Protein–protein interactions (PPIs) play a key role in many cellular processes. Unfortunately, the experimental methods currently used to identify PPIs are both time-consuming and expensive. These obstacles could be overcome by developing computational approaches to predict PPIs. Here, we report two methods of amino acid feature extraction: (i) distance frequency with PCA reducing the dimension (DFPCA) and (ii) amino acid index distribution (AAID) representing the protein sequences. In order to obtain the most robust and reliable results for PPI prediction, a pairwise kernel function and support vector machines (SVM) were employed to avoid dependence on the concatenation order of the two feature vectors generated from two proteins. The highest prediction accuracies of AAID and DFPCA were 94% and 93.96%, respectively, in the 10 CV test, and the results of the pairwise radial basis kernel function are considerably improved over those based on the radial basis kernel function. Overall, the PPI prediction tool, termed PPI-PKSVM, which is freely available at http://159.226.118.31/PPI/index.html, promises to become useful in such areas as bio-analysis and drug development.

1. Introduction

Protein–protein interactions (PPIs) play an important role in such biological processes as host immune response, the regulation of enzymes, signal transduction and the mediation of cell adhesion. Understanding PPIs will bring more insight into disease etiology at the molecular level and potentially simplify the discovery of novel drug targets [1]. Information about protein–protein interactions has also been used to address many biologically important problems [2–5], such as the prediction of protein function [2], regulatory pathways [3], signal propagation during colorectal cancer progression [4], and the identification of colorectal cancer related genes [5]. Experimental methods of identifying PPIs can be roughly categorized into low- and high-throughput methods [6]. However, PPI data obtained from low-throughput methods cover only a small fraction of the complete PPI network, and high-throughput methods often produce a high frequency of false PPI information [7]. Moreover, experimental methods are expensive, time-consuming and labor-intensive. The development of reliable computational methods to facilitate the identification of PPIs could overcome these obstacles.
Thus far, a number of computational approaches have been developed for the large-scale prediction of PPIs based on protein sequence, structure and evolutionary relationships in complete genomes. These methods can be roughly categorized into those that are genomic-based [8,9], structure-based [10], and sequence-based [11–26]. Genomic- and structure-based methods cannot be implemented if prior information about the proteins is not available. Sequence-based methods are more universal, but they concatenate the two feature vectors of proteins Pa and Pb to represent the protein pair Pa–Pb, and the concatenation order of the two feature vectors affects the prediction results. For example, if we use feature vectors xa and xb to represent proteins Pa and Pb, respectively, then the Pa–Pb protein pair can be expressed as xab = [xa, xb] or xba = [xb, xa]. In general, however, [xa, xb] is not equal to [xb, xa]. Furthermore, PPIs have a symmetrical character; that is, the interaction of protein Pa with protein Pb equals the interaction of protein Pb with protein Pa. Under these circumstances, concatenating the two feature vectors of proteins Pa and Pb to represent the protein pair Pa–Pb and then using a traditional kernel k(x1, x2) to predict PPIs is not workable.
Therefore, in this paper, we introduce two feature extraction approaches, amino acid distance frequency with PCA reducing the dimension (DFPCA) and amino acid index distribution (AAID), to represent the protein sequences, followed by the use of a pairwise kernel function and SVM to predict PPIs.

2. Results and Discussion

LIBSVM [27], downloaded from http://www.csie.ntu.edu.tw/~cjlin, is a library for Support Vector Machines (SVMs) and was used to design the classifier in this paper. The kernel program of the software was modified to implement the pairwise kernel functions, which were built from the radial basis function (RBF) kernel K(x1, x2) in all experiments.
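As a concrete illustration (not the authors' exact code), the same setup can be reproduced without patching the LIBSVM sources by precomputing the pairwise kernel matrix and feeding it to an SVM that accepts precomputed kernels. The sketch below assumes scikit-learn's SVC as a stand-in for LIBSVM, uses the KII pairwise RBF kernel defined later in Section 3.4, and runs a 10-fold cross-validation on toy data; the gamma, C and data values are placeholders.

```python
# Sketch only: SVC with a precomputed Gram matrix stands in for the modified LIBSVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold

def rbf(x, y, gamma=0.05):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def k2(p, q, gamma=0.05):
    # K_II pairwise RBF kernel (see Section 3.4); p and q are (protein_a, protein_b) pairs
    (x1, x2), (x3, x4) = p, q
    return (rbf(x1, x3, gamma) + rbf(x2, x4, gamma)
            - rbf(x1, x4, gamma) - rbf(x2, x3, gamma)) ** 2

def gram(pairs_row, pairs_col, gamma=0.05):
    return np.array([[k2(p, q, gamma) for q in pairs_col] for p in pairs_row])

def ten_fold_accuracy(pairs, labels, C=1.0, gamma=0.05):
    pairs, labels, accs = np.asarray(pairs), np.asarray(labels), []
    for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(pairs, labels):
        clf = SVC(kernel="precomputed", C=C).fit(gram(pairs[tr], pairs[tr], gamma), labels[tr])
        accs.append((clf.predict(gram(pairs[te], pairs[tr], gamma)) == labels[te]).mean())
    return float(np.mean(accs))

# toy data: 60 protein pairs, each protein encoded by a 30-dimensional feature vector
rng = np.random.default_rng(0)
pairs = rng.random((60, 2, 30))
labels = np.tile([0, 1], 30)
print(ten_fold_accuracy(pairs, labels))
```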

2.1. The Results of DFPCA and AAID with KII Pairwise Kernel Function SVM

In statistical prediction, the following three cross-validation methods are often used to examine a predictor for its effectiveness in practical application: the independent dataset test, the K-fold crossover or subsampling test, and the jackknife test [28]. Of the three test methods, the jackknife test is deemed the least arbitrary because it always yields a unique result for a given benchmark dataset, as demonstrated by Equations (28)–(30) in [29]. Accordingly, the jackknife test has been increasingly and widely used by investigators to examine the quality of various predictors (see, e.g., [30–41]). However, to reduce the computational time, we adopted the 10-fold cross-validation (10 CV) test in this study, as done by many investigators using SVM as the prediction engine.
The four feature vector sets, Hf, Vf, Pf, and Zf, extracted with DFPCA and the five feature vector sets, LEWP710101, QIAN880138, NADH010104, NAGK730103 and AURR980116, extracted with AAID were employed as the input feature vectors for KII pairwise radial basis kernel function (PRBF) SVM. The results of DFPCA and AAID are summarized in Table 1.
From Table 1, we can see that the performances of the two feature extraction approaches, i.e., amino acid distance frequency with PCA (DFPCA) and amino acid index distribution (AAID), are nearly equal when using the KII pairwise kernel SVM. The total prediction accuracies are 93.69%~94%. As previously noted, we used just five amino acid indices, namely LEWP710101, QIAN880138, NADH010104, NAGK730103 and AURR980116, to produce the feature vector sets. When we tested the performance of AAID with the remaining 480 amino acid indices from AAindex, we found that the choice of amino acid index does affect the predictive results: the total prediction accuracies over those indices were 79.4%~94%. In other words, AAID with our original five indices performed at the upper end of the results obtained over AAindex. To account for the better performance of these five indices, we point to the physicochemical and biochemical properties of amino acids. By single-linkage clustering, one of the agglomerative hierarchical clustering methods, Tomii and Kanehisa [42] divided the minimum spanning tree of these amino acid indices into six regions: α and turn propensities, β propensity, amino acid composition, hydrophobicity, physicochemical properties, and other properties. The indices LEWP710101, QIAN880138, NAGK730103 and AURR980116 fall into the region of α and turn propensities, while NADH010104 falls into the hydrophobicity region, indicating that the α and turn propensities and hydrophobicity properties carry more discriminative information for predicting PPIs.

2.2. The Comparison of Pairwise Kernel Function with Traditional Kernel Function

In order to evaluate the performance of the pairwise kernel function, we compared the results of the pairwise radial basis kernel function (PRBF) and the radial basis function kernel (RBF) with the same feature vector sets. For RBF, we concatenated the two feature vectors of proteins Pa and Pb to represent the protein pair Pa–Pb; that is, the feature vector xab = [xa, xb] was used as the input feature vector of RBF. The results of RBF and PRBF with DFPCA in the 10 CV test are listed in Table 2.
Table 2 shows that the performance of PRBF is superior to that of RBF for predicting PPIs. The total prediction accuracies of PRBF are 3.90%~4.48% higher than those of RBF.

2.3. The Comparison of DF and DFPCA Feature Extraction Approaches

For the feature extraction approach of distance frequency of amino acids grouped by their physicochemical properties, we compared the results of DF and DFPCA with PRBF SVM to test the validity of adopting PCA. The reduced feature matrix was set to retain 99.9% of the information of the original feature matrix by PCA. The results of DF and DFPCA with PRBF SVM in the 10 CV test are listed in Table 3.
From Table 3, we can see that the performance of DFPCA is superior to that of DF. The total prediction accuracies and MCC values (defined in Section 3.5) of DFPCA are 15.79%~24.43% and 0.2705~0.4067 higher than those of DF, respectively. Although the sensitivities of DF are slightly higher (1.43%~1.59%) than those of DFPCA for the Hf, Vf, Pf and Zf feature sets, its positive predictive values are much lower (by 21%~29%) than those of DFPCA, which means that the DFPCA approach can largely reduce the false positives. These results show that the performance of DFPCA is superior to that of DF for predicting PPIs. It should be noted that feature vectors generated with either DF or DFPCA contain statistical information of amino acids in protein sequences, as well as information about amino acid position and physicochemical properties.

2.4. The Performance of the Predictive System Influenced by Randomly Sampling the Noninteracting Protein Subchain Pairs

To investigate the influence of randomly sampling the noninteracting protein subchain pairs, we randomly sampled 2510 noninteracting protein subchain pairs five times to construct five negative sets, and we used the DFPCA approach with the hydrophobicity property to predict PPIs in the 10 CV test. The results, shown in Table 4, indicate that the random sampling of noninteracting protein subchain pairs to construct the negative sets has little influence on the performance of PPI-PKSVM.

2.5. Comparison of Different Prediction Methods

To demonstrate the prediction performance of our method, we compared it with other methods [25] on a nonredundant dataset constructed by Pan and Shen [25], in which no protein pair has sequence identity higher than 25%. The number of positive links, i.e., interacting protein pairs, is 3899, composed of 2502 proteins, and the number of negative links, i.e., noninteracting protein pairs, is 4262, composed of 661 proteins. Among the prediction results of the different methods shown in Table 5, the performance of PPI-PKSVM stands out as the best. Compared with Shen's LDA-RF, the accuracies (defined in Section 3.5) of LEWP710101/QIAN880138 and Hf-DFPCA are 1.9% and 2% higher, and their MCCs are 0.038 and 0.039 higher, respectively. These results indicate that our method is a very promising computational strategy for predicting protein–protein interactions based on protein sequences.

3. Experimental Section

3.1. Dataset

To construct the PPI dataset, we first obtained the subchain pair names of PPIs from the PRISM (Protein Interactions by Structural Matching) server (http://prism.ccbb.ku.edu.tr/prism/), which is used to explore protein interfaces, and we downloaded the corresponding sequences of these protein subchain pairs from the Protein Data Bank (PDB) database (http://www.rcsb.org/pdb/). According to PRISM [43], a subchain pair is defined as an interacting subchain pair if the number of interface residues between the two protein subchains exceeds 10; otherwise, the subchain pair is defined as a noninteracting subchain pair. For example, suppose a protein complex has A, B, C and D subchains. If the interface residues of the AB, AC and BD subchain pairs total more than 10, while the interface residues of the AD, BC and CD subchain pairs total less than 10, then the AB, AC and BD subchain pairs are treated as interacting subchain pairs, while the AD, BC and CD subchain pairs are treated as noninteracting subchain pairs. All interacting protein subchain pairs were used to prepare the positive dataset, and all noninteracting subchain pairs were used to prepare the negative dataset. To reduce the redundancy and homology bias for methodology development, all protein subchain pairs were screened according to the following procedures [15]: (i) protein subchain pairs containing a protein subchain with fewer than 50 amino acids were removed; (ii) for subchain pairs having ≥40% sequence identity, only one subchain pair was kept. The ≥40% criterion may be understood as follows. Suppose protein subchain pair A is formed with protein subchains A1 and A2 and protein subchain pair B is formed with protein subchains B1 and B2. If the sequence identity between subchains A1 and B1 and between A2 and B2 is ≥40%, or the sequence identity between subchains A1 and B2 and between A2 and B1 is ≥40%, then the two protein subchain pairs are defined as having ≥40% sequence identity. We retained only those subchain pairs having <40% sequence identity. After these screening procedures, the resultant positive set comprised 2510 interacting protein subchain pairs, while the resultant negative set contained many more noninteracting protein subchain pairs. To avoid an imbalance between the positive and negative sets, we randomly sampled 2510 noninteracting protein subchain pairs to construct the negative set. Finally, a PPI dataset consisting of 2510 interacting and 2510 noninteracting protein subchain pairs was constructed.

3.2. Distance Frequency of Amino Acids Grouped with Their Physicochemical Properties

The frequency of the distance between two successive amino acids, or distance frequency, was used by Matsuda et al. [44] to predict subcellular location and can be described as follows. For a protein sequence P, the distance set dA between two successive occurrences of a letter (e.g., A) in protein sequence P can be represented as:
$d_A = \{d_1, d_2, \ldots, d_i, \ldots, d_{n_A - 1}\}, \quad i = 1, \ldots, n_A - 1$
where nA is the number of times the letter A appears in protein sequence P, di is the distance from the ith letter A to the (i + 1)th letter A, and di is counted from left to right. The distance frequency vector for the letter A can be defined by the following equation:
$f_A = [N_1, N_2, \ldots, N_j, \ldots, N_m]$
where Nj represents the number of times that the jth distance unit appears in the dA set. For example, considering the protein sequence AACDAMMADA, the distance sets of letters A, C, D and M are shown respectively as
$d_A = \{1, 3, 3, 2\}, \quad d_C = \{0\}, \quad d_D = \{5\}, \quad d_M = \{1\}$
As a result, the corresponding distance frequency vectors are DfA = [1,1,2,0,0], DfC = [0,0,0,0,0], DfD = [0,0,0,0,1] and DfM = [1,0,0,0,0], respectively. The distance frequency vectors of the other 16 basic amino acids are zero vectors, i.e., [0,0,0,0,0]. Thus, we can use the feature vector x to encode the protein sequence P:
$x = [Df_A, Df_C, Df_D, \ldots, Df_Y]$
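A minimal Python sketch of this plain distance frequency encoding is given below (it is illustrative only, not taken from the paper); run on the example sequence AACDAMMADA, it reproduces the DfA, DfC, DfD and DfM vectors above, with a single occurrence contributing the distance 0 as in dC.

```python
# Illustrative sketch of the distance frequency encoding (not the authors' code).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def distance_set(seq, letter):
    pos = [i for i, a in enumerate(seq) if a == letter]
    if len(pos) == 1:
        return [0]                                   # single occurrence, as in d_C = {0}
    return [pos[j + 1] - pos[j] for j in range(len(pos) - 1)]

def distance_frequency(seq, letter, m):
    freq = [0] * m                                   # N_j counts occurrences of distance j
    for d in distance_set(seq, letter):
        if 1 <= d <= m:
            freq[d - 1] += 1
    return freq

seq, m = "AACDAMMADA", 5
x = [distance_frequency(seq, a, m) for a in AMINO_ACIDS]   # feature vector [Df_A, ..., Df_Y]
print(distance_frequency(seq, "A", m))   # [1, 1, 2, 0, 0]
print(distance_frequency(seq, "D", m))   # [0, 0, 0, 0, 1]
print(distance_frequency(seq, "M", m))   # [1, 0, 0, 0, 0]
```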
In this work, we used the concept of distance frequency [44] and borrowed Dubchak's idea of representing the amino acid sequence with four physicochemical properties [45] to encode the protein subchain sequence. First, according to the amino acid values given by such physicochemical properties as hydrophobicity [46], normalized van der Waals volume [47], polarity [48] and polarizability [49], the 20 natural amino acids can be divided into three groups [45], as listed in Table 6. For hydrophobicity, normalized van der Waals volume, polarity and polarizability, the amino acids in Group 1, Group 2 and Group 3 are denoted H1, H2, H3; V1, V2, V3; P1, P2, P3; and Z1, Z2, Z3, respectively. Second, each protein subchain sequence was translated into the corresponding three-symbol sequence, depending on the particular physicochemical property, be it H1−3, V1−3, P1−3, or Z1−3. For example, suppose the original protein sequence is MKEKEFQSKP. Using the hydrophobicity symbols denoted above, this sequence is translated into H3H1H1H1H1H3H1H2H1H2, and the same procedure applies for V1–3, P1–3 and Z1–3. Third, the distance frequency of every symbol in the translated sequence was computed. In the above example, the H1, H2 and H3 distance frequencies would be computed for the sequence H3H1H1H1H1H3H1H2H1H2. Finally, every protein subchain sequence can be encoded by the following feature vector:
$x_H = [x_{H1}, x_{H2}, x_{H3}]^T, \quad x_V = [x_{V1}, x_{V2}, x_{V3}]^T, \quad x_P = [x_{P1}, x_{P2}, x_{P3}]^T, \quad x_Z = [x_{Z1}, x_{Z2}, x_{Z3}]^T$
Conveniently, the feature sets based on hydrophobicity, normalized van der Waals volume, polarity and polarizability are written as Hf, Vf, Pf and Zf, respectively. In general, the dimensions of the two feature vectors generated separately from two protein subchains are unequal. To solve this issue, we enlarge the feature vector of one protein subchain so that its dimension equals that of the other subchain. For example, consider the following protein subchain pair Pa–Pb:
  • Subchain Pa amino acid sequence: MKEKEFQSKP
  • Subchain Pb amino acid sequence: QNSLALHKVIMVGSG
If we adopt the property of hydrophobicity, then the Pa and Pb amino acid sequences can be translated into the following symbol sequences, respectively.
  • Subchain Pa: H3H1H1H1H1H3H1H2H1H2
  • Subchain Pb: H1H1H2H3H2H3H2H1H3H3H3H3H2H2H2
Then, the distance sets of subchains Pa and Pb are $d_{H1}^a = \{1,1,1,2,2\}$, $d_{H2}^a = \{2\}$, $d_{H3}^a = \{5\}$, $d_{H1}^b = \{1,6\}$, $d_{H2}^b = \{2,2,6,1,1\}$, $d_{H3}^b = \{2,3,1,1,1\}$, and the distance frequency vectors of subchains Pa and Pb are as follows:
$x_a = [x_{H1}^a, x_{H2}^a, x_{H3}^a], \quad x_b = [x_{H1}^b, x_{H2}^b, x_{H3}^b]$
where
$x_{H1}^a = [3,2,0,0,0,0], \quad x_{H2}^a = [0,1,0,0,0,0], \quad x_{H3}^a = [0,0,0,0,1,0], \quad x_{H1}^b = [1,0,0,0,0,1], \quad x_{H2}^b = [2,2,0,0,0,1], \quad x_{H3}^b = [3,1,1,0,0,0]$
Hereinafter, we use "DF" to denote this distance frequency method in which amino acids are grouped by their physicochemical properties.
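The short sketch below (again illustrative, not the authors' implementation) carries out the grouped DF encoding end-to-end for the hydrophobicity property: it translates each subchain into the H1/H2/H3 symbols of Table 6, computes the distance frequency of each symbol, and pads both subchains of a pair to a common vector length; on the MKEKEFQSKP/QNSLALHKVIMVGSG pair it reproduces the vectors of the worked example above.

```python
# Illustrative grouped-DF sketch for the hydrophobicity property (Table 6 groups).
HYDRO = {1: set("RKEDQN"), 2: set("GASTPHY"), 3: set("CVLIMFW")}

def translate(seq):
    # map each residue to its hydrophobicity group symbol (1, 2 or 3)
    return [g for a in seq for g, members in HYDRO.items() if a in members]

def distance_set(symbols, group):
    pos = [i for i, s in enumerate(symbols) if s == group]
    return [0] if len(pos) == 1 else [pos[j + 1] - pos[j] for j in range(len(pos) - 1)]

def frequency(dists, m):
    v = [0] * m
    for d in dists:
        if d >= 1:
            v[d - 1] += 1
    return v

def encode_pair(seq_a, seq_b):
    sym_a, sym_b = translate(seq_a), translate(seq_b)
    # pad both subchains to the largest distance observed in the pair
    m = max(d for s in (sym_a, sym_b) for g in (1, 2, 3) for d in distance_set(s, g))
    x_a = [frequency(distance_set(sym_a, g), m) for g in (1, 2, 3)]
    x_b = [frequency(distance_set(sym_b, g), m) for g in (1, 2, 3)]
    return x_a, x_b

x_a, x_b = encode_pair("MKEKEFQSKP", "QNSLALHKVIMVGSG")
print(x_a)   # [[3, 2, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0]]
print(x_b)   # [[1, 0, 0, 0, 0, 1], [2, 2, 0, 0, 0, 1], [3, 1, 1, 0, 0, 0]]
```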
When DF is used to represent a protein subchain pair, the feature vector is sparse and its dimension becomes large for longer subchain sequences. To further extract the features, Principal Component Analysis (PCA) was used to reduce the dimension; the amino acid distance frequency combined with PCA dimension reduction is hereafter termed DFPCA.
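A possible realization of this step, using scikit-learn's PCA as an assumed tool and interpreting "99.9% information" as 99.9% of the retained variance, is sketched below.

```python
# Sketch of the PCA reduction turning DF into DFPCA (scikit-learn is an assumption).
import numpy as np
from sklearn.decomposition import PCA

def dfpca(df_matrix, retained=0.999):
    """df_matrix: one row per subchain pair, columns are the sparse DF features."""
    pca = PCA(n_components=retained, svd_solver="full")   # keep 99.9% of the variance
    return pca.fit_transform(np.asarray(df_matrix, dtype=float)), pca

# toy stand-in data: 500 pairs with 600 sparse DF features
rng = np.random.default_rng(0)
X = rng.random((500, 600)) * (rng.random((500, 600)) > 0.9)
X_red, model = dfpca(X)
print(X.shape, "->", X_red.shape)
```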

3.3. Amino Acid Index Distribution (AAID)

Let I1, I2, …, Ii, …, I20 be the physicochemical values of the 20 natural amino acids αi (A, C, D, E, F, G, H, I, K, L, M, N, P, Q, R, S, T, V, W and Y), respectively, which can be accessed through the DBGET/LinkDB system by inputting an amino acid index (e.g., LEWP710101). An amino acid index is a set of 20 numerical values representing any of the different physicochemical and biochemical properties of amino acids. We can download these indices from the AAindex database (http://www.genome.jp/aaindex/).
For a given protein sequence P of length L, we replace each residue in the primary sequence by its amino acid physicochemical value, which results in a numerical sequence h1, h2, …, hl, …, hL, where hl ∈ {I1, I2, …, I20}.
Then, we can define the following feature wi of amino acid αi to represent the protein sequences:
$w_i = I_i \cdot f_i$
where fi is the frequency with which amino acid αi occurs in protein sequence P, Ii is the physicochemical value of amino acid αi, and the symbol · denotes the ordinary product; fi and Ii are mutually independent. Obviously, wi includes the physicochemical and statistical information of amino acid αi, but it loses the sequence-order information. Therefore, to let the feature vectors contain more sequence-order information, we introduce the 2-order center distance di, which takes into account the positions of amino acid αi and is defined as
$d_i = \sum_{j=1}^{N_{\alpha_i}} \left( \frac{k_{i,j} - \bar{k}_i}{L} I_i \right)^2$
where Nαi is the total number of occurrences of amino acid αi in the protein sequence P, ki,j (j = 1, 2, …, Nαi) is the jth position of amino acid αi in the sequence, and k̄i is the mean of the positions of amino acid αi.
Now the feature di contains the physicochemical, statistical and sequence-order information of amino acid αi, but it still does not distinguish protein pairs in some cases. For example, assume two protein pairs Pa–Pb and Pc–Pd. The sequences of proteins Pa, Pb, Pc and Pd are, respectively:
  • Pa: MPPRNKPNRR; Pb: MPNPRNNKPPGRKTR
  • Pc: MPRRNPPNRK; Pd: MGTRPPRNNKPNPRK
Obviously, Pa and Pc, as well as Pb and Pd, have the same wi and di. Hence, if we use only the feature vector formed from wi and di, we cannot distinguish between the Pa–Pb and Pc–Pd protein pairs. To solve this problem, the 3-order center distance ti of amino acid αi was introduced, which is defined as
$t_i = \sum_{j=1}^{N_{\alpha_i}} \left( \frac{k_{i,j} - \bar{k}_i}{L} I_i \right)^3$
Finally, we can use a combined feature vector to represent protein sequence P by serializing the above three features:
$x = [w_1, \ldots, w_i, \ldots, w_{20}, d_1, \ldots, d_i, \ldots, d_{20}, t_1, \ldots, t_i, \ldots, t_{20}]^T$
The protein pair Pa–Pb can now be represented by the following feature vectors:
$x_{ab} = [w_1^a, \ldots, w_{20}^a, d_1^a, \ldots, d_{20}^a, t_1^a, \ldots, t_{20}^a, w_1^b, \ldots, w_{20}^b, d_1^b, \ldots, d_{20}^b, t_1^b, \ldots, t_{20}^b]^T$
or
$x_{ba} = [w_1^b, \ldots, w_{20}^b, d_1^b, \ldots, d_{20}^b, t_1^b, \ldots, t_{20}^b, w_1^a, \ldots, w_{20}^a, d_1^a, \ldots, d_{20}^a, t_1^a, \ldots, t_{20}^a]^T$
Generally, vector xab is not equal to vector xba. As such, if a query protein pair Pa – Pb is represented by xab and xba respectively, the prediction results may be different. In this paper, we will choose the pairwise kernel function to solve this dilemma.
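To make the construction concrete, the sketch below (illustrative only; the index values are placeholders to be replaced with a real AAindex entry such as LEWP710101) computes wi, di and ti for one sequence and shows that the 3-order center distance indeed separates Pa from Pc, whose wi and di coincide.

```python
# Illustrative AAID sketch: w_i, 2-order (d_i) and 3-order (t_i) center distances.
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def aaid(seq, index_values):
    """index_values: dict mapping each amino acid to its index value I_i."""
    L = len(seq)
    w, d, t = [], [], []
    for aa in AA:
        I = index_values[aa]
        positions = [k + 1 for k, r in enumerate(seq) if r == aa]    # 1-based positions k_{i,j}
        w.append(I * len(positions) / L)                             # w_i = I_i * f_i
        if positions:
            mean_k = sum(positions) / len(positions)
            d.append(sum(((k - mean_k) / L * I) ** 2 for k in positions))
            t.append(sum(((k - mean_k) / L * I) ** 3 for k in positions))
        else:
            d.append(0.0)
            t.append(0.0)
    return w + d + t                                                 # 60-dimensional vector

index = {aa: 1.0 for aa in AA}        # placeholder; use a real AAindex entry in practice
va, vc = aaid("MPPRNKPNRR", index), aaid("MPRRNPPNRK", index)
print(np.allclose(va[:40], vc[:40]))  # True: w_i and d_i coincide for Pa and Pc
print(np.allclose(va[40:], vc[40:]))  # False: t_i distinguishes them
```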

3.4. Pairwise Kernel Function

Ben-Hur and Noble [13] first introduced a tensor product pairwise kernel function KI to measure the similarity between two protein pairs. The comparison between a pair (x1, x2) and another pair (x3, x4) for KI is done through the comparison of x1 with x3 and x2 with x4, on the one hand, and the comparison of x1 with x4 and x2 with x3, on the other hand, as
$K_I((x_1, x_2), (x_3, x_4)) = K(x_1, x_3) K(x_2, x_4) + K(x_1, x_4) K(x_2, x_3)$
However, the KI kernel does not consider differences between the elements of comparison pairs in the feature space; therefore, Vert [50] proposed the following metric learning pairwise kernel KII:
$K_{II}((x_1, x_2), (x_3, x_4)) = \left( K(x_1, x_3) + K(x_2, x_4) - K(x_1, x_4) - K(x_2, x_3) \right)^2$
In particular, two protein pairs might be very similar for the KII kernel, even if the patterns of the first protein pair are very different from those of the second protein pair, whereas the KI kernel could result in a large dissimilarity between the two protein pairs. It is easy to prove that the KII kernel satisfies both Mercer’s condition and the pairwise kernel function condition. In this paper, we use the KII kernel function to predict PPI.
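The following sketch (illustrative; the base kernel here is a plain RBF with a placeholder gamma) implements both pairwise kernels and checks the symmetry property that motivates their use: swapping the two proteins within a pair leaves the kernel value unchanged.

```python
# Illustrative implementation of the K_I (tensor product) and K_II (metric learning)
# pairwise kernels built on a base RBF kernel; gamma is a placeholder value.
import numpy as np

def rbf(x, y, gamma=0.05):
    return float(np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2)))

def k1(pair1, pair2, gamma=0.05):
    (x1, x2), (x3, x4) = pair1, pair2
    return rbf(x1, x3, gamma) * rbf(x2, x4, gamma) + rbf(x1, x4, gamma) * rbf(x2, x3, gamma)

def k2(pair1, pair2, gamma=0.05):
    (x1, x2), (x3, x4) = pair1, pair2
    return (rbf(x1, x3, gamma) + rbf(x2, x4, gamma)
            - rbf(x1, x4, gamma) - rbf(x2, x3, gamma)) ** 2

rng = np.random.default_rng(0)
xa, xb, xc, xd = rng.random((4, 60))
print(np.isclose(k1((xa, xb), (xc, xd)), k1((xb, xa), (xc, xd))))   # True
print(np.isclose(k2((xa, xb), (xc, xd)), k2((xb, xa), (xc, xd))))   # True
```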

3.5. Assessment of Prediction System

Sensitivity (Sn), specificity (Sp), positive predictive value (PPV), total prediction accuracy (ACC) and the Matthews correlation coefficient (MCC) [39–41] were employed to measure the performance of PPI-PKSVM.
$Sn = \frac{TP}{TP + FN}$
$Sp = \frac{TN}{TN + FP}$
$PPV = \frac{TP}{TP + FP}$
$ACC = \frac{TP + TN}{TP + TN + FP + FN}$
$MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FN)(TP + FP)(TN + FN)(TN + FP)}}$
where TP and TN are the number of correctly predicted subchain pairs of interacting proteins and noninteracting proteins, respectively, and FP and FN are the number of incorrectly predicted subchain pairs of noninteracting proteins and interacting proteins, respectively.
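For completeness, a small helper that evaluates these five measures from the four counts might look as follows (illustrative; the counts in the example call are made-up numbers).

```python
# Illustrative computation of Sn, Sp, PPV, ACC and MCC from TP, TN, FP, FN.
from math import sqrt

def assessment(tp, tn, fp, fn):
    return {
        "Sn":  tp / (tp + fn),                              # sensitivity
        "Sp":  tn / (tn + fp),                              # specificity
        "PPV": tp / (tp + fp),                              # positive predictive value
        "ACC": (tp + tn) / (tp + tn + fp + fn),             # total prediction accuracy
        "MCC": (tp * tn - fp * fn) /
               sqrt((tp + fn) * (tp + fp) * (tn + fn) * (tn + fp)),
    }

print(assessment(tp=2400, tn=2350, fp=160, fn=110))
```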

4. Conclusions

In this work, we introduced two feature extraction approaches to represent the protein sequence. One is the amino acid distance frequency with PCA reducing the dimension, termed DFPCA; the other is the amino acid index distribution based on the physicochemical values of amino acids, termed AAID. The pairwise kernel function SVM was employed as the classifier to predict PPIs. From the results, we can conclude that (i) the performance of DFPCA is better than that of DF; (ii) the prediction power of PRBF is superior to that of RBF, suggesting that designing a rational pairwise kernel function is important for predicting PPIs; and (iii) DFPCA and AAID with the pairwise kernel function SVM are effective and promising approaches for predicting PPIs and may complement existing methods. Since user-friendly and publicly accessible web servers represent the future direction in the development of predictors, we have provided a web server for PPI-PKSVM at http://159.226.118.31/PPI/index.html. In its present version, PPI-PKSVM can be used to evaluate one protein pair at a time; we will soon develop a newer online version able to predict large numbers of PPIs.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 61170134 and 60775012).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lucy, S.; Harpreet, K.S.; Gary, D.B.; Anton, J.E. Computational prediction of protein–protein interactions. Mol. Biotechnol 2008, 38, 1–17. [Google Scholar]
  2. Hu, L.; Huang, T.; Shi, X.; Lu, W.C.; Cai, Y.D.; Chou, K.C. Predicting functions of proteins in mouse based on weighted protein–protein interaction network and protein hybrid properties. PLoS One 2011, 6, e14556. [Google Scholar]
  3. Huang, T.; Chen, L.; Cai, Y.D.; Chou, K.C. Classification and analysis of regulatory pathways using graph property, biochemical and physicochemical property, and functional property. PLoS One 2011, 6, e25297. [Google Scholar]
  4. Jiang, Y.; Huang, T.; Chen, L.; Gao, Y.F.; Cai, Y.D.; Chou, K.C. Signal propagation in protein interaction network during colorectal cancer progression. BioMed Res. Int 2013, 2013. [Google Scholar] [CrossRef]
  5. Li, B.Q.; Huang, T.; Cai, Y.D.; Chou, K.C. Identification of colorectal cancer related genes with mRMR and shortest path in protein–protein interaction network. PLoS One 2013, 7, e33393. [Google Scholar]
  6. Shoemaker, B.A.; Panchenko, A.R. Deciphering protein–protein interactions. Part I Experimental techniques and databases. PLoS Comput. Biol 2007, 3, e42. [Google Scholar]
  7. Han, J.D.; Dupuy, D.; Bertin, N.; Cusick, M.E.; Vidal, M. Effect of sampling on topology predictions of protein–protein interaction networks. Nat. Biotechnol 2005, 23, 839–844. [Google Scholar]
  8. Marcotte, E.M.; Pellegrini, M.; Ng, H.L.; Rice, D.W.; Yeates, T.O.; Eisenberg, D. Detecting protein function and protein–protein interactions from genome sequences. Science 1999, 285, 751–753. [Google Scholar]
  9. Juan, D.; Pazos, F.; Valencia, A. High-confidence prediction of global interactomes based on genome-wide coevolutionary networks. Proc. Natl. Acad. Sci. USA 2008, 105, 934–939. [Google Scholar]
  10. Singhal, M.; Resat, H. A domain-based approach to predict protein–protein interactions. BMC Bioinforma 2007, 8, 199. [Google Scholar]
  11. Bock, J.R.; Gough, D.A. Predicting protein–protein interactions from primary structure. Bioinformatics 2001, 17, 455–460. [Google Scholar]
  12. Gomez, S.M.; Noble, A.S.; Rzhetsky, A. Learning to predict protein–protein interactions from protein sequences. Bioinformatics 2003, 19, 1875–1881. [Google Scholar]
  13. Ben-Hur, A.; Noble, W.S. Kernel methods for predicting protein–protein interactions. Bioinformatics 2005, 21, i38–i46. [Google Scholar]
  14. Martin, S.; Roe, D.; Faulon, J.L. Predicting protein–protein interactions using signature products. Bioinformatics 2005, 21, 218–226. [Google Scholar]
  15. Chou, K.C.; Cai, Y.D. Predicting protein–protein interactions from sequences in a hybridization space. J. Proteome Res 2006, 5, 316–322. [Google Scholar]
  16. Nanni, L.; Lumini, A. An ensemble of K-local hyperplanes for predicting protein–protein interactions. Bioinformatics 2006, 22, 1207–1210. [Google Scholar]
  17. Pitre, S.; Dehne, F.; Chan, A.; Cheetham, J.; Duong, A.; Emili, A.; Gebbia, M.; Greenblatt, J.; Jessulat, M.; Krogan, N.; et al. PIPE: A protein–protein interaction prediction engine based on the re-occurring short polypeptide sequences between known interacting protein pairs. BMC Bioinforma 2006, 7, 365. [Google Scholar]
  18. Li, X.L.; Tan, S.H.; Ng, S.K. Improving domain-based protein interaction prediction using biologically-significant negative dataset. Int. J. Data Min. Bioinforma 2006, 1, 138–149. [Google Scholar]
  19. Shen, J.W.; Zhang, J.; Luo, X.M.; Zhu, W.L.; Yu, K.Q.; Chen, K.X.; Li, Y.X.; Jiang, H.L. Predicting protein–protein interactions based only on sequences information. Proc. Natl. Acad. Sci. USA 2007, 104, 4337–4341. [Google Scholar]
  20. Guo, Y.Z.; Yu, L.Z.; Wen, Z.N.; Li, M.L. Using support vector machine combined with auto covariance to predict protein–protein interactions from protein sequences. Nucleic Acids Res 2008, 36, 3025–3030. [Google Scholar]
  21. Chen, X.W.; Han, B.; Fang, J.; Haasl, R.J. Large-scale protein–protein interaction prediction using novel kernel methods. Int. J. Data Min. Bioinforma 2008, 2, 145–156. [Google Scholar]
  22. Chen, W.; Zhang, S.W.; Cheng, Y.M.; Pan, Q. Prediction of protein–protein interaction types using the decision templates based on multiple classier fusion. Math. Comput. Model 2010, 52, 2075–2084. [Google Scholar]
  23. Guo, Y.; Li, M.; Pu, X.; Li, G.; Guang, X.; Xiong, W.; Li, J. PRED_PPI: A server for predicting protein–protein interactions based on sequence data with probability assignment. BMC Res. Notes 2010, 3, 145. [Google Scholar]
  24. Yu, C.Y.; Chou, L.C.; Chang, D.T.H. Predicting protein–protein interactions in unbalanced data using the primary structure of proteins. BMC Bioinforma 2010, 11, 167. [Google Scholar]
  25. Pan, X.Y.; Zhang, Y.N.; Shen, H.B. Large-scale prediction of human protein–protein interactions from amino acid sequence based on latent topic features. J. Proteome Res 2010, 9, 4992–5001. [Google Scholar]
  26. Liu, C.H.; Li, K.C.; Yuan, S. Human protein–protein interaction prediction by a novel sequence-based co-evolution method: Co-evolutionary divergence. Bioinformatics 2013, 29, 92–98. [Google Scholar]
  27. Hsu, C.; Lin, C.J. A comparison of methods for multi-class support vector machines. IEEE Trans. Neural Netw 2002, 3, 415–425. [Google Scholar]
  28. Chou, K.C.; Zhang, C.T. Review: Prediction of protein structural classes. Crit. Rev. Biochem. Mol. Biol 1995, 30, 275–349. [Google Scholar]
  29. Chou, K.C. Some remarks on protein attribute prediction and pseudo amino acid composition (50th Anniversary Year Review). J. Theor. Biol 2011, 273, 236–247. [Google Scholar]
  30. Esmaeili, M.; Mohabatkar, H.; Mohsenzadeh, S. Using the concept of Chou’s pseudo amino acid composition for risk type prediction of human papillomaviruses. J. Theor. Biol 2010, 263, 203–209. [Google Scholar]
  31. Hajisharifi, Z.; Piryaiee, M.; Mohammad Beigi, B.; Mandana, B.; Hassan, M. Predicting anticancer peptides with Chou’s pseudo amino acid composition and investigating their mutagenicity via Ames test. J. Theor. Biol 2014, 341, 34–40. [Google Scholar]
  32. Mohabatkar, H.; Mohammad Beigi, M.; Esmaeili, A. Prediction of GABA(A) receptor proteins using the concept of Chou’s pseudo-amino acid composition and support vector machine. J. Theor. Biol 2011, 281, 18–23. [Google Scholar]
  33. Xu, Y.; Ding, J.; Wu, L.Y.; Chou, K.C. iSNO-PseAAC: Predict cysteine S-nitrosylation sites in proteins by incorporating position specific amino acid propensity into pseudo amino acid composition. PLoS One 2013, 8, e55844. [Google Scholar]
  34. Xu, Y.; Shao, X.J.; Wu, L.Y.; Deng, N.Y.; Chou, K.C. iSNO-AAPair: Incorporating amino acid pairwise coupling into PseAAC for predicting cysteine S-nitrosylation sites in proteins. Peer J 2013, 1, e171. [Google Scholar]
  35. Chen, W.; Feng, P.M.; Lin, H.; Chou, K.C. iRSpot-PseDNC: Identify recombination spots with pseudo dinucleotide composition. Nucleic Acids Res 2013, 41, e69. [Google Scholar]
  36. Qiu, W.R.; Xiao, X.; Chou, K.C. iRSpot-TNCPseAAC: Identify recombination spots with trinucleotide composition and pseudo amino acid components. Int. J. Mol. Sci 2014, 15, 1746–1766. [Google Scholar]
  37. Min, J.L.; Xiao, X.; Chou, K.C. iEzy-Drug: A web server for identifying the interaction between enzymes and drugs in cellular networking. Biomed. Res. Int 2013, 2013, 701317. [Google Scholar]
  38. Zhang, S.W.; Liu, Y.F.; Yu, Y.; Zhang, T.H.; Fan, X.N. MSLoc-DT: A new method for predicting the protein subcellular location of multispecies based on decision templates. Anal. Biochem 2014, 449, 164–171. [Google Scholar]
  39. Chen, W.; Zhang, S.W.; Cheng, Y.M.; Pan, Q. Identification of protein-RNA interaction sites using the information of spatial adjacent residues. Proteome Sci 2011, 9, S16. [Google Scholar]
  40. Zhang, S.W.; Zhang, Y.L.; Yang, H.F.; Zhao, C.H.; Pan, Q. Using the concept of Chou’s pseudo amino acid composition to predict protein subcellular localization: An approach by incorporating evolutionary information and von Neumann entropies. Amino Acids 2008, 34, 565–572. [Google Scholar]
  41. Zhang, S.W.; Chen, W.; Yang, F.; Pan, Q. Using Chou’s pseudo amino acid composition to predict protein quaternary structure: A sequence-segmented PseAAC approach. Amino Acids 2008, 35, 591–598. [Google Scholar]
  42. Tomii, K.; Kanehisa, M. Analysis of amino acid indices and mutation matrices for sequence comparison and structure prediction of proteins. Protein Eng 1996, 9, 27–36. [Google Scholar]
  43. Ogmen, U.; Keskin, O.; Aytuna, A.S.; Nussinov, R.; Gürsoy, A. PRISM: Protein interactions by structural matching. Nucleic Acids Res 2005, 33, 331–336. [Google Scholar]
  44. Matsuda, S.; Vert, J.P.; Saigo, H.; Ueda, N.; Toh, H.; Akutsu, T. A novel representation of protein sequences for prediction of subcellular location using support vector machines. Protein Sci 2005, 14, 2804–2813. [Google Scholar]
  45. Dubchak, I.; Muchnik, I.; Mayor, C.; Dralyuk, I.; Kim, S.H. Recognition of a protein fold in the context of the SCOP classification. Proteins 1999, 35, 401–407. [Google Scholar]
  46. Chothia, C.; Finkelstein, A.V. The classification and origins of protein folding patterns. Annu. Rev. Biochem 1999, 59, 1007–1039. [Google Scholar]
  47. Fauchere, J.L.; Charton, M.; Kier, L.B.; Verloop, A.; Pliska, V. Amino acid side chain parameters for correlation studies in biology and pharmacology. Int. J. Peptide Protein Res 1998, 32, 269–278. [Google Scholar]
  48. Grantham, R. Amino acid difference formula to help explain protein evolution. Science 1974, 185, 862–864. [Google Scholar]
  49. Charton, M.; Charton, B.I. The structural dependence of amino acid hydrophobicity parameters. J. Theor. Biol 1982, 99, 629–644. [Google Scholar]
  50. Vert, J.P.; Qiu, J.; Noble, W.S. A new pairwise kernel for biological network inference with support vector machines. BMC Bioinforma 2007, 8, S8. [Google Scholar]
Table 1. Results of DFPCA and AAID with PRBF SVM in the 10 CV test.

Feature Set     Sn (%)          PPV (%)         ACC (%)         MCC
Hf              95.94 ± 1.92    91.98 ± 2.88    93.78 ± 1.44    0.8765
Vf              95.66 ± 2.75    92.52 ± 2.40    93.96 ± 1.86    0.8798
Pf              95.78 ± 2.23    92.07 ± 1.69    93.76 ± 1.93    0.8760
Zf              96.06 ± 1.24    91.71 ± 3.13    93.69 ± 1.86    0.8747
LEWP710101      95.86 ± 2.23    92.08 ± 4.32    93.80 ± 2.42    0.8768
QIAN880138      96.06 ± 2.83    92.27 ± 1.50    94.00 ± 1.22    0.8808
NADH010104      95.82 ± 2.98    92.04 ± 2.51    93.76 ± 1.66    0.8760
NAGK730103      96.06 ± 2.83    92.09 ± 4.02    93.90 ± 3.31    0.8789
AURR980116      95.94 ± 2.07    92.33 ± 1.42    93.98 ± 1.24    0.8804
Table 2. Results of RBF and PRBF with DFPCA in the 10 CV test.

Feature Set   Kernel Function   Sn (%)          PPV (%)         ACC (%)
Hf            RBF               89.96 ± 0.52    89.65 ± 2.17    89.88 ± 1.05
Hf            PRBF              95.94 ± 1.92    91.98 ± 2.88    93.78 ± 1.44
Vf            RBF               90.20 ± 1.31    89.33 ± 2.60    89.72 ± 1.72
Vf            PRBF              95.66 ± 2.75    92.52 ± 2.40    93.96 ± 1.86
Pf            RBF               89.32 ± 0.86    89.26 ± 2.91    89.28 ± 1.44
Pf            PRBF              95.78 ± 2.23    92.07 ± 1.69    93.76 ± 1.93
Zf            RBF               90.84 ± 1.85    88.79 ± 2.50    89.64 ± 1.18
Zf            PRBF              96.06 ± 1.24    91.71 ± 3.13    93.69 ± 1.86
Table 3. Results of DF and DFPCA with PRBF SVM in the 10 CV test.

Feature Set   Feature Extraction Approach   Sn (%)          PPV (%)          ACC (%)          MCC
Hf            DF                            97.37 ± 2.55    66.67 ± 27.8     74.34 ± 24.3     0.5485
Hf            DFPCA                         95.94 ± 1.92    91.98 ± 2.88     93.78 ± 1.44     0.8765
Vf            DF                            97.21 ± 2.39    71.40 ± 23.0     78.17 ± 27.1     0.6093
Vf            DFPCA                         95.66 ± 2.75    92.52 ± 2.40     93.96 ± 1.86     0.8798
Pf            DF                            97.13 ± 4.70    69.48 ± 25.5     77.23 ± 27.2     0.5937
Pf            DFPCA                         95.78 ± 2.23    92.07 ± 1.69     93.76 ± 1.93     0.8760
Zf            DF                            97.65 ± 4.82    62.29 ± 29.5     69.26 ± 23.6     0.4680
Zf            DFPCA                         96.06 ± 1.24    91.71 ± 3.13     93.69 ± 1.86     0.8747
Table 4. Effect of random sampling of the noninteracting protein subchain pairs on the performance of PPI-PKSVM with DFPCA and PRBF SVM in the 10 CV test.

Sampling Time   Sn (%)          PPV (%)         ACC (%)         MCC
1               95.38 ± 3.35    91.20 ± 3.37    93.09 ± 3.45    0.8627
2               95.42 ± 1.39    91.52 ± 3.24    93.29 ± 1.65    0.8665
3               95.46 ± 3.03    91.21 ± 1.63    93.13 ± 2.29    0.8635
4               95.46 ± 3.03    91.49 ± 1.70    93.29 ± 2.13    0.8666
5               95.94 ± 1.92    91.98 ± 2.88    93.78 ± 1.44    0.8765
Table 5. Performance comparison of different PPI methods using Shen's dataset a in the 10 CV test.

Method          Sn (%)         Sp (%)         ACC (%)        MCC
LEWP710101      97.3 ± 0.04    99.2 ± 0.04    98.3 ± 0.00    0.966 ± 0.0006
QIAN880138      97.3 ± 0.10    99.1 ± 0.10    98.3 ± 0.10    0.966 ± 0.002
NADH010104      97.2 ± 0.07    99.2 ± 0.04    98.3 ± 0.05    0.965 ± 0.0007
NAGK730103      97.2 ± 0.06    99.2 ± 0.04    98.2 ± 0.06    0.965 ± 0.0004
AURR980116      97.3 ± 0.04    99.1 ± 0.06    98.2 ± 0.06    0.965 ± 0.0006
Hf-DFPCA        97.6 ± 0.20    99.1 ± 0.10    98.4 ± 0.10    0.967 ± 0.002
Vf-DFPCA        97.5 ± 0.10    98.9 ± 1.00    98.3 ± 0.80    0.965 ± 0.007
Pf-DFPCA        96.9 ± 0.10    99.5 ± 0.60    98.2 ± 0.60    0.964 ± 0.004
Zf-DFPCA        97.9 ± 0.90    96.0 ± 0.20    96.9 ± 1.10    0.939 ± 0.002
LDA-RF b        94.2 ± 0.40    98.0 ± 0.30    96.4 ± 0.30    0.928 ± 0.006
LDA-RoF b       93.7 ± 0.50    97.6 ± 0.60    95.7 ± 0.40    0.918 ± 0.007
LDA-SVM b       89.7 ± 1.30    91.5 ± 1.10    90.7 ± 0.90    0.813 ± 0.018
AC-RF b         94.0 ± 0.60    96.6 ± 0.40    95.5 ± 0.30    0.914 ± 0.007
AC-RoF b        93.3 ± 0.70    97.1 ± 0.70    95.1 ± 0.60    0.910 ± 0.009
AC-SVM b        94.0 ± 0.60    84.9 ± 1.70    89.3 ± 0.80    0.792 ± 0.014
PseAAC-RF b     94.1 ± 0.90    96.9 ± 0.30    95.6 ± 0.40    0.912 ± 0.007
PseAAC-RoF b    93.6 ± 0.90    96.7 ± 0.40    95.3 ± 0.50    0.907 ± 0.009
PseAAC-SVM b    89.9 ± 0.70    92.0 ± 0.40    91.2 ± 0.40    0.821 ± 0.006

a Shen's dataset contains two subdatasets, C and D, which are available at http://www.csbio.sjtu.edu.cn/bioinf/LR_PPI/Data.htm;
b These results are taken from Table 4 of [25].
Table 6. Amino acid groups classified according to their physicochemical value.

Physicochemical Property             Group 1                  Group 2                   Group 3
Hydrophobicity                       H1: R,K,E,D,Q,N          H2: G,A,S,T,P,H,Y         H3: C,V,L,I,M,F,W
Normalized van der Waals volume      V1: G,A,S,C,T,P,D        V2: N,V,E,Q,I,L           V3: M,H,K,F,R,Y,W
Polarity                             P1: L,I,F,W,C,M,V,Y      P2: P,A,T,G,S             P3: H,Q,R,K,N,E,D
Polarizability                       Z1: G,A,S,D,T            Z2: C,P,N,V,E,Q,I,L       Z3: K,M,H,F,R,Y,W

Citation: Zhang, S.-W.; Hao, L.-Y.; Zhang, T.-H. Prediction of Protein–Protein Interaction with Pairwise Kernel Support Vector Machine. Int. J. Mol. Sci. 2014, 15, 3220-3233. https://doi.org/10.3390/ijms15023220
