Article

SXGBsite: Prediction of Protein–Ligand Binding Sites Using Sequence Information and Extreme Gradient Boosting

School of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China
* Author to whom correspondence should be addressed.
Genes 2019, 10(12), 965; https://doi.org/10.3390/genes10120965
Submission received: 5 September 2019 / Revised: 19 October 2019 / Accepted: 19 November 2019 / Published: 22 November 2019
(This article belongs to the Section Technologies and Resources for Genetics)

Abstract

The prediction of protein–ligand binding sites is important in drug discovery and drug design. Computational methods for predicting protein–ligand binding sites are inexpensive and fast compared with experimental methods. This paper proposes a new computational method, SXGBsite, which combines the synthetic minority over-sampling technique (SMOTE) with Extreme Gradient Boosting (XGBoost). SXGBsite uses the position-specific scoring matrix discrete cosine transform (PSSM-DCT) and predicted solvent accessibility (PSA) to extract features containing sequence information. A new balanced dataset was generated by SMOTE to improve classifier performance, and a prediction model was constructed using XGBoost. Parallel computing and regularization techniques enable fast, high-quality predictions and mitigate the overfitting caused by SMOTE. An evaluation on independent test sets for 12 different types of ligand binding sites showed that SXGBsite performs similarly to existing methods on eight of them, with a shorter computation time. SXGBsite may be applied as a complement to biological experiments.

1. Introduction

Accurate prediction of protein–ligand binding sites is important for understanding protein function and drug design [1,2,3,4]. The experiment-based three-dimensional (3D) structure recognition of protein–ligand complexes and binding sites is relatively expensive and time consuming [5,6]. Computational methods can predict binding sites rapidly and can be applied as a supplement to experimental methods. Structure-based methods, sequence-based methods, and hybrid methods are the commonly applied computation methods [7,8].
Structure-based methods are usually applied to predict ligand binding sites when the 3D protein structure is known [2,9,10,11]. We focus here on sequence-based methods that do not require 3D structure information, so only a few structure-based methods are listed, as these methods are updated rapidly. Pockets on the protein surface can be identified by computing geometric measures, as in LIGSITEcsc [2,12], CASTp [13,14,15,16], LigDig [17], and Fpocket [18,19]. LIGSITEcsc [2,12] identifies pockets by counting surface–solvent–surface events and clustering them. CASTp [13,14,15,16] locates and measures pockets on 3D protein structures and annotates functional information for specific residues. Unlike traditional protein-centric approaches, LigDig [17] is a ligand-centric approach that identifies pockets using information from the PDB [20], UniProt [21], PubChem [22], ChEBI [23], and KEGG [24]. Fpocket [18,19] is an open-source platform for pocket detection that is also used in structure-based virtual screening (SBVS). RF-Score-VS [25] improves the performance of SBVS and is available in the open-source ODDT [26,27]. FunFOLD [1] introduces cluster identification and residue selection to automatically predict ligand binding residues. CHED [28] constructs a model to predict metal-binding sites using geometric information and machine learning. Integrating sequence information into structure-based methods helps improve prediction performance [29,30,31]. ConCavity [29] combines sequence evolution information with structural information to recognize pockets. COACH [30] and HemeBIND [31] construct prediction models and identify ligand binding sites from sequence and structural features using machine learning. In general, structure-based and hybrid methods enable high-quality predictions when the 3D structures of protein–ligand complexes are known [8].
Sequence-based methods can predict protein–ligand binding sites with unknown 3D structures [5,32,33,34]. MetaDBSite [32] integrates six methods, including DISIS [35], DNABindR [36], BindN [37], BindN-rf [38], DP-Bind [39], and DBS-PRED [40], and produces better results than each of the methods alone. DNABR [5] introduces sequence characteristics based on the random forest method [41] to study the sequence characteristics that delineate the physicochemical properties of amino acids. Both SVMPred [33] and NsitePred [34] construct support vector machine (SVM) [42] prediction models using multiple features including position-specific scoring matrix (PSSM), predicted solvent accessibility (PSA), predicted secondary structure (PSS), and predicted dihedral angles. TargetS [7] considers the ligand-specific binding propensity feature and builds models using a scheme of under-sampling and ensemble SVMs. EC-RUS [8] selects position-specific scoring matrix discrete cosine transform (PSSM-DCT) and PSA as features, constructs prediction models using under-sampling and ensemble classifiers, and compares the prediction quality of weighted sparse representation based classifier (WSRC) [43] and SVM.
In the ensemble-classifier scheme, each machine learning model is built on a dataset generated by under-sampling, and a new model is built only after the previous model has finished training. In this paper, this process is called the serial method; it currently performs well among sequence-based methods but requires more computation time [8,44]. Here, we propose a new parallel method for predicting protein–ligand binding residues using the evolutionary conservation information of homologous proteins. The main information source used for prediction is the PSSM of the sequence. The prediction model of binding residues is constructed by the XGBoost machine learning method [45] combined with the synthetic minority over-sampling technique (SMOTE) [46], which reduces the computation time while maintaining prediction quality. We compared the prediction quality of different feature combinations of PSSM-DCT [8,47,48,49], PSSM discrete wavelet transform (PSSM-DWT) [49,50,51], and PSA [52], and the PSSM-DCT + PSA scheme was selected. To address the dataset imbalance problem, XGBoost with SMOTE was applied to construct the protein–ligand binding site prediction models, and the optimal parameters were determined by five-fold cross-validation and a grid search. The models were validated on 12 different types of protein–ligand binding site datasets. The SXGBsite workflow is shown in Figure 1.

2. Materials and Methods

2.1. Benchmark Datasets

The benchmark datasets, developed by Yu et al. [7], were constructed from the BioLiP database [53] and include training and independent test datasets for 12 different ligands: five nucleotides, five metal ions, DNA, and heme (Table 1). The source code and datasets are available at https://github.com/Lightness7/SXGBsite.

2.2. Feature Extraction

2.2.1. Position-Specific Scoring Matrix

The position-specific scoring matrix (PSSM) encodes the evolutionary information of a protein sequence. The PSSM of each sequence was obtained by running PSI-BLAST [54] against the non-redundant (nr) protein sequence database with three iterations and an E-value of 0.001. The PSSM is an L × 20 matrix, where the L rows correspond to the L amino acid residues of the protein sequence and the 20 columns give the probability of each residue mutating to each of the 20 native residues, as follows:
$$\mathrm{PSSM} = \begin{bmatrix} P_{1,1} & P_{1,2} & \cdots & P_{1,20} \\ P_{2,1} & P_{2,2} & \cdots & P_{2,20} \\ \vdots & \vdots & \ddots & \vdots \\ P_{L,1} & P_{L,2} & \cdots & P_{L,20} \end{bmatrix}$$
The PSSM feature of contiguous residues was extracted with a sliding window of size w. The window was centered on the target residue and contained (w − 1)/2 adjacent residues on each side, so the PSSM feature matrix had size w × 20; the residue sparse evolution image [8,48] is shown in Figure 2. The window size w = 17 was selected after testing different values of w, giving a PSSM feature of 17 × 20 = 340 dimensions.
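As an illustration of the windowing step, the following sketch (not the authors' released code; it assumes NumPy and a PSSM already loaded as an L × 20 array) pads the sequence ends with zero rows and stacks each 17 × 20 window into a 340-D per-residue feature vector.

```python
# Minimal sketch of per-residue PSSM window extraction (assumed implementation).
import numpy as np

def pssm_window_features(pssm, w=17):
    """pssm: (L, 20) array from PSI-BLAST; returns an (L, w*20) per-residue feature matrix."""
    L, n_cols = pssm.shape
    half = (w - 1) // 2
    # Pad (w - 1)/2 zero rows on both ends so every residue has a full window.
    padded = np.vstack([np.zeros((half, n_cols)), pssm, np.zeros((half, n_cols))])
    return np.array([padded[i:i + w].ravel() for i in range(L)])

# Example: a random 100-residue "PSSM" yields a (100, 340) feature matrix.
features = pssm_window_features(np.random.rand(100, 20))
print(features.shape)  # (100, 340)
```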

2.2.2. Discrete Cosine Transform

The Discrete Cosine Transform (DCT) [47] is widely applied for lossy compression of signals and images. In this study, we used the DCT to concentrate the information of the PSSM into a few coefficients. For a given M × N input matrix Mat, the DCT is defined as:
$$DCT(i,j) = a_i a_j \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} Mat(m,n) \cos\frac{\pi(2m+1)i}{2M} \times \cos\frac{\pi(2n+1)j}{2N}, \quad 0 \le i \le M-1, \; 0 \le j \le N-1,$$
where
$$a_i = \begin{cases} \sqrt{1/M}, & i = 0 \\ \sqrt{2/M}, & 1 \le i \le M-1 \end{cases} \qquad a_j = \begin{cases} \sqrt{1/N}, & j = 0 \\ \sqrt{2/N}, & 1 \le j \le N-1 \end{cases}$$
The compressed PSSM feature of the residue was obtained by using DCT on the PSSM feature matrix. Most of the information after PSSM-DCT was concentrated in the low-frequency part of the compressed PSSM. The first r rows of the compressed PSSM were reserved as the PSSM-DCT feature, and the dimensions of the PSSM-DCT feature were r × 20.
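The compression step can be sketched as follows, assuming SciPy's 2-D type-II DCT; the function name and the orthonormal scaling are illustrative choices, not taken from the paper.

```python
# Illustrative sketch: compressing one w x 20 PSSM window with a 2-D DCT and
# keeping the first r low-frequency rows (r = 9 gives a 9 x 20 = 180-D feature).
import numpy as np
from scipy.fft import dctn

def pssm_dct_feature(window, r=9):
    """window: (w, 20) PSSM sub-matrix; returns the r*20-D PSSM-DCT feature."""
    coeffs = dctn(window, type=2, norm='ortho')  # 2-D type-II DCT, orthonormal scaling
    return coeffs[:r, :].ravel()                 # low-frequency rows carry most information

feature = pssm_dct_feature(np.random.rand(17, 20))
print(feature.shape)  # (180,)
```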

2.2.3. Discrete Wavelet Transform

The Discrete Wavelet Transform (DWT) [49] decomposes a discrete sequence into high- and low-frequency coefficients. A four-level DWT [50] was applied to acquire the first five coefficients and the standard deviation, mean, maximum, and minimum values at the different scales, as shown in Figure 3. The PSSM-DWT feature of each residue was obtained from the PSSM feature via the four-level DWT and had 1040 dimensions.
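A rough sketch of this step is given below using PyWavelets (an assumed library choice); it computes a four-level DWT for each PSSM column and simple per-band statistics, but it does not reproduce the exact coefficient bookkeeping that yields the paper's 1040-D feature.

```python
# Illustrative sketch only: four-level DWT per PSSM column plus per-band
# summary statistics. The resulting dimensionality differs from the paper's 1040-D.
import numpy as np
import pywt

def pssm_dwt_feature(window, wavelet='haar', level=4):
    """window: (w, 20) PSSM sub-matrix; returns a per-column multi-scale feature vector."""
    parts = []
    for col in window.T:                                  # one 1-D signal per amino-acid column
        coeffs = pywt.wavedec(col, wavelet, level=level)  # [cA4, cD4, cD3, cD2, cD1]
        for c in coeffs:
            parts.extend([c.std(), c.mean(), c.max(), c.min()])
    return np.array(parts)

feature = pssm_dwt_feature(np.random.rand(17, 20))
print(feature.shape)  # 20 columns x 5 bands x 4 statistics = (400,)
```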

2.2.4. Predicted Solvent Accessibility

Solvent accessibility [52] reflects the spatial arrangement and packing of residues during protein folding and is an effective feature for protein–ligand binding site prediction [8,33,34]. We used Sann, which predicts the solvent accessibility of proteins by a nearest neighbor method [55], to obtain the PSA feature of each residue; the PSA feature has three dimensions.

2.3. SMOTE Over-Sampling

As a common method for tackling unbalanced data, SMOTE over-samples the minority class by synthesizing new samples, under-samples the majority class, and provides better classifier performance within the receiver operating characteristic (ROC) space [46,56]. SMOTE was used to generate a balanced sample set from the unbalanced sample set obtained after feature extraction. After a series of tests, the best results were obtained with a new sample set containing the same number of positive and negative samples: 19,000 each.
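A minimal sketch of this balancing step, assuming the imbalanced-learn library (not named in the paper): SMOTE synthesizes minority-class (binding) samples up to 19,000, and a random under-sampler reduces the majority class to the same size.

```python
# Illustrative sketch of building the 19,000/19,000 balanced training set.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

def balance_training_set(X, y, n_per_class=19000, seed=0):
    """X: (N, d) features; y: 0/1 labels (1 = binding residue)."""
    X_os, y_os = SMOTE(sampling_strategy={1: n_per_class},
                       random_state=seed).fit_resample(X, y)
    X_bal, y_bal = RandomUnderSampler(sampling_strategy={0: n_per_class},
                                      random_state=seed).fit_resample(X_os, y_os)
    return X_bal, y_bal

# Example with a synthetic unbalanced set (700 positives, 21,000 negatives, 183-D features).
X = np.random.rand(21700, 183)
y = np.array([1] * 700 + [0] * 21000)
X_bal, y_bal = balance_training_set(X, y)
print(np.bincount(y_bal))  # [19000 19000]
```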

2.4. Extreme Gradient Boosting Algorithm

The Extreme Gradient Boosting (XGBoost) algorithm [45], developed by Chen et al., improves on the Gradient Boosting algorithm [57] and is characterized by fast computation and high prediction accuracy. XGBoost is widely used by data scientists in many applications and has produced state-of-the-art results [58,59]. The training set obtained after feature extraction and SMOTE, with samples $x_i = (x_1, x_2, \ldots, x_m)$, $i = 1, 2, \ldots, n$, was input into the K additive functions of XGBoost to build the model. The prediction for the independent test set, $y_i \in \{0, 1\}$, $i = 1, 2, \ldots, s$ (where 0 represents non-binding residues and 1 represents binding residues), was output as follows:
$$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \qquad f_k \in \mathcal{F}$$
where $f_k$ is an independent tree function with leaf weights and $\mathcal{F}$ is the space of tree functions in the ensemble. XGBoost avoids overly large models by minimizing the following regularized objective:
$$\mathcal{L}(\phi) = \sum_{i} l(\hat{y}_i, y_i) + \sum_{k} \Omega(f_k)$$
where $l$ is a differentiable convex loss function that measures the difference between the prediction $\hat{y}_i$ and the target $y_i$, and $\Omega$ is a regularization term that penalizes model complexity; the ensemble is trained additively by greedily adding the $f_t$ that most improves the model. The regularization term avoids overfitting by penalizing the leaf weights, and the penalty function $\Omega$ is as follows:
$$\Omega(f) = \gamma T + \frac{1}{2}\lambda \lVert \omega \rVert^2$$
where $T$ is the number of leaves, $\omega$ denotes the leaf weights, and the regularization coefficients $\gamma$ and $\lambda$ are constants. Traditional gradient boosted decision trees (GBDT) use only the first-order information of the loss function, whereas XGBoost introduces a second-order Taylor expansion of the loss to optimize it rapidly [60]. The simplified objective at step $t$ is:
$$\tilde{\mathcal{L}}^{(t)} = \sum_{i=1}^{n} \left[ g_i f_t(x_i) + \frac{1}{2} h_i f_t^{2}(x_i) \right] + \Omega(f_t)$$
where $g_i = \partial_{\hat{y}^{(t-1)}} l(y_i, \hat{y}^{(t-1)})$ and $h_i = \partial^{2}_{\hat{y}^{(t-1)}} l(y_i, \hat{y}^{(t-1)})$ are the first- and second-order gradient statistics of the loss function, respectively. Defining $I_j = \{ i \mid q(x_i) = j \}$ as the set of samples assigned to leaf $j$, Equation (7) can be rewritten as:
$$\tilde{\mathcal{L}}^{(t)} = \sum_{i=1}^{n} \left[ g_i f_t(x_i) + \frac{1}{2} h_i f_t^{2}(x_i) \right] + \gamma T + \frac{1}{2}\lambda \sum_{j=1}^{T} \omega_j^{2} = \sum_{j=1}^{T} \left[ \Big( \sum_{i \in I_j} g_i \Big) \omega_j + \frac{1}{2} \Big( \sum_{i \in I_j} h_i + \lambda \Big) \omega_j^{2} \right] + \gamma T$$
The optimal weight $\omega_j^{*}$ of leaf $j$ and the corresponding optimal objective value are calculated by:
$$\omega_j^{*} = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}$$
$$\tilde{\mathcal{L}}^{(t)}(q) = -\frac{1}{2} \sum_{j=1}^{T} \frac{\big(\sum_{i \in I_j} g_i\big)^{2}}{\sum_{i \in I_j} h_i + \lambda} + \gamma T$$
The above equation is used to score candidate splits of a node. Supposing $I_L$ and $I_R$ are the sample sets of the left and right nodes after splitting a leaf with sample set $I = I_L \cup I_R$, the loss reduction after the split is expressed as:
$$\mathcal{L}_{\mathrm{split}} = \frac{1}{2} \left[ \frac{\big(\sum_{i \in I_L} g_i\big)^{2}}{\sum_{i \in I_L} h_i + \lambda} + \frac{\big(\sum_{i \in I_R} g_i\big)^{2}}{\sum_{i \in I_R} h_i + \lambda} - \frac{\big(\sum_{i \in I} g_i\big)^{2}}{\sum_{i \in I} h_i + \lambda} \right] - \gamma$$
In addition to the regularized objective, XGBoost uses shrinkage [57] and column (feature) subsampling to prevent overfitting.
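The training step can be sketched with the xgboost scikit-learn API as follows; the hyper-parameter values shown are assumptions for illustration, not the tuned values used in the paper, and the data arrays are synthetic stand-ins.

```python
# Illustrative sketch: fitting the regularized, parallel tree-boosting model on a
# SMOTE-balanced training set and scoring an independent test set.
import numpy as np
from xgboost import XGBClassifier

# Synthetic stand-ins (183-D = 180-D PSSM-DCT + 3-D PSA).
X_bal = np.random.rand(38000, 183)
y_bal = np.repeat([0, 1], 19000)
X_test = np.random.rand(5000, 183)

model = XGBClassifier(
    n_estimators=200,       # number of additive tree functions K (assumed value)
    max_depth=6,
    learning_rate=0.1,      # shrinkage
    subsample=0.8,          # row subsampling
    colsample_bytree=0.8,   # column (feature) subsampling
    reg_lambda=1.0,         # L2 penalty (lambda) on leaf weights
    gamma=0.0,              # per-leaf penalty (gamma)
    n_jobs=-1,              # parallel tree construction
)
model.fit(X_bal, y_bal)
proba = model.predict_proba(X_test)[:, 1]   # P(binding) for each test residue
pred = (proba >= 0.5).astype(int)           # default threshold; see Section 3.1
```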

3. Results

The classification performance was evaluated using the specificity (SP), sensitivity (SN), accuracy (ACC), and Matthews correlation coefficient (MCC). The overall prediction quality of a binary model was evaluated using the area under the receiver operating characteristic curve (AUC). The formulas for SP, SN, ACC, and MCC are as follows:
$$\mathrm{SP} = \frac{TN}{TN+FP}, \qquad \mathrm{SN} = \frac{TP}{TP+FN}, \qquad \mathrm{ACC} = \frac{TP+TN}{TP+FP+TN+FN}$$
$$\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP+FN)(TN+FP)(TP+FP)(TN+FN)}}$$
where TP, FP, TN, and FN represent true positive, false positive, true negative, and false negative, respectively.
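For reference, a small sketch computing these measures from predicted probabilities (scikit-learn is assumed only for the AUC):

```python
# Illustrative sketch: SP, SN, ACC, MCC from the confusion matrix, plus AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_true, proba, threshold=0.5):
    pred = (proba >= threshold).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    fp = np.sum((pred == 1) & (y_true == 0))
    tn = np.sum((pred == 0) & (y_true == 0))
    fn = np.sum((pred == 0) & (y_true == 1))
    sp = tn / (tn + fp)
    sn = tp / (tp + fn)
    acc = (tp + tn) / (tp + fp + tn + fn)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float(tp + fn) * (tn + fp) * (tp + fp) * (tn + fn))
    auc = roc_auc_score(y_true, proba)   # threshold-independent
    return {'SP': sp, 'SN': sn, 'ACC': acc, 'MCC': mcc, 'AUC': auc}

# Example with random labels and scores.
print(evaluate(np.random.randint(0, 2, 1000), np.random.rand(1000)))
```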

3.1. Parameter Selection

ACC is insufficient for performance evaluation in unbalanced learning [7,8], MCC is suitable for quality assessment in sequence-based predictions [3], and AUC is commonly used to assess the overall prediction quality of a model. The MCC changes with the classification threshold, whereas the AUC does not. We therefore evaluated prediction performance using both MCC and AUC, and the probability threshold was selected by maximizing the MCC. The MCC was used to select the number of rows r of the PSSM-DCT matrix retained as the feature on the guanosine triphosphate (GTP) training and independent test sets. The optimal MCC was obtained at r = 9, as shown in Figure 4, giving a PSSM-DCT feature of 9 × 20 = 180 dimensions.
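The threshold selection can be sketched as a simple grid scan over the predicted probabilities, maximizing the MCC on held-out data; `y_val` and `X_val` below are hypothetical validation arrays, and scikit-learn is an assumed dependency.

```python
# Illustrative sketch: pick the probability threshold that maximizes the MCC.
import numpy as np
from sklearn.metrics import matthews_corrcoef

def best_mcc_threshold(y_true, proba, grid=np.linspace(0.01, 0.99, 99)):
    scores = [matthews_corrcoef(y_true, (proba >= t).astype(int)) for t in grid]
    best = int(np.argmax(scores))
    return grid[best], scores[best]

# threshold, mcc = best_mcc_threshold(y_val, model.predict_proba(X_val)[:, 1])
```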
The size of the positive and negative sample sets after SMOTE is usually an integer multiple of the positive sample size in the original dataset, and the prediction quality may be affected by the amplification ratio of the positive sample sets. In this study, a fixed-size positive and negative sample set was generated by SMOTE to improve the prediction quality, and the optimal sample number was selected according to the value of MCC on the GTP training and independent test sets. The best value of MCC was obtained when the number of positive and negative samples was 19,000, as shown in Figure 5.
The parameters of XGBoost were adjusted with five-fold cross-validation and a grid search method on the GTP training set.
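A sketch of this tuning step with scikit-learn's GridSearchCV is shown below; the parameter grid is illustrative, not the grid used in the paper.

```python
# Illustrative sketch: grid search with five-fold cross-validation for XGBoost.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

param_grid = {                        # assumed, illustrative values
    'n_estimators': [100, 200, 500],
    'max_depth': [4, 6, 8],
    'learning_rate': [0.05, 0.1, 0.2],
}
search = GridSearchCV(XGBClassifier(n_jobs=-1), param_grid,
                      scoring='roc_auc', cv=5)
# search.fit(X_bal, y_bal)            # balanced GTP training set from the SMOTE step
# print(search.best_params_)
```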

3.2. Method Selection

Different feature combinations of PSSM, PSSM-DCT, PSSM-DWT, and PSA were evaluated on the GTP training and independent test sets. PSSM-DCT + PSA produced the best MCC and AUC values (Table 2), and the receiver operating characteristic (ROC) curves of the different feature combinations are shown in Figure 6. As shown in Table 2, PSSM performed better in terms of AUC than PSSM-DCT and PSSM-DWT, and adding PSA (3-D) improved the AUC of PSSM (340-D), PSSM-DCT (180-D), and PSSM-DWT (1040-D) by 0.014, 0.022, and 0.009, respectively. The relationship between the AUC gain and the feature dimensionality indicates that PSA improved the prediction quality more for the lower-dimensional features (PSSM and PSSM-DCT). PSSM + PSA and PSSM-DCT + PSA performed almost identically in terms of AUC; when comparing feature combinations, we aimed to improve prediction quality further by over-sampling. The prediction quality of PSSM and PSSM + PSA depended more on threshold moving, as shown by the difference in MCC between the default threshold (0.500) and the maximum-MCC threshold.
Three sampling schemes were applied to the GTP training set to obtain three different training sets: the entire GTP training set, the training set after random under-sampling (RUS), and the training set after SMOTE over-sampling. The prediction qualities of the models built from the three training sets on the GTP independent test set are shown in Table 3, and the ROC curves of the different sampling and classification algorithms are shown in Figure 7. SMOTE + XGBoost achieved the best prediction quality, performing better than SMOTE + SVM.

3.3. Results of Training Sets

The performance of SXGBsite was evaluated using five-fold cross-validation on the training sets. The results at the default threshold of 0.500 and at the threshold that maximizes the MCC are listed in Table 4. The five-fold cross-validation results are broadly consistent with the maximum-MCC-threshold results of TargetS and EC-RUS. Setting the effect of the threshold aside, comparing the default-threshold (0.500) results of SXGBsite and EC-RUS, which use the same features, reveals the different characteristics of the two schemes for the class imbalance problem. The RUS + ensemble classifiers scheme was more sensitive to positive samples but lost information about the negative samples. The SMOTE + XGBoost scheme reduced this information loss, but because most positive samples in its training set were synthesized, its sensitivity to positive samples was lower.

3.4. Comparison with Existing Methods

On the independent test sets of the five nucleotides, SXGBsite is compared with TargetS, SVMPred, NsitePred, EC-RUS, and the alignment-based baseline predictor in Table 5. The results of TargetS, SVMPred, NsitePred, and EC-RUS are reported at the threshold that maximizes the MCC. On the ATP, ADP, AMP, GDP, and GTP independent test sets, the best AUC values among the existing methods belong to TargetS and the best MCC values to EC-RUS. The AUC differences between SXGBsite and TargetS are 0.018 (0.880 vs. 0.898), 0.011 (0.885 vs. 0.896), 0.007 (0.823 vs. 0.830), 0.002 (0.894 vs. 0.896), and 0.015 (0.870 vs. 0.855, with SXGBsite higher), respectively, and the MCC differences between SXGBsite and EC-RUS are 0.043 (0.463 vs. 0.506), 0.023 (0.488 vs. 0.511), 0.065 (0.328 vs. 0.393), 0.003 (0.576 vs. 0.579), and 0.009 (0.650 vs. 0.641, with SXGBsite higher), respectively. The gap between SXGBsite and the best prediction quality is thus small for the AUC and somewhat larger for the MCC.
On the independent test sets of the five metal ions, SXGBsite is compared with TargetS, FunFOLD, CHED, EC-RUS, and the alignment-based baseline predictor in Table 6. The results of TargetS, FunFOLD, CHED, and EC-RUS are reported at the threshold that maximizes the MCC. On the Ca2+, Mg2+, Mn2+, Fe3+, and Zn2+ independent test sets, the AUC differences between SXGBsite and the best method are 0.021 (0.758 vs. 0.779), 0.001 (0.779 vs. 0.780), 0.032 (0.856 vs. 0.888), 0.054 (0.891 vs. 0.945), and 0.052 (0.906 vs. 0.958), respectively, and the MCC differences are 0.046 (0.197 vs. 0.243), 0.026 (0.291 vs. 0.317), 0.067 (0.382 vs. 0.449), 0.094 (0.396 vs. 0.490), and 0.137 (0.390 vs. 0.527), respectively. SXGBsite showed good prediction performance on the Mg2+ independent test set; the reasons for the unsatisfactory performance on the other metal ion test sets may be as follows: (1) TargetS uses a ligand-specific binding propensity feature to improve prediction quality, and the features used in this study did not perform well for predicting metal ion binding residues; and (2) metal ions are smaller than nucleotides, so there are fewer binding residues (positive samples), and this lack of positive samples affected the prediction quality of the model.
Compared with TargetS, MetaDBSite, DNABR, EC-RUS, and the alignment-based baseline predictor on the DNA independent test set (Table 7), SXGBsite achieved an MCC value lower than those of TargetS and EC-RUS and a lower AUC value than TargetS.
Compared with TargetS, HemeBind, EC-RUS, and the alignment-based baseline predictor on the Heme independent test set (Table 8), SXGBsite achieved lower MCC and AUC values than EC-RUS.
The prediction performance of SXGBsite was similar to those of the two best methods, TargetS and EC-RUS, on the independent test sets of the five nucleotides, Mg2+, DNA, and Heme. Both TargetS and EC-RUS are serial combinations of under-sampling and ensemble classifiers, which require long computation times. SXGBsite combines over-sampling with a single XGBoost classifier to quickly build high-quality prediction models.

3.5. Running Time Comparison

The running times of SXGBsite, EC-RUS (SVM), and EC-RUS (WSRC) on the independent test sets are compared in Table 9; the benchmark in this study is the EC-RUS (SVM) running time. EC-RUS is a sequence-based method proposed by Ding et al. with excellent prediction quality. Ding et al. selected 19 sub-classifiers in the ensemble classifier, compared the results of ensemble SVMs and ensemble WSRCs, and concluded that ensemble WSRCs are more time-consuming than ensemble SVMs. Both SXGBsite and EC-RUS use the PSSM-DCT + PSA feature, and their prediction models were built by SMOTE + XGBoost and RUS + ensemble classifiers, respectively. Because the features are the same, Table 9 also compares the running times of the two schemes for the class imbalance problem, SMOTE + XGBoost and RUS + ensemble classifiers.

3.6. Comparison with Existing Methods on the PDNA-41 Independent Test Set

In contrast to the protein–DNA binding site dataset used above, PDNA-543 (9549 binding residues and 134,995 non-binding residues) and PDNA-41 (734 binding residues and 14,021 non-binding residues) are datasets constructed by Hu et al. [61]. The SXGBsite prediction model was trained on the PDNA-543 training set and evaluated on the PDNA-41 independent test set; the comparison of SXGBsite with BindN [37], ProteDNA [62], BindN+ [63], MetaDBSite [32], DP-Bind [39], DNABind [64], TargetDNA [61], and EC-RUS(DNA) [44] is provided in Table 10. SXGBsite achieved the best MCC (0.272) under the Sen Spec setting and ranked behind EC-RUS(DNA) and TargetDNA in MCC at an FPR of 5% (FPR = 1 − SP). The highest MCC of SXGBsite (0.279) was achieved at an FPR of 10%.

3.7. Case Study

The prediction results of SXGBsite are shown in the 3D models in Figure 8; the protein–ligand complexes 2Y4K-A and 2Y6P-A belong to the independent test sets of GDP and Mg2+, respectively.

4. Discussion

Many excellent computational methods are available for protein–ligand binding site prediction; however, prediction efficiency can still be improved [8]. Because experimentally determined protein–ligand binding site data contain many fewer binding residues than non-binding residues, we selected the unbalanced datasets of 12 ligand types constructed by Yu et al. as the benchmark datasets. The adverse effects of unbalanced data on prediction are usually mitigated by widely applied over- or under-sampling methods, and ensemble classifiers are often used together with under-sampling to overcome the resulting loss of information. Both TargetS and EC-RUS performed well on the independent test sets built by Yu et al. by applying the scheme of under-sampling and ensemble classifiers. Although the information lost through repeated under-sampling can be reduced by ensemble classifiers, serial combinations of multiple machine learning algorithms and high-dimensional features increase the computation time.
SXGBsite uses the PSSM-DCT + PSA features and XGBoost with SMOTE to build prediction models; the Extreme Gradient Boosting algorithm developed by Chen et al. [45] was applied to handle the overfitting and enlarged sample sets caused by over-sampling. XGBoost's regularization technique mitigates the overfitting problem, and its parallel computing allows prediction models to be constructed quickly from large sample sets; these properties constitute the basis of SXGBsite. Threshold moving was used in this study to obtain the best MCC for comparison with other existing methods. Because the combined use of threshold moving and sampling methods complicates the interpretation of the results, the threshold-independent AUC was used to better evaluate the difference in prediction quality between SMOTE + XGBoost and RUS + ensemble classifiers. On the independent test sets of the five nucleotides, Mg2+, DNA, and Heme, the difference between the AUC of SXGBsite and the best AUC was within 0.020. Considering the decrease in running time, we consider this difference in AUC acceptable. On the independent test sets of the 12 ligands, the proposed method produced high prediction quality with a shorter computation time using two features and a single classifier, and produced results similar to those of the best-performing methods, TargetS and EC-RUS, on 8 of the 12 independent test sets.

5. Conclusions

This paper proposes a new computational method, SXGBsite, which predicts protein–ligand binding sites from sequence information using features extracted by PSSM-DCT + PSA and a model constructed by XGBoost with SMOTE. On the independent test sets of 12 different ligands, SXGBsite performed similarly to the best existing methods with less computation time and could serve as a cost-reducing complement to biological experiments. The features used did not perform well on the metal ion datasets, and adding features with better predictive power for metal ions is the next step of this study.

Author Contributions

Conceptualization, Z.Z., and Y.X.; methodology, Z.Z.; software, Z.Z.; validation, Z.Z., and Y.Z.; writing—original draft preparation, Z.Z.; and writing—review and editing, Z.Z., Y.X., and Y.Z.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Roche, D.B.; Tetchner, S.J.; McGuffin, L.J. FunFOLD: An improved automated method for the prediction of ligand binding residues using 3D models of proteins. BMC Bioinform. 2011, 12, 160. [Google Scholar] [CrossRef]
  2. Hendlich, M.; Rippmann, F.; Barnickel, G. LIGSITE: Automatic and efficient detection of potential small molecule-binding sites in proteins. J. Mol. Graph. Model. 1997, 15, 359–363. [Google Scholar] [CrossRef]
  3. Roche, D.B.; Brackenridge, D.A.; McGuffin, L.J. Proteins and their interacting partners: An introduction to protein–ligand binding site prediction methods. Int. J. Mol. Sci. 2015, 16, 29829–29842. [Google Scholar] [CrossRef] [PubMed]
  4. Rose, P.W.; Prlić, A.; Bi, C.; Bluhm, W.F.; Christie, C.H.; Dutta, S.; Green, R.K.; Goodsell, D.S.; Westbrook, J.D.; Woo, J.; et al. The RCSB Protein Data Bank: Views of structural biology for basic and applied research and education. Nucleic Acids Res. 2015, 43, D345–D356. [Google Scholar] [CrossRef] [PubMed]
  5. Ma, X.; Guo, J.; Liu, H.D.; Xie, J.M.; Sun, X. Sequence-based prediction of DNA-binding residues in proteins with conservation and correlation information. IEEE/ACM Trans. Comput. Biol. Bioinform. 2012, 9, 1766–1775. [Google Scholar] [CrossRef]
  6. Ding, Y.; Tang, J.; Guo, F. Identification of protein–protein interactions via a novel matrix-based sequence representation model with amino acid contact information. Int. J. Mol. Sci. 2016, 17, 1623. [Google Scholar] [CrossRef]
  7. Yu, D.J.; Hu, J.; Yang, J.; Shen, H.B.; Tang, J.; Yang, J.Y. Designing template-free predictor for targeting protein-ligand binding sites with classifier ensemble and spatial clustering. IEEE/ACM Trans. Comput. Biol. Bioinform. 2013, 10, 994–1008. [Google Scholar] [CrossRef]
  8. Ding, Y.; Tang, J.; Guo, F. Identification of protein–ligand binding sites by sequence information and ensemble classifier. J. Chem. Inf. Model. 2017, 57, 3149–3161. [Google Scholar] [CrossRef]
  9. Levitt, D.G.; Banaszak, L.J. POCKET: A computer graphies method for identifying and displaying protein cavities and their surrounding amino acids. J. Mol. Graph. 1992, 10, 229–234. [Google Scholar] [CrossRef]
  10. Laskowski, R.A. SURFNET: A program for visualizing molecular surfaces, cavities, and intermolecular interactions. J. Mol. Graph. Model. 1995, 13, 323–330. [Google Scholar] [CrossRef]
  11. Xie, Z.R.; Hwang, M.J. Methods for Predicting Protein–Ligand Binding Sites. In Molecular Modeling of Proteins; Kukol, A., Ed.; Springer: New York, NY, USA, 2015; Volume 1215, pp. 383–398. [Google Scholar]
  12. Huang, B.; Schroeder, M. LIGSITEcsc: Predicting ligand binding sites using the Connolly surface and degree of conservation. BMC Struct. Biol. 2006, 6, 19. [Google Scholar] [CrossRef] [PubMed]
  13. Liang, J.; Woodward, C.; Edelsbrunner, H. Anatomy of protein pockets and cavities: Measurement of binding site geometry and implications for ligand design. Protein Sci. 1998, 7, 1884–1897. [Google Scholar] [CrossRef] [PubMed]
  14. Binkowski, T.A.; Naghibzadeh, S.; Liang, J. CASTp: Computed atlas of surface topography of proteins. Nucleic Acids Res. 2003, 31, 3352–3355. [Google Scholar] [CrossRef] [PubMed]
  15. Dundas, J.; Ouyang, Z.; Tseng, J.; Binkowski, A.; Turpaz, Y.; Liang, J. CASTp: Computed atlas of surface topography of proteins with structural and topographical mapping of functionally annotated residues. Nucleic Acids Res. 2006, 34, W116–W118. [Google Scholar] [CrossRef]
  16. Tian, W.; Chen, C.; Lei, X.; Zhao, J.; Liang, J. CASTp 3.0: Computed atlas of surface topography of proteins. Nucleic Acids Res. 2018, 46, W363–W367. [Google Scholar] [CrossRef]
  17. Fuller, J.C.; Martinez, M.; Henrich, S.; Stank, A.; Richter, S.; Wade, R.C. LigDig: A web server for querying ligand–protein interactions. Bioinformatics 2014, 31, 1147–1149. [Google Scholar] [CrossRef]
  18. Le Guilloux, V.; Schmidtke, P.; Tuffery, P. Fpocket: An open source platform for ligand pocket detection. BMC Bioinform. 2009, 10, 168. [Google Scholar] [CrossRef]
  19. Schmidtke, P.; Le Guilloux, V.; Maupetit, J.; Tuffery, P. Fpocket: Online tools for protein ensemble pocket detection and tracking. Nucleic Acids Res. 2010, 38, 582–589. [Google Scholar] [CrossRef]
  20. Berman, H.M.; Westbrook, J.; Feng, Z.; Gilliland, G.; Bhat, T.N.; Weissig, H.; Shindyalov, I.N.; Bourne, P.E. The protein data bank. Nucleic Acids Res. 2000, 28, 235–242. [Google Scholar] [CrossRef]
  21. UniProt Consortium. UniProt: A hub for protein information. Nucleic Acids Res. 2015, 43, 204–212. [Google Scholar] [CrossRef]
  22. Bolton, E.E.; Wang, Y.; Thiessen, P.A.; Bryant, S.H. PubChem: Integrated Platform of Small Molecules and Biological Activities. In Annual Reports in Computational Chemistry; Wheeler, R.A., Spellmeyer, D.C., Eds.; Elsevier: Amsterdam, The Netherlands, 2008; Volume 4, pp. 217–241. [Google Scholar]
  23. Hastings, J.; de Matos, P.; Dekker, A.; Ennis, M.; Harsha, B.; Kale, N.; Muthukrishnan, V.; Owen, G.; Turner, S.; Williams, M.; et al. The ChEBi reference database and ontology for biologically relevant chemistry: Enhancements for 2013. Nucleic Acids Res. 2013, 41, 456–463. [Google Scholar] [CrossRef] [PubMed]
  24. Okuda, S.; Yamada, T.; Hamajima, M.; Itoh, M.; Katayama, T.; Bork, P.; Goto, S.; Kanehisa, M. KEGG Atlas mapping for global analysis of metabolic pathways. Nucleic Acids Res. 2008, 36, 423–426. [Google Scholar] [CrossRef] [PubMed]
  25. Wójcikowski, M.; Ballester, P.J.; Siedlecki, P. Performance of machine-learning scoring functions in structure-based virtual screening. Sci. Rep. 2017, 7, 46710. [Google Scholar] [CrossRef]
  26. Wójcikowski, M.; Zielenkiewicz, P.; Siedlecki, P. Open Drug Discovery Toolkit (ODDT): A new open-source player in the drug discovery field. J. Cheminform. 2015, 7, 26. [Google Scholar] [CrossRef] [PubMed]
  27. Wójcikowski, M.; Zielenkiewicz, P.; Siedlecki, P. DiSCuS: An open platform for (not only) virtual screening results management. J. Chem. Inf. Model 2014, 54, 347–354. [Google Scholar] [CrossRef]
  28. Babor, M.; Gerzon, S.; Raveh, B.; Sobolev, V.; Edelman, M. Prediction of transition metal-binding sites from apo protein structures. Proteins 2008, 70, 208–217. [Google Scholar] [CrossRef]
  29. Capra, J.A.; Laskowski, R.A.; Thornton, J.M.; Singh, M.; Funkhouser, T.A. Predicting protein ligand binding sites by combining evolutionary sequence conservation and 3D structure. PLoS Comput. Biol. 2009, 5, e1000585. [Google Scholar] [CrossRef]
  30. Yang, J.; Roy, A.; Zhang, Y. Protein–ligand binding site recognition using complementary binding-specific substructure comparison and sequence profile alignment. Bioinformatics 2013, 29, 2588–2595. [Google Scholar] [CrossRef]
  31. Liu, R.; Hu, J. HemeBIND: A novel method for heme binding residue prediction by combining structural and sequence information. BMC Bioinform. 2011, 12, 207. [Google Scholar] [CrossRef]
  32. Si, J.; Zhang, Z.; Lin, B.; Schroeder, M.; Huang, B. MetaDBSite: A meta approach to improve protein DNA-binding sites prediction. BMC Syst. Biol. 2011, 5, S7. [Google Scholar] [CrossRef]
  33. Chen, K.; Mizianty, M.J.; Kurgan, L. ATPsite: Sequence-based prediction of ATP-binding residues. Proteome Sci. 2011, 9, S4. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Chen, K.; Mizianty, M.J.; Kurgan, L. Prediction and analysis of nucleotide-binding residues using sequence and sequence-derived structural descriptors. Bioinformatics 2011, 28, 331–341. [Google Scholar] [CrossRef] [PubMed]
  35. Ofran, Y.; Mysore, V.; Rost, B. Prediction of DNA-binding residues from sequence. Bioinformatics 2007, 23, i347–i353. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Yan, C.; Terribilini, M.; Wu, F.; Jernigan, R.L.; Dobbs, D.; Honavar, V. Predicting DNA-binding sites of proteins from amino acid sequence. BMC Bioinform. 2006, 7, 262. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Wang, L.; Brown, S.J. BindN: A web-based tool for efficient prediction of DNA and RNA binding sites in amino acid sequences. Nucleic Acids Res. 2006, 34, W243–W248. [Google Scholar] [CrossRef] [Green Version]
  38. Wang, L.; Yang, M.Q.; Yang, J.Y. Prediction of DNA-binding residues from protein sequence information using random forests. BMC Genom. 2009, 10, S1. [Google Scholar] [CrossRef]
  39. Hwang, S.; Gou, Z.; Kuznetsov, I.B. DP-Bind: A web server for sequence-based prediction of DNA-binding residues in DNA-binding proteins. Bioinformatics 2007, 23, 634–636. [Google Scholar] [CrossRef] [Green Version]
  40. Ahmad, S.; Gromiha, M.M.; Sarai, A. Analysis and prediction of DNA-binding proteins and their binding residues based on composition, sequence and structural information. Bioinformatics 2004, 20, 477–486. [Google Scholar] [CrossRef] [Green Version]
  41. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  42. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  43. Lu, C.Y.; Min, H.; Gui, J.; Zhu, L.; Lei, Y.K. Face recognition via weighted sparse representation. J. Vis. Commun. Image Represent. 2013, 24, 111–116. [Google Scholar] [CrossRef]
  44. Shen, C.; Ding, Y.; Tang, J.; Song, J.; Guo, F. Identification of DNA–protein binding sites through multi-scale local average blocks on sequence information. Molecules 2017, 22, 2079. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Chen, T.; Guestrin, C. Xgboost: A Scalable Tree Boosting System. In Proceedings of the 22nd Acm sigkdd International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  46. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  47. Ahmed, N.; Natarajan, T.; Rao, K.R. Discrete cosine transform. IEEE T. Comput. 1974, 100, 90–93. [Google Scholar] [CrossRef]
  48. Yu, D.J.; Hu, J.; Huang, Y.; Shen, H.B.; Qi, Y.; Tang, Z.M.; Yang, J.Y. TargetATPsite: A template-free method for ATP-binding sites prediction with residue evolution image sparse representation and classifier ensemble. J. Comput. Chem. 2013, 34, 974–985. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Nanni, L.; Lumini, A.; Brahnam, S. An empirical study of different approaches for protein classification. Sci. World J. 2014, 2014, 1–17. [Google Scholar] [CrossRef]
  50. Nanni, L.; Brahnam, S.; Lumini, A. Wavelet images and Chou′s pseudo amino acid composition for protein classification. Amino Acids 2012, 43, 657–665. [Google Scholar] [CrossRef]
  51. Wang, Y.; Ding, Y.; Guo, F.; Wei, L.; Tang, J. Improved detection of DNA-binding proteins via compression technology on PSSM information. PLoS ONE 2017, 12, e0185587. [Google Scholar] [CrossRef] [Green Version]
  52. Ahmad, S.; Gromiha, M.M.; Sarai, A. Real value prediction of solvent accessibility from amino acid sequence. Proteins 2003, 50, 629–635. [Google Scholar] [CrossRef]
  53. Yang, J.; Roy, A.; Zhang, Y. BioLiP: A semi-manually curated database for biologically relevant ligand–protein interactions. Nucleic Acids Res. 2012, 41, D1096–D1103. [Google Scholar] [CrossRef] [Green Version]
  54. Altschul, S.F.; Madden, T.L.; Schäffer, A.A.; Zhang, J.; Zhang, Z.; Miller, W.; Lipman, D.J. Gapped BLAST and PSI-BLAST: A new generation of protein database search programs. Nucleic Acids Res. 1997, 25, 3389–3402. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Joo, K.; Lee, S.J.; Lee, J. Sann: Solvent accessibility prediction of proteins by nearest neighbor method. Proteins 2012, 80, 1791–1797. [Google Scholar] [CrossRef] [PubMed]
  56. Hu, J.; He, X.; Yu, D.J.; Yang, X.B.; Yang, J.Y.; Shen, H.B. A new supervised over-sampling algorithm with application to protein-nucleotide binding residue prediction. PLoS ONE 2014, 9, e107676. [Google Scholar] [CrossRef] [PubMed]
  57. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  58. Deng, L.; Sui, Y.; Zhang, J. XGBPRH: Prediction of Binding Hot Spots at Protein–RNA Interfaces Utilizing Extreme Gradient Boosting. Genes 2019, 10, 242. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Wang, H.; Liu, C.; Deng, L. Enhanced prediction of hot spots at protein-protein interfaces using extreme gradient boosting. Sci. Rep. 2018, 8, 14285. [Google Scholar] [CrossRef]
  60. Friedman, J.; Hastie, T.; Tibshirani, R. Additive logistic regression: A statistical view of boosting (with discussion and a rejoinder by the authors). Ann Stat 2000, 28, 337–407. [Google Scholar] [CrossRef]
  61. Hu, J.; Li, Y.; Zhang, M.; Yang, X.; Shen, H.B.; Yu, D.J. Predicting protein-DNA binding residues by weightedly combining sequence-based features and boosting multiple SVMs. IEEE/ACM Trans. Comput. Biol. Bioinform. 2017, 14, 1389–1398. [Google Scholar] [CrossRef]
  62. Chu, W.Y.; Huang, Y.F.; Huang, C.C.; Cheng, Y.S.; Huang, C.K.; Oyang, Y.J. ProteDNA: A sequence-based predictor of sequence-specific DNA-binding residues in transcription factors. Nucleic Acid Res. 2009, 37, 396–401. [Google Scholar] [CrossRef] [Green Version]
  63. Wang, L.; Huang, C.; Yang, M.Q.; Yang, J.Y. BindN+ for accurate prediction of DNA and RNA-binding residues from protein sequence features. BMC Syst. Biol. 2010, 4, 1–9. [Google Scholar] [CrossRef] [Green Version]
  64. Li, B.Q.; Feng, K.Y.; Ding, J.; Cai, Y.D. Predicting DNA-binding sites of proteins based on sequential and 3D structural information. Mol. Genet. Genom. 2014, 289, 489–499. [Google Scholar] [CrossRef] [PubMed]
Figure 1. SXGBsite Flowchart. During the training process, the position-specific scoring matrix (PSSM) feature of residues was represented by the sparse evolution image, discrete cosine transform (DCT) compressed the PSSM feature to obtain the PSSM-DCT feature, and the predicted solvent accessibility (PSA) feature was used to improve the prediction quality. SMOTE generated a new balanced training set with the training set of PSSM-DCT + PSA features, and the prediction model of binding residues was constructed by the balanced training set and XGBoost. During the testing process, the unbalanced independent test set, which also extracted the PSSM-DCT + PSA features, was input into the prediction model to obtain the result.
Figure 2. Residue sparse evolution image.
Figure 3. Four-level discrete wavelet transform (DWT).
Figure 4. Adjustment of parameter r.
Figure 5. The values of the Matthews correlation coefficient (MCC) corresponding to the number of samples after SMOTE.
Figure 6. Receiver Operating Characteristic Curve (ROC) of Different Feature Combinations.
Figure 7. ROC of Different Sampling and Classification Algorithms.
Figure 8. Predictions of SXGBsite. Cyan indicates the helix, fold, and ring structures of the protein, and yellow indicates the ligand; true and false predictions are shown in green and red, respectively.
Table 1. Composition of datasets for the 12 different ligands [7].
Ligand Category | Ligand Type | Training: No. Sequences | Training: (numP, numN) | Test: No. Sequences | Test: (numP, numN) | Total No. Sequences
--- | --- | --- | --- | --- | --- | ---
Nucleotide | ATP | 221 | (3021, 72334) | 50 | (647, 16639) | 271
Nucleotide | ADP | 296 | (3833, 98740) | 47 | (686, 20327) | 343
Nucleotide | AMP | 145 | (1603, 44401) | 33 | (392, 10355) | 178
Nucleotide | GDP | 82 | (1101, 26244) | 14 | (194, 4180) | 96
Nucleotide | GTP | 54 | (745, 21205) | 7 | (89, 1868) | 61
Metal Ion | Ca2+ | 965 | (4914, 287801) | 165 | (785, 53779) | 1130
Metal Ion | Zn2+ | 1168 | (4705, 315235) | 176 | (744, 47851) | 1344
Metal Ion | Mg2+ | 1138 | (3860, 350716) | 217 | (852, 72002) | 1355
Metal Ion | Mn2+ | 335 | (1496, 112312) | 58 | (237, 17484) | 393
Metal Ion | Fe3+ | 173 | (818, 50453) | 26 | (120, 9092) | 199
DNA | DNA | 335 | (6461, 71320) | 52 | (973, 16225) | 387
HEME | HEME | 206 | (4380, 49768) | 27 | (580, 8630) | 233
Note: numP, positive (binding residues) sample numbers; numN, negative (non-binding residues) sample numbers; ATP, adenosine triphosphate; ADP, adenosine diphosphate; AMP, adenosine monophosphate; GDP, guanosine diphosphate; GTP, guanosine triphosphate.
Table 2. Comparison of different feature combinations on the GTP independent test set (average of 10 replicate experiments in SXGBsite with adjusted parameters).
Feature | Threshold | SN (%) | SP (%) | ACC (%) | MCC | AUC
--- | --- | --- | --- | --- | --- | ---
PSSM | 0.500 | 34.8 | 99.7 | 96.8 | 0.536 | 0.855
 | 0.139 | 46.1 | 99.5 | 97.0 | 0.596 | 0.855
PSSM-DCT | 0.500 | 43.8 | 99.7 | 97.1 | 0.605 | 0.848
 | 0.612 | 42.7 | 99.8 | 97.2 | 0.611 | 0.848
PSSM-DWT | 0.500 | 41.6 | 99.7 | 97.0 | 0.586 | 0.830
 | 0.458 | 43.8 | 99.7 | 97.1 | 0.605 | 0.830
PSSM + PSA | 0.500 | 37.1 | 99.9 | 97.0 | 0.581 | 0.869
 | 0.109 | 52.8 | 99.4 | 97.2 | 0.636 | 0.869
PSSM-DCT + PSA | 0.500 | 49.4 | 99.6 | 97.3 | 0.642 | 0.870
 | 0.421 | 50.6 | 99.6 | 97.4 | 0.650 | 0.870
PSSM-DWT + PSA | 0.500 | 46.1 | 99.7 | 97.3 | 0.630 | 0.839
 | 0.370 | 49.4 | 99.6 | 97.3 | 0.642 | 0.839
PSSM-DCT + PSSM-DWT + PSA | 0.500 | 44.9 | 99.6 | 97.1 | 0.607 | 0.850
 | 0.545 | 44.9 | 99.8 | 97.3 | 0.629 | 0.850
Note: ACC, accuracy; MCC, Matthews correlation coefficient; AUC, the area under the receiver operating characteristic curve.
Table 3. Comparison of different sampling and classification algorithms on the GTP independent test set (average of 10 replicate experiments in XGBoost with adjusted parameters).
Scheme | Threshold | SN (%) | SP (%) | ACC (%) | MCC | AUC
--- | --- | --- | --- | --- | --- | ---
XGBoost | 0.500 | 30.3 | 99.8 | 96.7 | 0.512 | 0.842
 | 0.153 | 37.1 | 99.7 | 96.9 | 0.556 | 0.842
RUS + XGBoost | 0.500 | 68.5 | 84.5 | 83.8 | 0.288 | 0.827
 | 0.914 | 51.7 | 97.9 | 95.8 | 0.504 | 0.827
SMOTE + XGBoost | 0.500 | 49.4 | 99.6 | 97.3 | 0.642 | 0.870
 | 0.421 | 50.6 | 99.6 | 97.4 | 0.650 | 0.870
SMOTE + SVM | 0.500 | 51.7 | 99.3 | 97.1 | 0.616 | 0.838
 | 0.714 | 49.4 | 99.5 | 97.2 | 0.628 | 0.838
Table 4. Performance of SXGBsite (average of 10 replicate experiments) on the training sets after five-fold cross-validation.
Ligand | Predictor | Threshold | SN (%) | SP (%) | ACC (%) | MCC | AUC
--- | --- | --- | --- | --- | --- | --- | ---
ATP | TargetS 1 | 0.500 | 48.4 | 98.2 | 96.2 | 0.492 | 0.887
 | EC-RUS 2 | 0.500 | 84.1 | 84.9 | 84.9 | 0.347 | 0.912
 | | 0.814 | 58.6 | 97.9 | 96.4 | 0.537 | 0.912
 | SXGBsite | 0.500 | 53.4 | 96.3 | 94.6 | 0.413 | 0.886
 | | 0.775 | 40.3 | 98.6 | 96.4 | 0.448 | 0.886
ADP | TargetS 1 | 0.500 | 56.1 | 98.8 | 97.2 | 0.591 | 0.907
 | EC-RUS 2 | 0.500 | 87.8 | 87.7 | 87.7 | 0.395 | 0.939
 | | 0.852 | 62.2 | 98.6 | 97.3 | 0.610 | 0.939
 | SXGBsite | 0.500 | 61.6 | 96.2 | 94.9 | 0.459 | 0.907
 | | 0.832 | 46.4 | 98.9 | 97.0 | 0.521 | 0.907
AMP | TargetS 1 | 0.500 | 38.0 | 98.2 | 96.0 | 0.386 | 0.856
 | EC-RUS 2 | 0.500 | 81.5 | 79.7 | 79.8 | 0.263 | 0.888
 | | 0.835 | 46.7 | 98.3 | 96.6 | 0.460 | 0.888
 | SXGBsite | 0.500 | 37.0 | 97.8 | 95.8 | 0.347 | 0.851
 | | 0.636 | 32.3 | 98.6 | 96.4 | 0.366 | 0.851
GDP | TargetS 1 | 0.430 | 63.9 | 98.7 | 97.2 | 0.644 | 0.908
 | EC-RUS 2 | 0.500 | 86.1 | 89.8 | 89.7 | 0.435 | 0.937
 | | 0.816 | 67.2 | 98.9 | 97.6 | 0.676 | 0.937
 | SXGBsite | 0.500 | 59.4 | 99.3 | 97.7 | 0.664 | 0.930
 | | 0.653 | 57.0 | 99.5 | 97.9 | 0.678 | 0.930
GTP | TargetS 1 | 0.500 | 48.0 | 98.7 | 96.9 | 0.506 | 0.858
 | EC-RUS 2 | 0.500 | 79.5 | 85.7 | 85.5 | 0.309 | 0.896
 | | 0.842 | 49.5 | 99.2 | 97.6 | 0.562 | 0.896
 | SXGBsite | 0.500 | 42.4 | 99.4 | 97.6 | 0.540 | 0.883
 | | 0.685 | 40.7 | 99.7 | 97.8 | 0.572 | 0.883
Ca2+ | TargetS 1 | 0.690 | 19.2 | 99.7 | 98.4 | 0.320 | 0.784
 | EC-RUS 2 | 0.500 | 73.9 | 73.8 | 73.8 | 0.118 | 0.812
 | | 0.861 | 14.7 | 99.7 | 98.6 | 0.220 | 0.812
 | SXGBsite | 0.500 | 32.8 | 95.0 | 94.2 | 0.135 | 0.757
 | | 0.818 | 16.3 | 99.1 | 98.1 | 0.167 | 0.757
Mg2+ | TargetS 1 | 0.810 | 26.4 | 99.8 | 99.0 | 0.383 | 0.798
 | EC-RUS 2 | 0.500 | 73.8 | 79.4 | 79.3 | 0.125 | 0.839
 | | 0.864 | 25.8 | 99.8 | 99.1 | 0.354 | 0.839
 | SXGBsite | 0.500 | 46.1 | 95.9 | 95.5 | 0.196 | 0.819
 | | 0.926 | 26.3 | 99.7 | 99.0 | 0.326 | 0.819
Mn2+ | TargetS 1 | 0.740 | 40.8 | 99.5 | 98.7 | 0.445 | 0.901
 | EC-RUS 2 | 0.500 | 83.4 | 86.6 | 86.6 | 0.201 | 0.921
 | | 0.841 | 31.0 | 99.6 | 98.9 | 0.358 | 0.921
 | SXGBsite | 0.500 | 45.0 | 98.3 | 97.7 | 0.297 | 0.888
 | | 0.759 | 36.1 | 99.1 | 98.5 | 0.329 | 0.888
Fe3+ | TargetS 1 | 0.810 | 51.8 | 99.6 | 98.8 | 0.592 | 0.922
 | EC-RUS 2 | 0.500 | 87.1 | 90.1 | 90.0 | 0.278 | 0.940
 | | 0.809 | 53.1 | 99.2 | 98.6 | 0.489 | 0.940
 | SXGBsite | 0.500 | 48.2 | 99.1 | 98.5 | 0.440 | 0.913
 | | 0.496 | 50.1 | 99.1 | 98.5 | 0.454 | 0.913
Zn2+ | TargetS 1 | 0.830 | 50.0 | 99.6 | 98.9 | 0.557 | 0.938
 | EC-RUS 2 | 0.500 | 88.7 | 90.8 | 90.8 | 0.279 | 0.958
 | | 0.860 | 45.6 | 99.3 | 98.7 | 0.440 | 0.958
 | SXGBsite | 0.500 | 59.7 | 96.5 | 96.1 | 0.299 | 0.892
 | | 0.894 | 38.5 | 99.2 | 98.5 | 0.363 | 0.892
DNA | TargetS 1 | 0.490 | 41.7 | 94.5 | 89.9 | 0.362 | 0.824
 | EC-RUS 2 | 0.500 | 81.9 | 71.8 | 72.3 | 0.259 | 0.852
 | | 0.763 | 48.7 | 95.1 | 92.6 | 0.378 | 0.852
 | SXGBsite | 0.500 | 41.0 | 92.3 | 89.6 | 0.255 | 0.827
 | | 0.420 | 49.8 | 89.2 | 87.2 | 0.270 | 0.827
HEME | TargetS 1 | 0.650 | 50.5 | 98.3 | 94.4 | 0.579 | 0.887
 | EC-RUS 2 | 0.500 | 85.0 | 83.6 | 83.7 | 0.416 | 0.922
 | | 0.846 | 60.3 | 97.5 | 95.1 | 0.591 | 0.922
 | SXGBsite | 0.500 | 59.3 | 96.2 | 93.8 | 0.520 | 0.900
 | | 0.805 | 45.3 | 98.9 | 95.4 | 0.555 | 0.900
1 Results excerpted from Yu et al. [7]. 2 Results excerpted from Ding et al. [8].
Table 5. SXGBsite (average of 10 replicate experiments) compared with the existing methods on five nucleotide independent test sets.
Ligand | Predictor | SN (%) | SP (%) | ACC (%) | MCC | AUC
--- | --- | --- | --- | --- | --- | ---
ATP | TargetS 1 | 50.1 | 98.3 | 96.5 | 0.502 | 0.898
 | NsitePred 1 | 50.8 | 97.3 | 95.5 | 0.439 | -
 | SVMPred 1 | 47.3 | 96.7 | 94.9 | 0.387 | 0.877
 | alignment-based 1 | 30.6 | 97.0 | 94.5 | 0.265 | -
 | EC-RUS 2 | 45.4 | 98.8 | 96.8 | 0.506 | 0.871
 | SXGBsite (T = 0.500) | 54.6 | 95.7 | 94.2 | 0.397 | 0.880
 | SXGBsite (T = 0.718) | 43.7 | 98.5 | 96.5 | 0.463 | 0.880
ADP | TargetS 1 | 46.9 | 98.9 | 97.2 | 0.507 | 0.896
 | NsitePred 1 | 46.2 | 97.6 | 96.0 | 0.419 | -
 | SVMPred 1 | 46.1 | 97.2 | 95.5 | 0.382 | 0.875
 | alignment-based 1 | 31.8 | 97.4 | 95.1 | 0.284 | -
 | EC-RUS 2 | 44.4 | 99.2 | 97.6 | 0.511 | 0.872
 | SXGBsite (T = 0.500) | 53.1 | 96.9 | 95.6 | 0.399 | 0.885
 | SXGBsite (T = 0.844) | 37.3 | 99.5 | 97.7 | 0.488 | 0.885
AMP | TargetS 1 | 34.2 | 98.2 | 95.9 | 0.359 | 0.830
 | NsitePred 1 | 33.9 | 97.6 | 95.3 | 0.321 | -
 | SVMPred 1 | 32.1 | 96.4 | 94.1 | 0.255 | 0.798
 | alignment-based 1 | 19.6 | 97.3 | 94.5 | 0.178 | -
 | EC-RUS 2 | 24.9 | 99.5 | 97.0 | 0.393 | 0.815
 | SXGBsite (T = 0.500) | 36.0 | 97.5 | 95.4 | 0.325 | 0.823
 | SXGBsite (T = 0.486) | 37.1 | 97.4 | 95.3 | 0.328 | 0.823
GDP | TargetS 1 | 56.2 | 98.1 | 96.2 | 0.550 | 0.896
 | NsitePred 1 | 55.7 | 97.9 | 96.1 | 0.536 | -
 | SVMPred 1 | 49.5 | 97.6 | 95.4 | 0.466 | 0.870
 | alignment-based 1 | 41.2 | 97.8 | 95.3 | 0.415 | -
 | EC-RUS 2 | 36.6 | 99.9 | 97.1 | 0.579 | 0.872
 | SXGBsite (T = 0.500) | 46.4 | 99.0 | 96.7 | 0.551 | 0.894
 | SXGBsite (T = 0.687) | 40.2 | 99.7 | 97.1 | 0.576 | 0.894
GTP | TargetS 1 | 57.3 | 98.8 | 96.9 | 0.617 | 0.855
 | NsitePred 1 | 58.4 | 95.7 | 94.0 | 0.448 | -
 | SVMPred 1 | 48.3 | 91.7 | 89.7 | 0.276 | 0.821
 | alignment-based 1 | 52.8 | 97.9 | 95.9 | 0.516 | -
 | EC-RUS 2 | 61.8 | 98.7 | 97.0 | 0.641 | 0.861
 | SXGBsite (T = 0.500) | 49.4 | 99.6 | 97.3 | 0.642 | 0.870
 | SXGBsite (T = 0.421) | 50.6 | 99.6 | 97.4 | 0.650 | 0.870
1 Results excerpted from Yu et al. [7]. 2 Results excerpted from Ding et al. [8]. - denotes unavailable.
Table 6. SXGBsite (average of 10 replicate experiments) compared with the existing methods on the five metal ion independent test sets.
Ligand | Predictor | SN (%) | SP (%) | ACC (%) | MCC | AUC
--- | --- | --- | --- | --- | --- | ---
Ca2+ | TargetS 1 | 13.8 | 99.8 | 98.8 | 0.243 | 0.767
 | FunFOLD 1 | 12.2 | 99.6 | 98.1 | 0.196 | -
 | CHED 1 | 18.7 | 98.2 | 97.1 | 0.142 | -
 | alignment-based 1 | 20.3 | 98.6 | 97.5 | 0.175 | -
 | EC-RUS 2 | 17.3 | 99.6 | 98.7 | 0.225 | 0.779
 | SXGBsite (T = 0.500) | 32.6 | 95.6 | 94.9 | 0.139 | 0.758
 | SXGBsite (T = 0.832) | 13.3 | 99.7 | 98.7 | 0.197 | 0.758
Mg2+ | TargetS 1 | 18.3 | 99.8 | 98.8 | 0.294 | 0.706
 | FunFOLD 1 | 22.0 | 99.1 | 98.3 | 0.215 | -
 | CHED 1 | 14.6 | 98.3 | 97.3 | 0.103 | -
 | alignment-based 1 | 14.1 | 99.2 | 98.2 | 0.147 | -
 | EC-RUS 2 | 20.1 | 99.8 | 99.1 | 0.317 | 0.780
 | SXGBsite (T = 0.500) | 41.0 | 96.3 | 95.8 | 0.177 | 0.779
 | SXGBsite (T = 0.917) | 19.8 | 99.8 | 99.1 | 0.291 | 0.779
Mn2+ | TargetS 1 | 40.1 | 99.5 | 98.7 | 0.449 | 0.888
 | FunFOLD 1 | 23.3 | 99.8 | 98.7 | 0.376 | -
 | CHED 1 | 35.0 | 98.1 | 97.3 | 0.253 | -
 | alignment-based 1 | 26.6 | 99.0 | 98.0 | 0.257 | -
 | EC-RUS 2 | 35.8 | 99.6 | 98.9 | 0.403 | 0.888
 | SXGBsite (T = 0.500) | 44.3 | 98.3 | 97.7 | 0.299 | 0.856
 | SXGBsite (T = 0.797) | 34.2 | 99.5 | 98.8 | 0.382 | 0.856
Fe3+ | TargetS 1 | 48.3 | 99.3 | 98.7 | 0.479 | 0.945
 | FunFOLD 1 | 47.2 | 99.1 | 98.4 | 0.432 | -
 | CHED 1 | 49.2 | 97.0 | 96.3 | 0.279 | -
 | alignment-based 1 | 30.0 | 99.2 | 98.3 | 0.300 | -
 | EC-RUS 2 | 44.3 | 99.6 | 99.0 | 0.490 | 0.936
 | SXGBsite (T = 0.500) | 42.5 | 99.0 | 98.3 | 0.361 | 0.891
 | SXGBsite (T = 0.670) | 38.7 | 99.4 | 98.7 | 0.396 | 0.891
Zn2+ | TargetS 1 | 46.4 | 99.5 | 98.7 | 0.527 | 0.936
 | FunFOLD 1 | 36.5 | 99.5 | 98.6 | 0.436 | -
 | CHED 1 | 37.9 | 98.0 | 97.1 | 0.280 | -
 | alignment-based 1 | 29.7 | 99.0 | 98.0 | 0.297 | -
 | EC-RUS 2 | 48.9 | 99.2 | 98.6 | 0.437 | 0.958
 | SXGBsite (T = 0.500) | 62.4 | 96.7 | 96.3 | 0.323 | 0.906
 | SXGBsite (T = 0.833) | 41.0 | 99.2 | 98.6 | 0.390 | 0.906
1 Results excerpted from Yu et al. [7]. 2 Results excerpted from Ding et al. [8]. - denotes unavailable.
Table 7. SXGBsite (average of 10 replicate experiments) compared with the existing methods on the DNA independent test set.
Ligand | Predictor | SN (%) | SP (%) | ACC (%) | MCC | AUC
--- | --- | --- | --- | --- | --- | ---
DNA | TargetS 1 | 41.3 | 96.5 | 93.3 | 0.377 | 0.836
 | MetaDBSite 1 | 58.0 | 76.4 | 75.2 | 0.192 | -
 | DNABR 1 | 40.7 | 87.3 | 84.6 | 0.185 | -
 | alignment-based 1 | 26.6 | 94.3 | 90.5 | 0.190 | -
 | EC-RUS 2 | 31.5 | 97.8 | 95.2 | 0.319 | 0.814
 | SXGBsite (T = 0.500) | 36.5 | 95.1 | 92.8 | 0.256 | 0.826
 | SXGBsite (T = 0.408) | 46.2 | 92.8 | 91.0 | 0.269 | 0.826
1 Results excerpted from Yu et al. [7]. 2 Results excerpted from Ding et al. [8]. - denotes unavailable.
Table 8. SXGBsite (average of 10 replicate experiments) compared with the existing methods on the HEME independent test set.
Ligand | Predictor | SN (%) | SP (%) | ACC (%) | MCC | AUC
--- | --- | --- | --- | --- | --- | ---
HEME | TargetS (T = 0.650) 1 | 49.8 | 99.0 | 95.9 | 0.598 | 0.907
 | TargetS (T = 0.180) 1 | 69.3 | 90.4 | 89.1 | 0.426 | 0.907
 | HemeBind 1 | 86.2 | 90.7 | 90.6 | 0.537 | -
 | alignment-based 1 | 51.4 | 97.3 | 94.4 | 0.507 | -
 | EC-RUS (T = 0.500) 2 | 83.5 | 87.5 | 87.3 | 0.453 | 0.935
 | EC-RUS (T = 0.859) 2 | 55.8 | 99.0 | 96.4 | 0.640 | 0.935
 | SXGBsite (T = 0.500) | 61.6 | 97.7 | 95.5 | 0.600 | 0.933
 | SXGBsite (T = 0.700) | 52.1 | 99.0 | 96.2 | 0.618 | 0.933
1 Results excerpted from Yu et al. [7]. 2 Results excerpted from Ding et al. [8]. - denotes unavailable.
Table 9. Comparison of running time between SXGBsite and EC-RUS (SVM and WSRC) (seconds).
Dataset | SXGBsite 1 | EC-RUS (SVM) 2 | EC-RUS (WSRC) 2
--- | --- | --- | ---
ATP | 134.5 | 1746.3 | 7018.4
ADP | 146.2 | 4602.8 | 10940.5
AMP | 118.5 | 647.5 | 2298.1
GDP | 90.4 | 284.6 | 685.8
GTP | 92.6 | 115.8 | 334.6
DNA | 131.4 | 4508.5 | 9083.6
Ca2+ | 273.6 | 6366.5 | 25627.2
Mg2+ | 290.9 | 6558.6 | 31094.1
Mn2+ | 124.9 | 439.5 | 2806.8
Fe3+ | 110.6 | 173.3 | 1065.9
Zn2+ | 215.9 | 4284.6 | 20220.6
HEME | 104.6 | 3459.9 | 2940.5
1 The PSSM-DCT + PSA feature of SXGBsite is 183-D. 2 The PSSM-DCT + PSA feature of EC-RUS (SVM) is 143-D. SVM, support vector machine; WSRC, weighted sparse representation based classifier.
Table 10. SXGBsite (average of 10 replicate experiments) compared with the existing methods on the PDNA-41 independent test set.
Predictor | SN (%) | SP (%) | ACC (%) | MCC | AUC
--- | --- | --- | --- | --- | ---
BindN 1 | 45.64 | 80.90 | 79.15 | 0.143 | -
ProteDNA 1 | 4.77 | 99.84 | 95.11 | 0.160 | -
BindN+ (FPR 5%) 1 | 24.11 | 95.11 | 91.58 | 0.178 | -
BindN+ (Spec 85%) 1 | 50.81 | 85.41 | 83.69 | 0.213 | -
MetaDBSite 1 | 34.20 | 93.35 | 90.41 | 0.221 | -
DP-Bind 1 | 61.72 | 82.43 | 81.40 | 0.241 | -
DNABind 1 | 70.16 | 80.28 | 79.78 | 0.264 | -
TargetDNA (Sen Spec) 1 | 60.22 | 85.79 | 84.52 | 0.269 | -
TargetDNA (FPR 5%) 1 | 45.50 | 93.27 | 90.89 | 0.300 | -
EC-RUS (DNA) (Sen Spec) 2 | 61.04 | 77.25 | 76.44 | 0.193 | -
EC-RUS (DNA) (FPR 5%) 2 | 27.25 | 97.31 | 94.58 | 0.315 | -
SXGBsite (Sen Spec) | 60.35 | 85.94 | 84.67 | 0.272 | 0.825
SXGBsite (FPR 5%) | 35.01 | 95.01 | 92.03 | 0.265 | 0.825
1 Results excerpted from Hu et al. [61]. 2 Results excerpted from Shen et al. [44]. - denotes unavailable.
