Appl. Sci. 2019, 9(13), 2764; https://doi.org/10.3390/app9132764

Article
Performance Analysis of Feature Selection Methods in Software Defect Prediction: A Search Method Approach
1 Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Perak 32610, Malaysia
2 Department of Computer Science, University of Ilorin, Ilorin 240103, Nigeria
* Author to whom correspondence should be addressed.
Received: 26 April 2019 / Accepted: 14 May 2019 / Published: 9 July 2019

Abstract:
Software Defect Prediction (SDP) models are built using software metrics derived from software systems. The quality of SDP models depends largely on the quality of the software metrics (dataset) used to build them. High dimensionality is one of the data quality problems that affect the performance of SDP models. Feature selection (FS) is a proven method for addressing the dimensionality problem. However, the choice of FS method for SDP is still a problem, as most empirical studies on FS methods for SDP produce contradictory and inconsistent outcomes. These FS methods behave differently due to their different underlying computational characteristics, and in particular due to the choice of search method, since the impact of FS depends on the search method used. It is hence imperative to comparatively analyze the performance of FS methods based on different search methods in SDP. In this paper, four filter feature ranking (FFR) and fourteen filter feature subset selection (FSS) methods were evaluated using four different classifiers over five software defect datasets obtained from the National Aeronautics and Space Administration (NASA) repository. The experimental analysis showed that the application of FS improves the predictive performance of classifiers and that the performance of FS methods can vary across datasets and classifiers. Among the FFR methods, Information Gain demonstrated the greatest improvement in the performance of the prediction models. Among the FSS methods, Consistency Feature Subset Selection based on Best First Search had the best influence on the prediction models. However, prediction models based on FFR proved to be more stable than those based on FSS methods. Hence, we conclude that FS methods improve the performance of SDP models and that there is no single best FS method, as their performance varies according to the dataset and the choice of prediction model. However, we recommend the use of FFR methods, as prediction models based on FFR are more stable in terms of predictive performance.
Keywords:
software defect prediction; feature selection; high dimensionality; search methods

1. Introduction

Software Defect Prediction (SDP) models are built using software metrics based on data collected from previously developed systems or similar software projects [1]. Using such a model, the defect-proneness of the software modules under development can be predicted. The goal of SDP is to achieve high software quality and reliability with the effective use of limited available resources. In other words, SDP involves identifying software modules or components that are prone to defects. This enables software engineers to prioritize the utilization of limited resources during each phase of software development [2,3]. Consequently, the reliability and quality of the software assessment, in addition to software quality assurance, are guaranteed [4,5]. Software metrics, which include source code complexity and development history, are typically used to analyze the efficiency of the software process and the quality and reliability of software products. In addition, software engineers use these metrics for risk assessment, and they are used in defect prediction to identify and improve the quality of software products [6,7,8]. Specifically, McCabe and Halstead metrics, procedural metrics, process metrics, etc., are types of engineered software metrics used to determine the quality and reliability level of a software system [6,9]. A software module or component unit contains a set of features (metrics) and a class label. The class label depicts the state of the software module, either defective or non-defective, while the derived features are used to develop SDP models [10,11]. That is, SDP uses historical data extracted from software repositories to determine the quality and reliability of software modules or components [12,13].
SDP can be regarded as a classification task that involves categorizing software modules as either defective or non-defective, based on historical data and software metrics or features [14,15,16]. Software features or metrics reflect the characteristics of software modules. However, the number of metrics generated is usually high, owing to the various types of software metric mechanisms used to determine the quality and reliability of a software system. The proliferation of these mechanisms consequently generates a large number of metric values, leading to a high-dimensionality problem. In addition, some of these features (metrics) may be more relevant to the class (defective or non-defective) than others, and some may be redundant or irrelevant [17,18].
Feature selection (FS) can be used to select relevant, uncorrelated features from a high-dimensional feature set. In other words, it can select those features that are most relevant and non-redundant with respect to the class label of the dataset. Therefore, introducing FS methods into SDP can solve the high-dimensionality problem [17,18,19]. FS is a vital data pre-processing step in classification, as it improves the quality of the data and consequently the predictive performance of the prediction models. Existing research has shown that irrelevant features, along with redundant features, can severely affect the accuracy of defect predictors [20,21,22,23]. Thus, there is a need for an efficient feature selection method that can identify and remove as much irrelevant and redundant information as possible, thereby leading to good predictive performance at low computational cost [24,25]. Supervised feature selection techniques evaluate the characteristics of the available features and derive a set of pertinent characteristics based on labeled datasets. The criteria used to determine the useful feature characteristics depend on the underlying computational characteristics of the technique utilized. Filter feature ranking (FFR) methods, one type of supervised FS method, score each feature by certain criteria derived from its computational characteristics, after which the analyst selects the features best suited to a particular dataset. On the other hand, filter feature subset selection (FSS) methods (another type of supervised FS method) search for a subset of features that collectively have good predictive capability. In this study, a comparative performance analysis of FFR and FSS methods based on different search methods is carried out to determine their respective efficacy in selecting a relevant set of features.
Recent studies have compared the impact of FS methods on the performance of SDP [26,27,28,29,30,31,32]. Some conclude that certain FS methods are better than others [27,28,30,31], while others claim that there is no significant difference between the performances of FS methods in SDP [26,29,32]. This contradiction and inconsistency in existing results may be due to the choice of search mechanism used in the FS methods.
In this study, four different FFR methods (Information Gain (IG), ReliefF (RFA), Gain Ratio (GR), and Clustering Variation (CV)) based on the Ranker search method, and two FSS methods, Correlation-based Feature Subset Selection (CFS) and Consistency Feature Subset Selection (CNS), based on two different search approaches, exhaustive search (Best First Search (BFS) and Greedy Stepwise Search (GSS)) and heuristic search (Genetic Search (GS), Bat Search (BAT), Ant Search (AS), Fire-Fly Search (FS), and Particle Swarm Optimization (PSO)), were evaluated. Four classification techniques, Naïve Bayes (NB), Decision Tree (DT), Logistic Regression (LR), and K-Nearest Neighbor (KNN), were used to assess the effectiveness of these FS methods. The respective models were applied to five software defect datasets from the NASA repository, and their predictive performances were measured comparatively based on accuracy.
From our experimental results, the application of FS improves the predictive performance of the prediction models and the performance of FS methods varies across datasets and prediction models. IG recorded the best improvement on prediction models over other FFR methods, while CNS based on BFS had the best influence on prediction models based on FSS methods. Further analysis showed that prediction models based on FFR are more stable in terms of performance accuracy than other FS methods.
The rest of this paper is structured as follows. Section 2 presents a literature review and analysis of existing related works. Section 3 presents the various FS methods including the search methods, classifiers, datasets, and the performance metric considered in the experimental works of this study. Section 4 highlights the experimental procedure, experimental results, and discussion of our findings for the experimental works. Section 5 presents the threats to the validity of this study. Section 6 concludes the comparative study and summarizes future work.

2. Related Works

A major problem associated with SDP is the dilemma of having a large number of metrics (features). In other words, using all software metrics to train an SDP model can end up negatively affecting the predictive performance of the model. As such, many FS approaches have been proposed to address the selection of optimal software metrics. Some studies have gone to the extent of comparing these methods in order to identify the best one. However, most of these studies yielded contradictory and inconsistent conclusions on the effect of FS methods in SDP [26,29,32].
Ghotra et al. [28] performed a large-scale impact analysis of twenty-eight FS methods on twenty-one commonly used classifiers. Their experiment was based on software defect datasets from the NASA and PROMISE repositories. They concluded that the correlation-based filter FS method with the best-first (BF) search method outperforms other FS methods across the datasets. Their experiment covered a large number of FS methods, classification techniques, and datasets; however, they only considered BF and genetic search as search mechanisms for the FSS methods. Other heuristic and meta-heuristic search methods, such as BAT, AS, and FS, may perform better than BF and GS in this context.
Afzal and Torkar [26] conducted a benchmark study by empirically comparing state-of-the-art FS methods. They considered IG, RF, Principal Component Analysis (PCA), CFS, CNS, and wrapper subset evaluation (WRP). NB and DT were deployed on five software defect datasets, and the predictive models were evaluated based on the Area Under Curve (AUC). Their results showed that FS is beneficial to SDP, but there was no individual best FS method. This could be partly due to the number and type of software defect datasets considered and the choice of search mechanisms in the case of the FSS and WRP methods.
Gao et al. [27] regarded feature selection as a search problem in software engineering. Their study was concerned with software quality estimation, and they proposed a hybrid FS approach based on the Kolmogorov–Smirnov statistic and automatic hybrid search (AHS). Their results showed that AHS was superior to the other methods and that eliminating up to 85% of software metrics may leave the performance of SDP models unchanged or even improve it.
Akintola et al. [18] performed a comparative analysis of classifiers based on filter feature selection in SDP, and their results supported the use of filter methods, although further analysis using other FS methods is still possible. It has been proven empirically that wrappers obtain subsets with better performance than filter feature selection, because the subsets are evaluated using a real modeling algorithm [33,34]. Rodriguez et al. [35] also conducted comparative experiments on FS methods based on three different FFR and WRP models on four software defect datasets. Their results showed that smaller datasets generally maintain predictability with fewer features than the original datasets.
From the aforementioned studies, only a handful of comparative performance studies have evaluated the efficacy of FS methods based on different search mechanisms. Therefore, there is a need for a thorough comparative evaluation of FS methods based on different search mechanisms in SDP. This will create a better understanding of the characteristics of FS methods and guide researchers and analysts in the selection of search methods for FS in SDP. In this paper, a comparative performance analysis of eighteen FS methods in SDP is presented. Each FS method was used with four different classifiers selected based on performance and heterogeneity. The respective SDP models were tested on five software defect datasets from the NASA repository and evaluated based on prediction accuracy. The performance stability of each prediction model based on FS methods was further evaluated via the coefficient of variation of each prediction model.

3. Methodology

This section describes the FS methods, the respective search methods, the classification algorithms, the experimental setup, the software defect datasets, and the performance metrics used in this study.

3.1. Filter Feature Ranking Method

The Filter Feature Ranking (FFR) method uses the computational characteristics of a dataset to assess and rank its attributes, independently of the prediction model. It grades each attribute based on different characteristics, such as statistical, probabilistic, instance-based, or classifier-based indicators. Attributes are thereafter selected based on their scores [29]. In this paper, four FFR methods with different functional characteristics were considered. For the selection of top-ranked features, log2(N) was used in this study, where N is the number of software metrics in the full software defect dataset. Table 1 presents a description of the FFR methods used in this study.
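As a concrete illustration of this ranking-and-truncation scheme, the Python sketch below ranks discretized metric columns by information gain (one of the FFR criteria used in this study) and keeps the top ceil(log2(N)) of them. The toy data layout and function names are illustrative only and are not part of the study's tooling, which is WEKA-based.

```python
import math

def entropy(labels):
    """Shannon entropy of a class-label list."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def info_gain(column, labels):
    """IG = H(class) - H(class | feature), for a discretized metric column."""
    n = len(labels)
    conditional = 0.0
    for v in set(column):
        subset = [lbl for f, lbl in zip(column, labels) if f == v]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional

def top_log2n(features, labels):
    """Rank all N metrics by IG and keep the top ceil(log2 N) of them."""
    k = math.ceil(math.log2(len(features)))
    ranked = sorted(features,
                    key=lambda name: info_gain(features[name], labels),
                    reverse=True)
    return ranked[:k]
```

A perfectly class-aligned column receives the maximum gain and is always retained, while uninformative columns fall below the log2(N) cut-off.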

3.2. Filter Feature Subset Selection Method

Feature Subset Selection (FSS), just like FFR, assesses, ranks, and selects features based on certain properties. However, in the case of FSS, the major focus is the search method, which is used to generate a subset of features that collectively have good prediction potential. It considers whether better predictive performance exists when a feature is combined with other features [32]. Two FSS methods were considered in this study. As mentioned earlier, FSS relies on various search methods; these search methods traverse the feature space to generate a subset with high predictive potential. Consequently, the performance of FSS varies with the search method [29]. Table 2 presents a detailed description of the FSS methods used in this study, and Table 3 shows the various search methods, with their parameters, used in the FSS methods.
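To make the subset-search idea concrete, the sketch below pairs a standard consistency criterion (the inconsistency rate used by CNS-style evaluators) with a greedy forward search, loosely analogous to the Greedy Stepwise Search considered here. The data layout and function names are illustrative only.

```python
from collections import Counter, defaultdict

def inconsistency_rate(rows, labels, subset):
    """Fraction of instances that disagree with the majority class of the
    group sharing their feature values, projected onto `subset`."""
    groups = defaultdict(list)
    for row, lbl in zip(rows, labels):
        groups[tuple(row[i] for i in subset)].append(lbl)
    inconsistent = sum(len(g) - Counter(g).most_common(1)[0][1]
                       for g in groups.values())
    return inconsistent / len(rows)

def greedy_forward(rows, labels):
    """Greedily add the feature that most reduces the inconsistency rate,
    stopping when no candidate improves on the current subset."""
    remaining = set(range(len(rows[0])))
    subset, best = [], inconsistency_rate(rows, labels, [])
    while remaining:
        cand = min(remaining,
                   key=lambda f: inconsistency_rate(rows, labels, subset + [f]))
        score = inconsistency_rate(rows, labels, subset + [cand])
        if score >= best:
            break
        subset.append(cand)
        remaining.discard(cand)
        best = score
    return subset, best
```

On a toy dataset where one feature fully determines the class, the search selects exactly that feature and stops, mirroring how a subset evaluator can prefer a very small subset when it is already consistent.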

3.3. Classification Algorithms

For the classification process, four widely used classification algorithms were considered for evaluating the efficacy of the FS methods: Naïve Bayes (NB), Decision Tree (DT), Logistic Regression (LR), and K-Nearest Neighbor (KNN). This is in line with the aim of this study to evaluate the efficacy of FFR and FSS methods, which should be independent of the classification technique. The four classifiers were selected for their different characteristics: NB is based on Bayes' theorem, DT is a tree-based method, LR is a function-based classification technique, and KNN is an instance-based learning classifier. The heterogeneity of the selected classifiers allows us to investigate how different FS methods perform on classifiers with different characteristics. Table 4 gives a brief description of these classification algorithms with respect to their classification characteristics and parameter settings.

3.4. Experimental Setup

This section discusses the experimental setup of the comparative performance analysis of FS methods as depicted in Figure 1. The experimental setup can be described based on three major steps.
Step 1: First, the four FFR methods presented in Table 1 are applied to the original full software defect datasets. Each of the FFR methods (IG, RFA, GR, and CV, based on the Ranker search method) assesses and ranks the software features based on its respective characteristics. log2(N) features were then selected from the ranked list provided by each FFR method; specifically, the first six features were selected from the resulting ranked list. The log2(N) rule (where N is the number of features in each software defect dataset) was adopted following the work of Gao et al. [27], which indicated that it is better to select log2(N) features; many empirical studies have followed this procedure [26,28,29,32]. According to Table 4, the value of N in this study is N >= 21, due to the choice of different software defect datasets for the experiment. At the end of this step, datasets with a reduced set of attributes are generated.
Step 2: Secondly, the two FSS methods in Table 2 and the respective search methods in Table 3 are applied to the original full datasets. The two FSS methods (CFS and CNS) were combined with seven search methods (BFS, BAT, FS, AS, GSS, PSOS, and GS), yielding fourteen FSS methods applied to the datasets. Each FSS method generates a subset of features with high predictive potential. The respective parameter settings of the search methods, as depicted in Table 3, were used. As with the FFR methods, reduced datasets are generated at the end of this step.
Step 3: This step involves the prediction process, that is, the application of the classifiers in Table 4 to the software defect datasets. The four classifiers (NB, DT, LR, and KNN) were applied to both the reduced datasets (filtered datasets from Steps 1 and 2) and the full datasets. The essence of this step is to show the efficacy of reduced software features in SDP. Each experiment was carried out using the 10-fold cross-validation (CV) method. This avoids bias and overfitting of the prediction models and also reduces the effect of class imbalance, one of the data quality problems in data mining [29,37]. In addition, due to the random nature of the search methods of the FSS methods, each experiment involving an FSS method was performed 10 times to validate the results. Eventually, a total of 2900 ((4 FFR methods × 5 datasets × 4 classifiers) + (14 FSS methods × 5 datasets × 4 classifiers × 10 runs) + (4 classifiers × 5 datasets)) distinct experiments were carried out in this study.
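The experiment count reported in Step 3 can be checked with a few lines of arithmetic (the function name is ours):

```python
def total_experiments():
    """Reproduce the experiment count reported in Step 3."""
    ffr = 4 * 5 * 4        # 4 FFR methods x 5 datasets x 4 classifiers
    fss = 14 * 5 * 4 * 10  # 14 FSS methods x 5 datasets x 4 classifiers x 10 runs
    baseline = 4 * 5       # 4 classifiers on the 5 full datasets
    return ffr + fss + baseline
```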

3.5. Software Defect Datasets

The datasets used in this study were obtained from the National Aeronautics and Space Administration (NASA) Metrics Data Program (MDP) repository [38]. Studies have shown that these datasets are noisy and need to be cleaned and pre-processed [26,28,29,32]; the cleaned version of the NASA datasets from Shepperd et al. [39] was therefore used in this study. Table 5 presents a description of the cleaned NASA datasets with their numbers of features and modules.

3.6. Performance Evaluation Metrics

In this study, the performance evaluation method is based on accuracy, which measures the percentage of correctly classified instances. The metric values were computed using the statistical values of True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN):
Accuracy = ((TP + TN) / (TP + FP + FN + TN)) × 100%
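As a minimal sanity check, the equation above can be computed directly from the confusion-matrix counts (the function name is ours):

```python
def accuracy(tp, tn, fp, fn):
    """Percentage of correctly classified modules:
    (TP + TN) / (TP + FP + FN + TN) x 100."""
    return (tp + tn) / (tp + fp + fn + tn) * 100
```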
To determine the performance stability of the prediction models, the Coefficient of Variation (C.V) was applied to their results. C.V, the percentage ratio of the standard deviation (SD) to the average (AVE), is used to remove the effect of differences in averages when comparing stability [15,40]. The formula for C.V is:
C.V = (SD / AVE) × 100%
Prediction models with high C.V values are regarded as unstable.
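The stability measure can be sketched in a few lines; note that the paper does not state whether population or sample standard deviation was used, so the choice below is an assumption:

```python
import statistics

def coefficient_of_variation(accuracies):
    """C.V = (SD / AVE) x 100%; lower values indicate a more stable model.
    Population SD is assumed here; sample SD is an equally reasonable choice."""
    return statistics.pstdev(accuracies) / statistics.mean(accuracies) * 100
```

A model whose accuracy is identical on every dataset has a C.V of zero, i.e., it is maximally stable.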

4. Experimental Results

This section presents the experimental results based on the experimental procedure (see Figure 1). The performance of each prediction model was analyzed based on accuracy, and the results were compared in two cases (with and without the application of FS methods). The parameter settings for each classifier and search method are shown in Table 3 and Table 4. All prediction models were built with the WEKA machine learning tool [41]. For replication purposes, all models and datasets used in this study are readily available in [42].
As mentioned previously, the four FFR and fourteen FSS methods were applied one after the other to the five software defect datasets. Four classifiers, based on both the selected and the full features of the datasets, were used to develop the SDP models. Table 6, Table 7, Table 8 and Table 9 present the accuracy values of the four classifiers on the software defect datasets under two scenarios (with and without FS methods). Specifically, each table shows the accuracy of one classifier on the individual datasets (full and reduced). The search methods for the FSS methods (CFS and CNS) were further grouped into two categories (heuristic and exhaustive) to show how the respective search methods behave when used as subset evaluation methods in FSS. The average performance of the classifiers (NB, DT, LR, and KNN) across all datasets was computed, together with the variation of the average performance of each prediction model with FS (FFR and FSS) methods relative to its average performance without FS methods. This shows how significant the effect of applying FS methods in SDP is. As shown in Table 6, Table 7, Table 8 and Table 9, the accuracy of the prediction models based on FS methods (here, FFR and FSS) was better than when no FS method was applied. This further strengthens the evidence that FS methods can improve the performance of prediction models in SDP.
Specifically, considering the average accuracy of the prediction models based on the NB classifier, as shown in Table 6, NB with CFS using the BAT heuristic search method had the highest average accuracy value, 84.65%. This is better than the NB model without FS methods (81.14%) by 4.33%. The same goes for prediction models based on the DT classifier, as shown in Table 7: DT with CFS using the BAT heuristic search method had the best average accuracy value of 86.66%, an increment of 2.33% compared with DT without FS methods (84.50%). In the case of the LR classifier, as presented in Table 8, LR with CNS based on GS had the highest average accuracy value of 86.98%, a positive variation of 1.22% compared with LR without FS methods (85.93%). From Table 9, KNN with CNS based on BFS had the highest average accuracy value of 83.70%, a positive variation of 3.92% compared with no FS methods. It was also observed that LR with CNS based on GS had the highest average accuracy value (86.98%) across all prediction models, and NB with CFS based on the BAT heuristic search had the highest positive variation (4.33%). This clearly shows that FS methods have a positive effect on the prediction models, as the average accuracy values of each classifier without FS methods are lower than when FS methods are applied. Our findings on the positive effect of FS methods on prediction models are in accordance with outcomes from existing empirical studies: Ghotra et al. [28], Afzal and Torkar [26], and Akintola et al. [18], in their respective studies, also reported that FS methods had a positive effect on prediction models in SDP. However, our study explored the effect of FS methods on prediction models based on the search method, which differs from existing empirical studies.
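The positive variations quoted above follow from a simple percentage-gain computation over the reported averages; the function name is ours:

```python
def variation(with_fs, without_fs):
    """Percentage gain of an FS-based model over its no-FS baseline."""
    return (with_fs - without_fs) / without_fs * 100
```

For example, the NB figures (84.65% with CFS+BAT versus 81.14% without FS) give the reported 4.33%, and the LR figures (86.98% versus 85.93%) give the reported 1.22%.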
Furthermore, assessing the accuracy of each prediction model on each dataset shows how these prediction models perform under different FS methods. Table 10, Table 11, Table 12, Table 13 and Table 14 present the comparisons of the FS (FFR and FSS) methods on each of the five datasets, respectively. Considering the number of features generated by the FS methods, the number of FFR features is pre-set at log2(N) (where N is the number of features), while the number of features for the FSS methods depends on the search method used (heuristic or exhaustive). Across all datasets and FS methods used in this study, the number of features generated by CFS is smaller than that generated by CNS. On the CM1 dataset, with respect to CFS, LR with the PSO search method and DT with the BAT search method both had an accuracy value of 87.16%, with LR+CFS+PSO selecting eight features and DT+CFS+BAT five features. The same was observed for CNS: LR with GA search, DT with GSS search, and LR with BF search all had an accuracy value of 87.16%; LR with GA search used more features (twelve), while LR with BF search selected just one feature. However, among the FFR methods, DT with CV based on Ranker search had the highest accuracy value of 87.46%, which means the FFR method gave the best performance on the CM1 dataset, as presented in Table 10. From Table 11, LR with CNS based on AS (77.11%) outperforms all prediction models on the KC1 dataset; this model was built on seventeen features as selected by AS. Other FS methods on KC1 selected fewer features, but their respective prediction models had lower accuracy. In addition, on the KC3 dataset, as presented in Table 12, DT with CFS based on BAT and GA search had the highest accuracy value of 82.99% with two features selected, while LR with CFS based on BAT search had the best accuracy value on the MW1 dataset (Table 13). In Table 14, DT and LR with CFS based on BAT and AS, respectively, had an accuracy value of 97.78% on the PC2 dataset.
The FFR methods also had similar accuracy values on PC2 based on DT with RFA, GR, and CV. Clearly, there was no significant difference in the performance of the FS methods, as their respective performance and effect vary with the dataset and the choice of classification algorithm. This outcome is related to the findings of Xu et al. [29], Kondo et al. [31], and Muthukumaran et al. [30], although on average, FSS methods proved to be better than FFR methods.
From the aforementioned results, it is clear that there is no significant difference in the accuracy values of the FS methods. That is, the performance of FS methods depends largely on the dataset, as the best subset of features varies from one dataset to another. In addition, it was observed that the FSS (CFS and CNS) methods selected varying numbers of features. This presents a very interesting case of how the number of selected features affects the performance of prediction models. Some studies argue that the fewer the features, the better the performance [26,37,43]. However, in this study, it was observed that the number of features selected depends largely on the FS method used: CNS often selects more features, and the prediction models based on CNS outperform those based on other FS methods. In addition, as presented in Table 15, among the FFR methods, IG had the best influence on the prediction models, while among the FSS methods, CNS based on BFS had the best influence. However, CFS based on BAT gave the best improvement in the performance of NB and DT, and CNS based on GS and BFS improved the performance of LR and KNN best, respectively.
Furthermore, we conducted a stability test on the FS methods across the different prediction models using the average accuracy values from the experimental results. We calculated the Standard Deviation (SD) and the Coefficient of Variation (C.V), as presented in Appendix A, Table A1. The FFR methods produced more stable accuracy results across the prediction models compared with the FSS methods. Consequently, even though there is no significant difference among the prediction models based on the variety of FS methods considered in this study, FFR proves to be more stable than the FSS methods, having lower C.V values. The best prediction model developed with each classifier is illustrated in Figure 2. In Table 15, prediction models with FS methods outperform models developed without feature selection. This indicates the importance of FS methods when developing an SDP model, regardless of which family (characteristics) the classification algorithm belongs to. Figure 3 shows the positive gain, that is, the variation of the prediction models with FS methods relative to the prediction models without FS methods. Figure 4 and Figure 5 show the performance stability of the prediction models developed with FS methods, depicting the standard deviation (SD) and Coefficient of Variation (C.V) values for each FS method, respectively. Lastly, Figure 6 pictorially presents the performance of the FS methods based on average accuracy values on each dataset.
In conclusion, below is a summary of our findings from this comparative study:
  • FS methods are very important and useful, as they improve the performance of prediction models.
  • Based on the individual performance accuracy values, FS methods had the highest improvement on the predictive performance of NB classifier.
  • CFS based on (AS, BAT, GS, FS, PSO, BFS, and GSS) selects (automatically) the minimum number of features.
  • On average, as presented in Table 15, CFS based on BAT had the highest positive variation on NB and DT, while CNS based on GS and BFS had the highest positive variation on LR and KNN, respectively.
  • FFR had the lowest C.V values, which makes it more stable than the FSS methods (see Table A1 in Appendix A).

5. Threats to Validity

This section discusses the threats to the validity of our comparative study. According to Wohlin et al. [44], empirical software engineering is becoming relevant and a vital factor of any empirical study is to analyze and mitigate threats to the validity of the experimental results.
External validity: This concerns the ability to generalize the experimental study. Five software defect datasets, which have been extensively utilized in defect prediction, were used in this study. Although these datasets differ in their characteristics (numbers of instances and attributes) and come from a commonly used corpus (NASA), we cannot generalize the conclusions of this study to other software defect datasets. However, this study provides a comprehensive experimental setup, with applicable parameter tuning and settings, which makes it possible for researchers to replicate it on other software defect datasets.
Internal validity: This concerns the choice of prediction models and feature selection methods. Gao et al. [45] stated that factors such as the choice of software applications, classification algorithm selection, and noisy datasets affect the internal validity of SDP. In this study, we selected four classification algorithms based on performance and heterogeneity (see Table 4), and these classification algorithms are widely used in SDP. Specifically, 18 methods based on two FS techniques with seven search methods were used in this study. Nonetheless, future studies can consider other FS techniques and new search methods.
Construct validity: This validity focuses on the choice of performance metrics used to evaluate the performance of prediction models. In this study, accuracy which measures the percentage of the correctly classified instances was employed and Co-efficient of Variation (C.V) was applied to the results of the prediction models to determine the performance stability of prediction models. However, other performance metrics such as Area under Curve (AUC) and F-Measure may also be applicable.
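For concreteness, the two measures used in this study can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the population form of the standard deviation is an assumption on our part, chosen because it reproduces the values reported in Table A1.

```python
def accuracy(y_true, y_pred):
    """Percentage of correctly classified instances."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return 100.0 * correct / len(y_true)

def coefficient_of_variation(values):
    """C.V. (%) = standard deviation / mean * 100.
    Lower values indicate more stable performance across settings."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return 100.0 * sd / mean

# Average accuracies of NB under CFS with the seven search methods
# (from Table 6); the resulting C.V. matches the 0.73 in Table A1.
nb_cfs = [83.39, 84.65, 83.24, 83.18, 84.56, 83.32, 83.21]
print(round(coefficient_of_variation(nb_cfs), 2))  # 0.73
```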

6. Conclusions and Future Work

SDP can assist software engineers in identifying defect-prone modules in a software system and consequently streamline the deployment of limited resources during the Software Development Life Cycle (SDLC). However, the performance of SDP depends on the quality of software defect datasets, which suffer from high dimensionality. Hence, the selection of relevant and non-redundant features from software defect datasets is imperative for building strong prediction models in SDP. This study conducted a comparative performance analysis of eighteen FS methods on five software defect datasets from the NASA repository with four classification algorithms. The FS methods comprised two filter feature subset selection (FSS) methods (CFS and CNS), each combined with seven different search methods (BFS, BAT, FS, AS, GSS, PSOS, and GS), and four filter feature ranking (FFR) methods (IG, RFA, GR, and CV) based on the ranker search method. From the experimental results, IG recorded the best improvement on the prediction models among the FFR methods, while CNS based on BFS had the best influence on the prediction models among the FSS methods. In addition, further analysis showed that prediction models based on FFR are more stable than those based on the other FS methods. We conclude that the performance of FS methods varies across datasets and that classifiers can behave differently under the same FS method. This may be due to class imbalance, a primary data quality problem in data science. In the future, we intend to examine how other data quality problems, such as class imbalance and outliers, affect FS methods in SDP.
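As an illustration of the FFR pipeline evaluated in this study, the following is a minimal sketch of ranking features by Information Gain (approximated here via scikit-learn's mutual information estimator) and keeping the top log2(N) of them before training a Naive Bayes model. This is not the authors' implementation (which used WEKA [41]); the dataset here is a synthetic stand-in for the NASA defect data.

```python
import math
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 200 modules, 37 metrics (as in CM1/MW1/PC2).
rng = np.random.default_rng(0)
X = rng.random((200, 37))
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # synthetic defect labels

# NumToSelect = log2(N), as in the ranker configuration (Table 3).
k = math.ceil(math.log2(X.shape[1]))
selector = SelectKBest(mutual_info_classif, k=k)
X_reduced = selector.fit_transform(X, y)

# Evaluate the reduced dataset with 10-fold cross-validation.
score = cross_val_score(GaussianNB(), X_reduced, y, cv=10).mean()
print(f"{k} features kept, mean 10-fold accuracy: {score:.2f}")
```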

Author Contributions

Conceptualization, A.O.B.; Investigation, A.O.B. and A.S.H.; Supervision, S.B.; Validation, S.J.A.; Writing—original draft, A.O.B.; Writing—review & editing, S.B., S.J.A. and A.S.H.

Funding

This research received no external funding.

Acknowledgments

This research was partly supported by Ministry of Higher Education Malaysia, under the Fundamental Research Grant Scheme (FRGS) with Ref. No. FRGS/1/2018/ICT04/UTP/02/04.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Performance Stability of FS methods on Prediction Models based on Average Accuracy Values.

Classifier  Metric  CFS   CNS   FFR
NB          SD      0.61  0.62  0.42
NB          C.V.    0.73  0.74  0.50
DT          SD      0.37  0.46  0.42
DT          C.V.    0.43  0.54  0.49
LR          SD      0.22  0.33  0.09
LR          C.V.    0.25  0.38  0.10
KNN         SD      0.42  0.78  0.21
KNN         C.V.    0.51  0.95  0.26
Boldface indicates the lowest value for each FS method.

References

  1. Fenton, N.; Bieman, J. Software Metrics: A Rigorous and Practical Approach; CRC Press: Boca Raton, FL, USA, 2014. [Google Scholar]
  2. Ali, M.M.; Huda, S.; Abawajy, J.; Alyahya, S.; Al-Dossari, H.; Yearwood, J. A parallel framework for software defect detection and metric selection on cloud computing. Clust. Comput. 2017, 20, 2267–2281. [Google Scholar] [CrossRef]
  3. Yadav, H.B.; Yadav, D.K. A fuzzy logic based approach for phase-wise software defects prediction using software metrics. Inf. Softw. Technol. 2015, 63, 44–57. [Google Scholar] [CrossRef]
  4. Huda, S.; Alyahya, S.; Ali, M.M.; Ahmad, S.; Abawajy, J.; Al-Dossari, H.; Yearwood, J. A Framework for Software Defect Prediction and Metric Selection. IEEE Access 2018, 6, 2844–2858. [Google Scholar] [CrossRef]
  5. Li, Z.; Jing, X.-Y.; Zhu, X. Progress on approaches to software defect prediction. IET Softw. 2018, 12, 161–175. [Google Scholar] [CrossRef]
  6. Tan, M.; Tan, L.; Dara, S.; Mayeux, C. Online Defect Prediction for Imbalanced Data. In Proceedings of the 37th International Conference on Software Engineering-Volume 2, Florence, Italy, 16–24 May 2015; IEEE Press: Piscataway, NJ, USA, 2015; pp. 99–108. [Google Scholar]
  7. Tantithamthavorn, C.; McIntosh, S.; Hassan, A.E.; Matsumoto, K. An empirical comparison of model validation techniques for defect prediction models. IEEE Trans. Softw. Eng. 2017, 43, 1–18. [Google Scholar] [CrossRef]
  8. Jing, X.-Y.; Wu, F.; Dong, X.; Xu, B. An improved SDA based defect prediction framework for both within-project and cross-project class-imbalance problems. IEEE Trans. Softw. Eng. 2017, 43, 321–339. [Google Scholar] [CrossRef]
  9. Tong, H.; Liu, B.; Wang, S. Software defect prediction using stacked denoising autoencoders and two-stage ensemble learning. Inf. Softw. Technol. 2018, 96, 94–111. [Google Scholar] [CrossRef]
  10. Arar, Ö.F.; Ayan, K. Software defect prediction using cost-sensitive neural network. Appl. Soft Comput. 2015, 33, 263–277. [Google Scholar] [CrossRef]
  11. Zhang, F.; Zheng, Q.; Zou, Y.; Hassan, A.E. Cross-project defect prediction using a connectivity-based unsupervised classifier. In Proceedings of the 38th International Conference on Software Engineering, Austin, TX, USA, 14–22 May 2016; pp. 309–320. [Google Scholar]
  12. Herbold, S.; Trautsch, A.; Grabowski, J. A comparative study to benchmark cross-project defect prediction approaches. IEEE Trans. Softw. Eng. 2018, 44, 811–833. [Google Scholar] [CrossRef]
  13. Kamei, Y.; Fukushima, T.; McIntosh, S.; Yamashita, K.; Ubayashi, N.; Hassan, A.E. Studying just-in-time defect prediction using cross-project models. Empir. Softw. Eng. 2016, 21, 2072–2106. [Google Scholar] [CrossRef]
  14. Grbac, T.G.; Mausa, G.; Basic, B.D. Stability of Software Defect Prediction in Relation to Levels of Data Imbalance. In Proceedings of the 2nd Workshop of Software Quality Analysis, Monitoring, Improvement, and Applications (SQAMIA), Novi Sad, Serbia, 15–17 September 2013; pp. 1–10. [Google Scholar]
  15. Yu, Q.; Jiang, S.; Zhang, Y. The performance stability of defect prediction models with class imbalance: An empirical study. IEICE Trans. Inf. Syst. 2017, 100, 265–272. [Google Scholar] [CrossRef]
  16. Balogun, A.O.; Bajeh, A.O.; Orie, V.A.; Yusuf-Asaju, A.W. Software Defect Prediction Using Ensemble Learning: An ANP Based Evaluation Method. FUOYE J. Eng. Technol. 2018, 3, 50–55. [Google Scholar]
  17. Jimoh, R.; Balogun, A.; Bajeh, A.; Ajayi, S. A PROMETHEE based evaluation of software defect predictors. J. Comput. Sci. Its Appl. 2018, 25, 106–119. [Google Scholar]
  18. Akintola, A.G.; Balogun, A.O.; Lafenwa-Balogun, F.B.; Mojeed, H.A. Comparative Analysis of Selected Heterogeneous Classifiers for Software Defects Prediction Using Filter-Based Feature Selection Methods. FUOYE J. Eng. Technol. 2018, 3, 134–137. [Google Scholar]
  19. Agarwal, S.; Tomar, D. Prediction of Software Defects Using Twin Support Vector Machine. In Proceedings of the 2014 International Conference on Information Systems and Computer Networks (ISCON), Mathura, India, 1–2 March 2014; IEEE: Piscataway, NJ, USA; pp. 128–132. [Google Scholar]
  20. Chutia, D.; Bhattacharyya, D.K.; Sarma, J.; Raju, P.N.L. An effective ensemble classification framework using random forests and a correlation based feature selection technique. Trans. GIS 2017, 21, 1165–1178. [Google Scholar] [CrossRef]
  21. Khalid, S.; Khalil, T.; Nasreen, S. A Survey of Feature Selection and Feature Extraction Techniques in Machine Learning. In Proceedings of the 2014 Science and Information Conference (SAI), London, UK, 27–29 August 2014; IEEE: Piscataway, NJ, USA; pp. 372–378. [Google Scholar]
  22. Chinnaswamy, A.; Srinivasan, R. Hybrid Feature Selection Using Correlation Coefficient and Particle Swarm Optimization on Microarray Gene Expression Data. In Innovations in Bio-Inspired Computing and Applications; Springer: Berlin, Germany, 2016; pp. 229–239. [Google Scholar]
  23. Nakariyakul, S. High-dimensional hybrid feature selection using interaction information-guided search. Knowl. Based Syst. 2018, 145, 59–66. [Google Scholar] [CrossRef]
  24. Sheikhpour, R.; Sarram, M.A.; Gharaghani, S.; Chahooki, M.A.Z. A survey on semi-supervised feature selection methods. Pattern Recognit. 2017, 64, 141–158. [Google Scholar] [CrossRef]
  25. Wah, Y.B.; Ibrahim, N.; Hamid, H.A.; Abdul-Rahman, S.; Fong, S. Feature Selection Methods: Case of Filter and Wrapper Approaches for Maximising Classification Accuracy. Pertanika J. Sci. Technol. 2018, 26, 329–340. [Google Scholar]
  26. Afzal, W.; Torkar, R. Towards Benchmarking Feature Subset Selection Methods for Software Fault Prediction. In Computational Intelligence and Quantitative Software Engineering; Springer: Berlin, Germany, 2016; pp. 33–58. [Google Scholar]
  27. Gao, K.; Khoshgoftaar, T.M.; Wang, H.; Seliya, N. Choosing software metrics for defect prediction: an investigation on feature selection techniques. Softw. Pract. Exp. 2011, 41, 579–606. [Google Scholar] [CrossRef]
  28. Ghotra, B.; McIntosh, S.; Hassan, A.E. A Large-Scale Study of the Impact of Feature Selection Techniques on Defect Classification Models. In Proceedings of the 2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR), Buenos Aires, Argentina, 20–21 May 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 146–157. [Google Scholar]
  29. Xu, Z.; Liu, J.; Yang, Z.; An, G.; Jia, X. The Impact of Feature Selection on Defect Prediction Performance: An Empirical Comparison. In Proceedings of the 2016 IEEE 27th International Symposium on Software Reliability Engineering (ISSRE), Ottawa, ON, Canada, 23–27 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 309–320. [Google Scholar]
  30. Muthukumaran, K.; Rallapalli, A.; Murthy, N. Impact of feature selection techniques on bug prediction models. In Proceedings of the 8th India Software Engineering Conference, Bangalore, India, 18–20 February 2015; ACM: New York, NY, USA, 2015; pp. 120–129. [Google Scholar]
  31. Kondo, M.; Bezemer, C.-P.; Kamei, Y.; Hassan, A.E.; Mizuno, O. The impact of feature reduction techniques on defect prediction models. Empir. Softw. Eng. 2019, 1–39. [Google Scholar] [CrossRef]
  32. Rathore, S.S.; Gupta, A. A Comparative Study of Feature-Ranking and Feature-Subset Selection Techniques for Improved Fault Prediction. In Proceedings of the 7th India Software Engineering Conference, Chennai, India, 19–21 February 2014; ACM: New York, NY, USA, 2014; p. 7. [Google Scholar]
  33. Lee, S.-J.; Xu, Z.; Li, T.; Yang, Y. A novel bagging C4. 5 algorithm based on wrapper feature selection for supporting wise clinical decision making. J. Biomed. Inf. 2018, 78, 144–155. [Google Scholar] [CrossRef] [PubMed]
  34. Zemmal, N.; Azizi, N.; Sellami, M.; Zenakhra, D.; Cheriguene, S.; Dey, N.; Ashour, A.S. Robust feature selection algorithm based on transductive SVM wrapper and genetic algorithm: application on computer-aided glaucoma classification. Int. J. Intell. Syst. Technol. Appl. 2018, 17, 310–346. [Google Scholar] [CrossRef]
  35. Rodriguez, D.; Ruiz, R.; Cuadrado-Gallego, J.; Aguilar-Ruiz, J.; Garre, M. Attribute Selection in Software Engineering Datasets for Detecting Fault Modules. In Proceedings of the 33rd EUROMICRO Conference on Software Engineering and Advanced Applications (EUROMICRO 2007), Lubeck, Germany, 28–31 August 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 418–423. [Google Scholar]
  36. Kumar, C.A.; Sooraj, M.; Ramakrishnan, S. A comparative performance evaluation of supervised feature selection algorithms on microarray datasets. Procedia Comput. Sci. 2017, 115, 209–217. [Google Scholar] [CrossRef]
  37. Ibrahim, D.R.; Ghnemat, R.; Hudaib, A. Software Defect Prediction using Feature Selection and Random Forest Algorithm. In Proceedings of the 2017 International Conference on New Trends in Computing Sciences (ICTCS), Amman, Jordan, 11–13 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 252–257. [Google Scholar]
  38. Menzies, T.; Greenwald, J.; Frank, A. Data mining static code attributes to learn defect predictors. IEEE Trans. Softw. Eng. 2007, 33, 2–13. [Google Scholar] [CrossRef]
  39. Shepperd, M.; Song, Q.; Sun, Z.; Mair, C. Data quality: Some comments on the NASA software defect datasets. IEEE Trans. Softw. Eng. 2013, 39, 1208–1215. [Google Scholar] [CrossRef]
  40. Japkowicz, N.; Stephen, S. The class imbalance problem: A systematic study. Intell. Data Anal. 2002, 6, 429–449. [Google Scholar] [CrossRef]
  41. Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software: an update. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18. [Google Scholar] [CrossRef]
  42. Balogun, A.O. SDP_FS_ComparativeStudy Git Repository. 2019. Available online: https://github.com/bharlow058/SDP_FS_ComparativeStudy.git (accessed on 9 May 2019).
  43. Belouch, M.; Elhadaj, S.; Idhammad, M. A hybrid filter-wrapper feature selection method for DDoS detection in cloud computing. Intell. Data Anal. 2018, 22, 1209–1226. [Google Scholar] [CrossRef]
  44. Wohlin, C.; Runeson, P.; Höst, M.; Ohlsson, M.C.; Regnell, B.; Wesslén, A. Experimentation in Software Engineering; Springer Science & Business Media: Berlin, Germany, 2012. [Google Scholar]
  45. Gao, K.; Khoshgoftaar, T.M.; Seliya, N. Predicting high-risk program modules by selecting the right software measurements. Softw. Qual. J. 2012, 20, 3–42. [Google Scholar] [CrossRef]
Figure 1. Experimental Setup.
Figure 2. Average Accuracy for Prediction Models.
Figure 3. Prediction Models with the Highest Variation.
Figure 4. Performance Stability of Prediction Models (SD).
Figure 5. Performance Stability of Prediction Models (CV).
Figure 6. Performance of Feature Selection Method on each Dataset.
Table 1. List of Filter Feature Ranking (FFR) Methods.

Filter Feature Ranking Method                  Search Method  Characteristics    Reference
Information Gain Attribute Evaluator (IG)      Ranker Search  Probability-based  [18,32]
Relief Feature Attribute Evaluator (RFA)       Ranker Search  Instance-based     [29,32]
Gain Ratio Attribute Evaluator (GR)            Ranker Search  Probability-based  [29,32]
Clustering Variation Attribute Evaluator (CV)  Ranker Search  Statistics-based   [18,29]
Table 2. List of Filter Feature Subset Selection (FSS) Methods.

Filter Feature Subset Selection Method            Search Method                 Reference
Correlation-based Feature Subset Selection (CFS)  Best First Search (BFS)       [32,36]
                                                  Greedy Stepwise Search (GSS)  [29,36]
                                                  Ant Search (AS)
                                                  Bat Search (BAT)
                                                  Firefly Search (FS)
                                                  Genetic Search (GS)           [32,36]
                                                  PSO Search (PSOS)
Consistency Feature Subset Selection (CNS)        Best First Search (BFS)       [32,36]
                                                  Greedy Stepwise Search (GSS)  [29,36]
                                                  Ant Search (AS)
                                                  Bat Search (BAT)
                                                  Firefly Search (FS)
                                                  Genetic Search (GS)           [32,36]
                                                  PSO Search (PSOS)
Table 3. Search Methods and Parameter Settings.

Search Method           Parameter Settings
Best First Search       Direction = Bi-directional
Greedy Stepwise Search  ConservativeForwardSelection = True; SearchBackwards = False; NumToSelect = log2(N), where N = number of features
Ant Search              AccelerateType = Accelerate; ChaoticCoefficient = 0.4; ChaoticParameterType = Chaotic Map: Parameter; ChaoticType = logistic map; PopulationSize = 200; Pheromone = 2.0
Bat Search              AccelerateType = Accelerate; ChaoticCoefficient = 0.4; ChaoticParameterType = logistic Map: Parameter; ChaoticType = logistic map; PopulationSize = 200; Loudness = 0.5
Firefly Search          AccelerateType = Accelerate; ChaoticCoefficient = 0.4; ChaoticParameterType = logistic Map: Parameter; ChaoticType = logistic map; PopulationSize = 200; Absorption = 0.001; BetaMin = 0.33
Genetic Search          PopulationSize = 200; MaxGeneration = 20; CrossoverProb = 0.6
PSO Search              PopulationSize = 200; IndividualWeight = 0.34; InertiaWeight = 0.33; SocialWeight = 0.33
Table 4. Classification Algorithms.

Classifier                       Description                                           Parameter Setting
Naïve Bayes (NB)                 A Bayes theorem-based classification technique.       NumDecimalPlaces = 2; NumAttrEval = Normal Dist.
Decision Tree (DT)               A tree-based classification technique.                ConfidenceFactor = 0.25; MinNumObj = 2
Kernel Logistic Regression (LR)  A function-based classification technique.            Kernel = PolyKernel (E = 1.0; C = 250007); lambda = 0.01; Quadratic penalty = BFGS optimization
K Nearest Neighbor (KNN)         An instance learning-based classification technique.  K = 1; NNSearch = LinearNNSearch (based on Euclidean distance)
Table 5. Software Defect Datasets.

S/No.  Dataset  Language  Number of Features  Number of Modules
1      CM1      C         37                  327
2      KC1      C++       21                  1162
3      KC3      Java      39                  194
4      MW1      C         37                  250
5      PC2      C         37                  722
Table 6. Accuracy Values of Naive Bayes Classifier on Full and Reduced Datasets.

Model   CM1    KC1    KC3    MW1    PC2    Average (%)  Variation (%)
No Feature Selection
NB      81.35  73.58  78.87  81.60  90.30  81.14        0
NB + CFS + Heuristic Search
NB+GA   84.10  73.84  80.41  84.40  94.18  83.39        2.77
NB+BAT  85.32  74.96  81.96  86.00  95.01  84.65        4.33
NB+PSO  83.18  73.84  79.90  84.80  94.46  83.24        2.58
NB+FS   82.87  73.92  80.41  84.80  93.91  83.18        2.52
NB+AS   86.49  75.13  80.41  85.20  95.57  84.56        4.22
NB + CFS + Exhaustive Search
NB+GSS  83.79  73.58  80.93  84.40  93.91  83.32        2.69
NB+BF   83.18  73.84  79.90  84.80  94.32  83.21        2.55
NB + CNS + Heuristic Search
NB+GA   81.96  73.49  79.38  85.20  94.18  82.84        2.10
NB+BAT  81.65  73.49  78.87  84.00  94.46  82.49        1.67
NB+PSO  82.87  73.41  80.41  85.60  94.87  83.43        2.83
NB+FS   81.35  73.41  80.41  83.60  93.63  82.48        1.65
NB+AS   82.87  73.49  79.38  83.60  94.46  82.76        2.00
NB + CNS + Exhaustive Search
NB+GSS  81.65  75.39  79.90  86.00  93.91  83.37        2.75
NB+BF   85.32  73.32  80.93  84.40  97.78  84.35        3.96
NB + Filter Method (FFR)
NB+IG   84.10  74.78  80.93  83.60  93.91  83.46        2.87
NB+RFA  80.12  72.63  78.87  84.40  96.68  82.54        1.73
NB+GR   84.10  73.24  80.41  84.40  94.18  83.27        2.62
NB+CV   81.96  74.61  79.38  86.40  95.84  83.64        3.08
Boldface indicates the highest value for each dataset.
Table 7. Accuracy Values of Decision Tree Classifier on Full and Reduced Datasets.

Model   CM1    KC1    KC3    MW1    PC2    Average (%)  Variation (%)
No Feature Selection
DT      81.04  74.18  79.38  90.40  97.51  84.50        0
DT + CFS + Heuristic Search
DT+GA   85.32  75.82  82.99  89.60  97.51  86.25        2.06
DT+BAT  87.16  74.96  82.99  90.40  97.78  86.66        2.55
DT+PSO  86.54  75.65  81.96  89.20  97.51  86.17        1.97
DT+FS   85.93  75.22  80.41  88.40  97.65  85.52        1.21
DT+AS   85.93  75.13  82.47  89.60  97.51  86.13        1.92
DT + CFS + Exhaustive Search
DT+GSS  86.54  74.10  80.41  89.20  97.51  85.55        1.24
DT+BF   86.54  75.47  81.44  89.60  97.51  86.11        1.91
DT + CNS + Heuristic Search
DT+GA   86.24  73.75  80.93  89.20  97.65  85.55        1.24
DT+BAT  85.63  75.30  80.41  89.20  97.51  85.61        1.31
DT+PSO  86.24  74.44  80.93  88.40  97.65  85.53        1.22
DT+FS   83.49  74.44  78.87  89.60  97.65  84.81        0.36
DT+AS   85.93  74.44  80.93  89.20  97.37  85.57        1.27
DT + CNS + Exhaustive Search
DT+GSS  87.16  75.56  80.41  91.60  97.65  86.47        2.33
DT+BF   86.85  74.44  80.93  89.60  97.78  85.92        1.68
DT + Filter Method (FFR)
DT+IG   86.54  74.96  82.47  90.00  97.65  86.32        2.16
DT+RF   86.24  76.25  79.90  87.60  97.78  85.55        1.24
DT+GR   86.24  75.04  81.96  90.40  97.78  86.28        2.11
DT+CV   87.46  74.01  78.87  88.80  97.78  85.38        1.04
Boldface indicates the highest value for each dataset.
Table 8. Accuracy Values of Logistic Regression Classifier on Full and Reduced Datasets.

Model   CM1    KC1    KC3    MW1    PC2    Average (%)  Variation (%)
No Feature Selection
LR      85.32  76.76  82.47  88.00  97.09  85.93        0
LR + CFS + Heuristic Search
LR+GA   85.93  75.82  80.93  90.00  97.51  86.04        0.12
LR+BAT  85.32  75.56  81.44  91.20  97.51  86.21        0.32
LR+PSO  87.16  76.16  82.47  89.60  97.65  86.61        0.79
LR+FS   86.24  75.47  82.99  90.00  97.51  86.44        0.60
LR+AS   85.93  75.47  82.47  88.80  97.78  86.09        0.19
LR + CFS + Exhaustive Search
LR+GSS  86.24  76.76  82.47  90.00  97.51  86.60        0.78
LR+BF   86.85  75.90  82.47  89.20  97.65  86.41        0.56
LR + CNS + Heuristic Search
LR+GA   87.16  76.59  82.99  90.80  97.37  86.98        1.22
LR+BAT  85.02  76.08  81.44  89.60  97.51  85.93        0.00
LR+PSO  85.02  76.51  81.44  90.40  97.65  86.20        0.32
LR+FS   85.32  76.51  82.47  90.00  97.23  86.31        0.44
LR+AS   84.71  77.11  81.96  90.40  97.23  86.28        0.41
LR + CNS + Exhaustive Search
LR+GSS  86.54  76.33  81.96  89.60  97.65  86.42        0.57
LR+BF   87.16  76.51  81.96  90.40  97.78  86.76        0.97
LR + Filter Method (FFR)
LR+IG   86.54  75.47  81.96  90.40  97.51  86.38        0.52
LR+RF   87.16  76.42  80.93  89.20  97.65  86.27        0.40
LR+GR   87.16  75.82  81.96  88.80  97.09  86.16        0.27
LR+CV   86.85  75.65  80.41  90.40  97.51  86.16        0.27
Boldface indicates the highest value for each dataset.
Table 9. Accuracy Values of K Nearest Neighbor Classifier on Full and Reduced Datasets.

Model    CM1    KC1    KC3    MW1    PC2    Average (%)  Variation (%)
No Feature Selection
KNN      77.98  73.24  72.16  83.60  95.71  80.54        0
KNN + CFS + Heuristic Search
KNN+GA   77.68  71.26  78.87  84.00  96.54  81.67        1.40
KNN+BAT  77.37  72.98  78.87  84.40  96.12  81.95        1.75
KNN+PSO  80.43  70.57  75.77  84.00  96.81  81.52        1.21
KNN+FS   81.04  70.40  74.23  84.00  95.84  81.10        0.70
KNN+AS   78.59  71.69  75.26  82.00  95.29  80.57        0.03
KNN + CFS + Exhaustive Search
KNN+GSS  78.90  71.34  77.32  84.00  96.40  81.59        1.31
KNN+BF   80.43  70.40  75.77  82.80  96.81  81.24        0.87
KNN + CNS + Heuristic Search
KNN+GA   81.35  73.49  74.74  83.60  96.26  81.89        1.68
KNN+BAT  79.20  73.75  77.84  84.40  96.12  82.26        2.14
KNN+PSO  79.51  73.49  78.87  82.80  97.09  82.35        2.25
KNN+FS   80.73  73.58  75.26  82.80  96.12  81.70        1.44
KNN+AS   76.15  72.81  76.29  83.60  96.40  81.05        0.63
KNN + CNS + Exhaustive Search
KNN+GSS  77.68  69.88  78.87  85.60  95.84  81.57        1.28
KNN+BF   85.32  73.67  77.32  84.40  97.78  83.70        3.92
KNN + Filter Method (FFR)
KNN+IG   77.37  70.57  73.71  84.80  96.26  80.54        0.00
KNN+RF   81.64  73.06  68.56  84.00  96.12  80.68        0.17
KNN+GR   76.76  71.86  74.23  84.40  96.12  80.67        0.17
KNN+CV   77.98  69.97  74.74  86.80  95.98  81.09        0.69
Boldface indicates the highest value for each dataset.
Table 10. Performance Accuracy Values of FS-based Prediction Models on CM1 dataset.

FILTER-BASED SUBSET SELECTION METHODS (FSS)
Attribute Evaluator: CfsSubsetEval (CFS)
Search Method     Features  NB     DT     LR     KNN
GA (Heuristic)    7         84.10  85.32  85.93  77.68
BAT (Heuristic)   5         85.32  87.16  85.32  77.37
PSO (Heuristic)   8         83.18  86.54  87.16  80.43
FS (Heuristic)    7         82.87  85.93  86.24  81.04
AS (Heuristic)    5         86.49  85.93  85.93  78.59
GSS (Exhaustive)  5         83.79  86.54  86.24  78.90
BF (Exhaustive)   5         83.18  86.54  86.85  80.43
Average                     84.13  86.28  86.24  79.20
Attribute Evaluator: ConsistencySubsetEval (CNS)
Search Method     Features  NB     DT     LR     KNN
GA (Heuristic)    12        81.96  86.24  87.16  81.35
BAT (Heuristic)   12        81.65  85.63  85.02  79.20
PSO (Heuristic)   6         82.87  86.24  85.02  79.51
FS (Heuristic)    15        81.35  83.49  85.32  80.73
AS (Heuristic)    8         82.87  85.93  84.71  76.15
GSS (Exhaustive)  6         81.65  87.16  86.54  77.68
BF (Exhaustive)   1         85.32  86.85  87.16  85.32
Average                     82.53  85.93  85.85  79.99
FILTER-BASED FEATURE RANKING METHODS (FFR)
Attribute Evaluator  Search  Features  NB     DT     LR     KNN
IG                   Ranker  6         84.10  86.54  86.54  77.37
RFA                  Ranker  6         80.12  86.24  87.16  81.64
GR                   Ranker  6         84.10  86.24  87.16  76.76
CV                   Ranker  6         81.96  87.46  86.85  77.98
Average                                82.57  86.62  86.93  78.44
Boldface indicates the highest value for each classifier.
Table 11. Performance Accuracy Values of FS-based Prediction Models on KC1 dataset.

FILTER-BASED SUBSET SELECTION METHODS (FSS)
Attribute Evaluator: CfsSubsetEval (CFS)
Search Method     Features  NB     DT     LR     KNN
GA (Heuristic)    8         73.84  75.82  75.82  71.26
BAT (Heuristic)   4         74.96  74.96  75.56  72.98
PSO (Heuristic)   8         73.84  75.65  76.16  70.57
FS (Heuristic)    4         73.92  75.22  75.47  70.40
AS (Heuristic)    2         75.13  75.13  75.47  71.69
GSS (Exhaustive)  6         73.58  74.10  76.76  71.34
BF (Exhaustive)   8         73.84  75.47  75.90  70.40
Average                     74.16  75.19  75.88  71.23
Attribute Evaluator: ConsistencySubsetEval (CNS)
Search Method     Features  NB     DT     LR     KNN
GA (Heuristic)    11        73.49  73.75  76.59  73.49
BAT (Heuristic)   17        73.49  75.30  76.08  73.75
PSO (Heuristic)   16        73.41  74.44  76.51  73.49
FS (Heuristic)    16        73.41  74.44  76.51  73.58
AS (Heuristic)    17        73.49  74.44  77.11  72.81
GSS (Exhaustive)  6         75.39  75.56  76.33  69.88
BF (Exhaustive)   16        73.32  74.44  76.51  73.67
Average                     73.72  74.63  76.52  72.95
FILTER-BASED FEATURE RANKING METHODS (FFR)
Attribute Evaluator  Search  Features  NB     DT     LR     KNN
IG                   Ranker  6         74.78  74.96  75.47  70.57
RFA                  Ranker  6         72.63  76.25  76.42  71.86
GR                   Ranker  6         73.24  75.04  75.82  71.86
CV                   Ranker  6         74.61  74.01  75.65  69.97
Average                                73.82  75.06  75.84  71.06
Boldface indicates the highest value for each classifier.
Table 12. Performance Accuracy Values of FS-based Prediction Models on KC3 dataset.

FILTER-BASED SUBSET SELECTION METHODS (FSS)
Attribute Evaluator: CfsSubsetEval (CFS)
Search Method     Features  NB     DT     LR     KNN
GA (Heuristic)    2         80.41  82.99  80.93  78.87
BAT (Heuristic)   2         81.96  82.99  81.44  78.87
PSO (Heuristic)   3         79.90  81.96  82.47  75.77
FS (Heuristic)    3         80.41  80.41  82.99  74.23
AS (Heuristic)    2         80.41  82.47  82.47  75.26
GSS (Exhaustive)  6         80.93  80.41  82.47  77.32
BF (Exhaustive)   3         79.90  81.44  82.47  75.77
Average                     80.56  81.81  82.18  76.58
Attribute Evaluator: ConsistencySubsetEval (CNS)
Search Method     Features  NB     DT     LR     KNN
GA (Heuristic)    9         79.38  80.93  82.99  74.74
BAT (Heuristic)   17        78.87  80.41  81.44  77.84
PSO (Heuristic)   6         80.41  80.93  81.44  78.87
FS (Heuristic)    12        80.41  78.87  82.47  75.26
AS (Heuristic)    13        79.38  80.93  81.96  76.29
GSS (Exhaustive)  6         79.90  80.41  81.96  78.87
BF (Exhaustive)   5         80.93  80.93  81.96  77.32
Average                     79.90  80.49  82.03  77.03
FILTER-BASED FEATURE RANKING METHODS (FFR)
Attribute Evaluator  Search  Features  NB     DT     LR     KNN
IG                   Ranker  6         80.93  82.47  81.96  73.71
RFA                  Ranker  6         78.87  79.90  80.93  67.53
GR                   Ranker  6         80.41  81.96  81.96  74.23
CV                   Ranker  6         79.38  78.87  80.41  74.74
Average                                79.90  80.80  81.31  72.55
Boldface indicates the highest value for each classifier.
Table 13. Performance Accuracy Values of FS-based Prediction Models on MW1 dataset.

FILTER-BASED SUBSET SELECTION METHODS (FSS)
Attribute Evaluator: CfsSubsetEval (CFS)
Search Method     Features  NB    DT    LR    KNN
GA (Heuristic)    8         84.4  89.6  90.0  84.0
BAT (Heuristic)   9         86.0  90.4  91.2  84.4
PSO (Heuristic)   7         84.8  89.2  89.6  84.0
FS (Heuristic)    9         84.8  88.4  90.0  84.0
AS (Heuristic)    7         85.2  89.6  88.8  82.0
GSS (Exhaustive)  6         84.4  89.2  90.0  84.0
BF (Exhaustive)   7         84.8  89.6  89.2  82.8
Average                     84.91 89.43 89.83 83.60
Attribute Evaluator: ConsistencySubsetEval (CNS)
Search Method     Features  NB    DT    LR    KNN
GA (Heuristic)    11        85.2  89.2  90.8  83.6
BAT (Heuristic)   17        84.0  89.2  89.6  84.4
PSO (Heuristic)   8         85.6  88.4  90.4  82.8
FS (Heuristic)    17        83.6  89.6  90.0  82.8
AS (Heuristic)    13        83.6  89.2  90.4  83.6
GSS (Exhaustive)  6         86.0  91.6  89.6  85.6
BF (Exhaustive)   9         84.4  89.6  90.4  84.4
Average                     84.63 89.54 90.17 83.89
FILTER-BASED FEATURE RANKING METHODS (FFR)
Attribute Evaluator  Search  Features  NB    DT    LR    KNN
IG                   Ranker  6         83.6  90.0  90.4  84.8
RFA                  Ranker  6         84.4  87.6  89.2  84.0
GR                   Ranker  6         84.4  90.4  88.8  84.4
CV                   Ranker  6         86.4  88.8  90.4  86.8
Average                                84.70 89.20 89.70 85.00
Boldface indicates the highest value for each classifier.
Table 14. Performance Accuracy Values of FS-based Prediction Models on PC2 dataset.

FILTER-BASED SUBSET SELECTION METHODS (FSS)
Attribute Evaluator: CfsSubsetEval (CFS)
Search Method     Features  NB     DT     LR     KNN
GA (Heuristic)    5         94.18  97.51  97.51  96.54
BAT (Heuristic)   5         95.01  97.78  97.51  96.12
PSO (Heuristic)   5         94.46  97.51  97.65  96.81
FS (Heuristic)    5         93.91  97.65  97.51  95.84
AS (Heuristic)    6         95.57  97.51  97.78  95.29
GSS (Exhaustive)  6         93.91  97.51  97.51  96.40
BF (Exhaustive)   5         94.32  97.51  97.65  96.81
Average                     94.48  97.57  97.59  96.26
Attribute Evaluator: ConsistencySubsetEval (CNS)
Search Method     Features  NB     DT     LR     KNN
GA (Heuristic)    15        94.18  97.65  97.37  96.26
BAT (Heuristic)   17        94.46  97.51  97.51  96.12
PSO (Heuristic)   9         94.87  97.65  97.65  97.09
FS (Heuristic)    17        93.63  97.65  97.23  96.12
AS (Heuristic)    16        94.46  97.37  97.23  96.40
GSS (Exhaustive)  6         93.91  97.65  97.65  95.84
BF (Exhaustive)   1         97.78  97.78  97.78  97.78
Average                     94.76  97.61  97.49  96.52
FILTER-BASED FEATURE RANKING METHODS (FFR)
Attribute Evaluator  Search  Features  NB     DT     LR     KNN
IG                   Ranker  6         93.91  97.65  97.51  96.26
RFA                  Ranker  6         96.68  97.78  97.65  96.12
GR                   Ranker  6         94.18  97.78  97.09  96.12
CV                   Ranker  6         95.84  97.78  97.51  95.98
Average                                95.15  97.75  97.44  96.12
Boldface indicates the highest value for each classifier.
Table 15. Performance (Accuracy) Variation of Prediction Models with different FS methods.

FS Method  Search  NB     Var (%)  DT     Var (%)  LR     Var (%)  KNN    Var (%)
No FS      -       81.14  0        84.50  0        85.93  0        80.54  0
FSS (CFS)  GA      83.39  2.77     86.25  2.06     86.04  0.12     81.67  1.40
           BAT     84.65  4.33     86.66  2.55     86.21  0.32     81.95  1.75
           PSO     83.24  2.58     86.17  1.97     86.61  0.79     81.52  1.21
           FS      83.18  2.52     85.52  1.21     86.44  0.60     81.10  0.70
           AS      84.56  4.22     86.13  1.92     86.09  0.19     80.57  0.03
           GSS     83.32  2.69     85.55  1.24     86.60  0.78     81.59  1.31
           BF      83.21  2.55     86.11  1.91     86.41  0.56     81.24  0.87
FSS (CNS)  GA      82.84  2.10     85.55  1.24     86.98  1.22     81.89  1.68
           BAT     82.49  1.67     85.61  1.31     85.93  0.00     82.26  2.14
           PSO     83.43  2.83     85.53  1.22     86.20  0.32     82.35  2.25
           FS      82.48  1.65     84.81  0.36     86.31  0.44     81.70  1.44
           AS      82.76  2.00     85.57  1.27     86.28  0.41     81.05  0.63
           GSS     83.37  2.75     86.47  2.33     86.42  0.57     81.57  1.28
           BF      84.35  3.96     85.92  1.68     86.76  0.97     83.70  3.92
FFR        IG      83.46  2.87     86.32  2.16     86.38  0.52     80.54  0.00
           RFA     82.54  1.73     85.55  1.24     86.27  0.40     80.68  0.17
           GR      83.27  2.58     86.28  2.08     86.16  0.27     80.67  0.17
           CV      83.64  3.03     85.38  1.03     86.16  0.27     81.09  0.69
Boldface indicates the highest value for each classifier.
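The Variation (%) column in Table 15 appears to be the relative change in average accuracy against the no-FS baseline; this derivation is our own reading of the table, not stated explicitly by the authors, but it reproduces the reported values:

```python
def variation(avg_with_fs, avg_no_fs):
    """Relative change (%) of average accuracy versus the no-FS baseline."""
    return 100.0 * (avg_with_fs - avg_no_fs) / avg_no_fs

# Checks against the NB column of Table 15 (baseline 81.14):
print(round(variation(83.39, 81.14), 2))  # CFS + GA  -> 2.77
print(round(variation(84.65, 81.14), 2))  # CFS + BAT -> 4.33
```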

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).