Article

Beam-Influenced Attribute Selector for Producing Stable Reduct

School of Computer, Jiangsu University of Science and Technology, Zhenjiang 212100, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(4), 553; https://doi.org/10.3390/math10040553
Submission received: 22 January 2022 / Revised: 5 February 2022 / Accepted: 8 February 2022 / Published: 11 February 2022
(This article belongs to the Special Issue Data Mining: Analysis and Applications)

Abstract

Attribute reduction is a critical topic in the field of rough set theory. Currently, to further enhance the stability of the derived reduct, various attribute selectors have been designed based on the framework of ensemble selectors. Nevertheless, it must be pointed out that these selectors conceal some limitations: (1) they rely heavily on the distribution of samples; (2) they rely heavily on the optimal attribute. To generate reducts with higher stability, a novel beam-influenced selector (BIS) is designed based on the strategies of random partition and beam. The scientific novelty of our selector lies in two aspects: (1) samples are randomly partitioned without considering their distribution; (2) beam-based selection of attributes frees the selector from dependence on the optimal attribute. Comprehensive experiments over 16 UCI data sets show the following: (1) the stability of the derived reducts may be significantly enhanced by using our selector; (2) the reducts generated based on the proposed selector can provide competent performance in classification tasks.

1. Introduction

In the era of big data and artificial intelligence, the scale of data is growing massively [1]. Without loss of generality, high dimensionality has become one of the crucial characteristics of modern data [2]. For data analyses related to practical applications [3,4], such a characteristic may bring huge challenges, which can be divided into the following two aspects: (1) in the process of data production, redundant or irrelevant dimensions may lead to low data quality; (2) the storage and processing of data face tremendous difficulties because of the interference caused by the numerous dimensions. Therefore, how to reasonably reduce the dimensionality of data has become an urgent problem to be addressed.
As one of the effective technologies for realizing dimension reduction, feature selection [5,6,7,8] has been widely explored. Different from other dimension reduction technologies, such as feature extraction, the aim of feature selection is to determine some informative features and then select them from the original features. By using feature selection, it is expected that the selected features will provide better readability and interpretability [9,10] for the learning models. Presently, various feature selection approaches have been developed for diverse requirements. Among the popular results, it is noteworthy that rough-set-based feature selection has received much attention. The reasons can be attributed to the following superiorities: (1) its clear semantic explanation is useful for terminating the process of feature selection; (2) the extensibility of rough set theory is extremely important because it offers rich measurements for evaluating the significance of features.
Concerning rough-set-based feature selection, the output is frequently referred to as the reduct because such feature selection was first named attribute reduction by Pawlak [11] in rough set theory [11,12,13,14]. Through reviewing numerous results with respect to attribute reduction, it can be observed that most of the research is motivated by either the time efficiency of seeking the reduct or the generalization ability of the obtained reduct [15,16,17,18]. Nevertheless, to the best of our knowledge, few studies take data perturbation into account. In other words, the stability of the obtained reduct [19,20] is seldom considered.
The stability of the reduct is defined as the "sensitivity of reducts generated by an algorithm to the differences of training sets drawn from the same generating distribution". In simple terms, reduct stability reflects how much the reducts vary when data perturbation happens. A reduct with higher stability will bring stable learning results, and it can enhance the confidence of domain experts when experimentally validating the selected attributes to interpret important discoveries. It is obvious that stability is one important metric for evaluating the performance of derived reducts. Therefore, the main research problem of this study is how to generate a reduct with higher stability when data perturbation happens.
In view of searching for a stable reduct, Yang and Yao [21] first designed a naive ensemble selector. Though the framework of such a selector is rough, it preliminarily introduced multiple perspectives [22,23] for determining attributes with better adaptability. Consequently, the reduct generated by using an ensemble selector may be more robust than those obtained in previous research.
However, through a critical review of the results related to ensemble selectors, some limitations can easily be revealed: (1) such a selector relies heavily on the distribution of samples; (2) such a selector also relies heavily on the optimal attribute of each perspective. The former indicates that the multiple perspectives in the ensemble selector are constructed based on the distribution of samples; from this point of view, the critical requirement of following the raw distribution may greatly limit the application of ensemble selectors to complex data. The latter indicates that one and only one best attribute is determined for each perspective in each iteration of the ensemble selector; therefore, a locally optimal solution instead of the globally optimal solution may be derived.
By considering the above limitations and the requirement of deriving a stable reduct, a beam-influenced attribute selector is designed in this paper. Such a selector contains two main keys: (1) randomly partition the samples [24], which can be regarded as the source of multiple beams; (2) add two or more potential attributes into each beam [25]. Therefore, different from the conventional ensemble selector, our selector is neither required to preserve the raw distribution of samples, because a random partition over samples is employed, nor required to determine one and only one satisfactory attribute in each iteration, because the beam is used to record multiple candidate attributes. It follows that our beam-influenced attribute selector provides two different perspectives of the ensemble: one from the perspective of samples and the other from the perspective of attributes. Therefore, more bases for realizing ensemble selection can be obtained, and then attributes with stronger adaptability can be determined. This is the inherent reason why our beam-influenced attribute selector may output a stable reduct.
The contribution of this study can be divided into the following three aspects. Firstly, through analyzing the framework of the ensemble selector, some limitations of this framework are revealed; these limitations can be summarized into two aspects: (1) heavy reliance on the distribution of samples; (2) heavy reliance on the selection of optimal features. Secondly, to overcome these limitations of the ensemble selector, a novel beam-influenced selector (BIS) is developed. Finally, through the observation and analysis of extensive experimental results, it is verified that our selector can be effectively used to generate stable reducts, and these reducts generally possess competent classification performance.
The remainder of this paper is organized as follows. In Section 2, some basic concepts with respect to attribute reduction and measurement of stability are introduced briefly. In Section 3, ensemble selector-based attribute reduction and beam-influenced selector-based attribute reduction are presented elaborately. In Section 4, experimental results and corresponding analyses are reported clearly. Finally, some conclusions and suggestions for future work are offered in Section 5.

2. Preliminaries

2.1. Attribute Reduction

Currently, the relationship between attributes and labels not only provides guidance for constructing various attribute reduction-related constraints [11,26,27,28] in the field of attribute reduction [29,30,31] but also suggests positive heuristic information for generating a reduct based on appropriate attributes.
Up to now, it is well-known that many relationships and the corresponding constraints have been thoroughly explored and that the forms of attribute reduction vary considerably [32,33,34,35]. Different reducts possess different semantic explanations, which are closely determined by the used relationships.
However, though the forms of relationships and constraints are rich in previous research, it must be noted that the essence of attribute reduction can be further abstracted. Such an abstraction can not only reveal the clear framework of attribute reduction but also provide a broad space for introducing popular techniques into the study of attribute reduction-related topics. The following Definition 1 shows us a detailed abstraction [15].
Definition 1.
[36] Given a decision system $\mathrm{DS} = \langle U, AT \cup \{d\} \rangle$, in which $U$ and $AT$ are nonempty finite sets of samples and condition attributes, respectively, and $d$ is the decision attribute. Suppose that $C_{\rho}^{U}$ is a constraint associated with a pre-defined measure $\rho$; then $A \subseteq AT$ is considered as a $\rho$-reduct if and only if the following conditions are satisfied:
(1) 
$A$ holds the constraint $C_{\rho}^{U}$;
(2) 
$\forall B \subset A$, $B$ does not hold the constraint $C_{\rho}^{U}$.
In Definition 1, $\rho$ can be regarded as a function that maps $\mathcal{P}(U) \times \mathcal{P}(AT)$ to the set of real numbers $\mathbb{R}$, where $\mathcal{P}(\cdot)$ is the power set operation.
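To make the two conditions of Definition 1 concrete, a minimal Python sketch is given below; the predicate holds_constraint is a hypothetical stand-in for "the subset satisfies $C_{\rho}^{U}$" and is not part of the original formulation.

```python
from itertools import combinations

def is_rho_reduct(A, holds_constraint):
    """Brute-force check of the two conditions of Definition 1 for a candidate subset A.

    A: tuple of condition attributes.
    holds_constraint: callable B -> bool, True iff subset B satisfies the constraint C_rho^U.
    """
    if not holds_constraint(A):            # condition (1): A itself must satisfy C_rho^U
        return False
    for r in range(len(A)):                # condition (2): no proper subset may satisfy C_rho^U
        for B in combinations(A, r):
            if holds_constraint(B):
                return False
    return True
```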
Immediately, a natural problem is how to generate a satisfactory reduct. Because of its low time complexity, the heuristic greedy searching strategy has received much attention. Most greedy searchings [30,37,38] can be grouped into the following three categories.
(1)
Forward greedy searching. For each iteration, one or more appropriate attributes can be selected based on the evaluations of the candidate attributes. Thereby, through using sufficient iterations, a satisfactory reduct can be generated.
(2)
Backward greedy searching. For each iteration, one or more inferior attributes will be removed from the set of the raw attributes based on the evaluations. Then, through using sufficient iterations, a justifiable reduct can also be obtained.
(3)
Forward-Backward greedy searching. In such a strategy, both the forward and backward strategies are employed for seeking a reasonable reduct. For example, firstly, a potential reduct can be generated based on the strategy of forward searching; secondly, the redundant attributes in such a potential reduct can be further removed by the backward searching.
Among the above strategies, forward-backward greedy searching is the most widely used. The reasons can be summarized as follows: such a strategy not only guarantees that informative attributes are preferentially added into the potential reduct but also ensures that there are no redundant attributes in the obtained reduct. The details of forward-backward greedy searching are presented as follows.
In Algorithm 1, $\forall a \in AT$, the fitness value is calculated by using the fitness function $\phi$ ($\phi$ maps $a$ to a real number), which quantitatively characterizes the significance of each candidate attribute. It must be pointed out that the fitness function is generally associated with the measure $\rho$ [36].
Algorithm 1: Forward-backward greedy searching for attribute reduction (FBGSAR).
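Since Algorithm 1 is given as pseudocode, the following Python fragment is offered purely as an illustrative sketch of the same forward-backward idea; the callables fitness and holds_constraint are assumed stand-ins for the measure-specific components, and this is not the authors' MATLAB implementation.

```python
def fbgsar(attributes, fitness, holds_constraint):
    """Forward-backward greedy search for a reduct (illustrative sketch).

    attributes: iterable of candidate condition attributes.
    fitness: callable (attribute, current_reduct) -> float, larger means more significant.
    holds_constraint: callable subset -> bool, True iff the subset satisfies C_rho^U.
    """
    reduct, remaining = [], list(attributes)
    # Forward phase: repeatedly add the attribute with the highest fitness value
    while remaining and not holds_constraint(reduct):
        best = max(remaining, key=lambda a: fitness(a, reduct))
        reduct.append(best)
        remaining.remove(best)
    # Backward phase: remove attributes that are redundant with respect to the constraint
    for a in list(reduct):
        trial = [b for b in reduct if b != a]
        if holds_constraint(trial):
            reduct = trial
    return reduct
```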

2.2. Measurement of Stability

Following the above discussions, testing the performance of the derived reduct is an inevitable problem. In addition to the generalization performance related to classifiers, the stability of the derived reduct is another aspect that should be paid attention to [21].
For instance, suppose that data perturbation results in significant variations of the reducts; this means that the generated reducts are unstable. Obviously, unstable reducts will lead to unstable learning results.
In this paper, the stability represents the degree of reduct perturbation when data perturbation happens. From this point of view, the reduct stability [39,40] is defined as Definition 2.
Definition 2.
[21] Given a decision system $\mathrm{DS}$, if $U$ is split into $U_1, U_2, \ldots, U_m$ ($|U_1| = |U_2| = \cdots = |U_m|$), then the stability of the reduct is
$$\mathrm{Sta}_{\mathrm{Reduct}} = \frac{1}{m(m-1)} \sum_{i=1}^{m} \sum_{j=1, j \neq i}^{m} \frac{|A_i \cap A_j|}{|A_i \cup A_j|}, \qquad (1)$$
where $A_i$ is the reduct obtained over $U \setminus U_i$.
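A direct Python transcription of Equation (1), assuming each of the m reducts is represented as a set of attribute names, could be written as follows.

```python
from itertools import permutations

def reduct_stability(reducts):
    """Average pairwise Jaccard similarity of the m reducts, i.e., Equation (1)."""
    m = len(reducts)
    pairs = permutations(reducts, 2)                          # ordered pairs with i != j
    return sum(len(A & B) / len(A | B) for A, B in pairs) / (m * (m - 1))

# Example with m = 3 reducts obtained over U \ U_1, U \ U_2 and U \ U_3
print(reduct_stability([{"a1", "a2"}, {"a1", "a3"}, {"a1", "a2"}]))
```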

3. Beam-Influenced Selector for Attribute Reduction

3.1. Ensemble Selector for Attribute Reduction

As shown in Algorithm 1, each candidate attribute is evaluated based on a single fitness function. Nevertheless, it is well-known that a single fitness function will bring the following challenges [41].
(1)
A single fitness function may lead to poorer adaptability. For example, a reduct generated over a single granularity may fail to qualify as a reduct over another granularity, such as a slightly finer or coarser granularity [41] generated by a slight data perturbation.
(2)
A single fitness function may result in poorer learning performance. For instance, samples in different classes possess some distinct characteristics [31] that tend to optimize class-specific measurements. Nevertheless, revealing the differences among these characteristics using merely one fitness function is quite challenging.
To overcome the above limitations, a representative ensemble selector-based attribute reduction has been designed by Yang and Yao [21]. Different from Algorithm 1, a set of fitness functions is used in the ensemble selector for evaluating each candidate attribute. The detailed process of searching for a reduct based on the ensemble selector is shown in Algorithm 2.
Actually, the fitness function is designed to measure the significance of each candidate attribute. Therefore, different perspectives can be constructed by using different fitness functions. For this reason, the fitness function sets used in Algorithm 2 can be grouped into the following two broad categories.
(1)
Homogeneous fitness function set: the set of fitness functions $\{\phi_1, \phi_2, \ldots, \phi_n\}$ is constructed using the same evaluation criterion. For instance, the fitness function set can be defined based on the approximation qualities of different rough set models, and then the generated reduct can better adapt to different models.
(2)
Heterogeneous fitness function set: the set of fitness functions $\{\phi_1, \phi_2, \ldots, \phi_n\}$ is built using different evaluation criteria. For example, the fitness function set can be defined based on many different measures of a rough set model, such as approximation quality and entropy, and then the derived reduct will better adapt to different constraints.
Algorithm 2: Ensemble selector-based attribute reduction (ESAR).
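The voting idea behind the ensemble selector can be sketched in Python as follows; this is a simplification offered only for illustration, not a faithful reproduction of Algorithm 2: in each round, every fitness function nominates its best candidate attribute, and the attribute receiving the most nominations is added.

```python
from collections import Counter

def esar(attributes, fitness_functions, holds_constraint):
    """Ensemble-selector search: every fitness function votes for one attribute per round."""
    reduct, remaining = [], list(attributes)
    while remaining and not holds_constraint(reduct):
        votes = Counter(
            max(remaining, key=lambda a: phi(a, reduct))   # best candidate for this perspective
            for phi in fitness_functions
        )
        winner, _ = votes.most_common(1)[0]                # attribute chosen by most perspectives
        reduct.append(winner)
        remaining.remove(winner)
    return reduct
```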
Algorithm 2 substitutes a set of fitness functions for a single fitness function. Therefore, compared with the conventional approach, the ensemble selector-based strategy may effectively improve the stability of the obtained reducts. The crucial reason is that the candidate attributes can be evaluated from multiple perspectives, so the selected attributes may be equipped with greater adaptability and generality.
Following the basic principle of Algorithm 2, some different forms of ensemble selectors [21,41,42] have also been investigated. However, it must be pointed out that most of these results suffer from a couple of limitations shown as follows.
(1)
Rely heavily on the distribution of samples. Take the classical ensemble selector proposed by Yang and Yao [21] as an example: each fitness function is constructed based on the samples with the same label. Therefore, the performance of the used fitness functions will be degraded if the sample distribution is seriously imbalanced or there are only a few sample categories.
(2)
Rely heavily on the selection of appropriate features. Take the selector proposed by Jiang et al. [42] as an example: only the optimal attribute is selected based on each fitness function. Therefore, some candidate attributes with potential importance will be ignored, which means that some attributes with strong adaptability will be difficult to determine.

3.2. Beam-Influenced Selector-Based Attribute Reduction

Considering what has been pointed out in the above subsection, those limitations may lead to poor adaptability of the selected attributes. Motivated by this, two strategies called random partition [24] and beam [25] are employed in our attribute selector. The detailed structure of the beam-influenced selector (BIS) is shown in Figure 1.
The details of our beam-influenced selector (BIS) shown in Figure 1 can be elaborated as follows:
(1)
Divide the set of raw data into $n$ ($n \in \{1, 2, \ldots, |U|/5\}$) groups in terms of the samples;
(2)
Different fitness functions $\phi_1, \phi_2, \ldots, \phi_n$ are constructed based on the $n$ different groups of samples;
(3)
Each candidate attribute can be evaluated by the different fitness functions, and then the top $w\%$ ($0 < w \leq 50$) of attributes with respect to the evaluation results of each fitness function will be added into the multiset T;
(4)
Select an attribute b with the maximal frequency of occurrences in the multiset T.
Following the above discussions, our searching strategy may be equipped with the following superiorities. Firstly, it will not be influenced by the distribution of samples; this is mainly because the whole universe is randomly partitioned into n different groups, and thus the distribution of each local data set is no longer the key concern. Secondly, more important attributes will be considered and added into the multiset T; those attributes are determined not only by the various local-data-based fitness functions but also by the beam-based top w% selection related to each local data set. Consequently, an attribute with stronger adaptability may be selected in each iteration.
The following Algorithm 3 shows the specific process of deriving a reduct based on our beam-influenced selector.
Algorithm 3: Beam-influenced selector-based attribute reduction (BISAR).
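The following Python fragment condenses steps (1)–(4) and the surrounding search loop into a single sketch; n_groups and w correspond to the hyperparameters n and w, build_fitness is a hypothetical factory that turns a group of samples into a local fitness function, and the fragment is an illustration rather than the authors' MATLAB code.

```python
import random
from collections import Counter

def bis_pick(remaining, reduct, samples, build_fitness, n_groups, w):
    """One beam-influenced selection round: random partition + top-w% beams + frequency vote."""
    shuffled = random.sample(samples, len(samples))            # shuffle a copy of the samples
    groups = [shuffled[i::n_groups] for i in range(n_groups)]  # random partition of U into n groups
    beam_size = max(1, int(len(remaining) * w / 100))          # top w% of candidates per view
    pool = Counter()                                           # the multiset T
    for group in groups:
        phi = build_fitness(group)                             # local-data-based fitness function
        ranked = sorted(remaining, key=lambda a: phi(a, reduct), reverse=True)
        pool.update(ranked[:beam_size])                        # add this beam to T
    return pool.most_common(1)[0][0]                           # most frequent attribute in T

def bisar(attributes, samples, build_fitness, holds_constraint, n_groups=20, w=20):
    """Beam-influenced selector-based search for a reduct (illustrative sketch)."""
    reduct, remaining = [], list(attributes)
    while remaining and not holds_constraint(reduct):
        b = bis_pick(remaining, reduct, samples, build_fitness, n_groups, w)
        reduct.append(b)
        remaining.remove(b)
    for a in list(reduct):                                     # drop attributes that turn out
        trial = [x for x in reduct if x != a]                  # to be redundant
        if holds_constraint(trial):
            reduct = trial
    return reduct
```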
In Algorithm 3, a set of homogeneous fitness functions is used, principally because (1) those different fitness functions correspond to the local data derived from the random partition over the raw data, and (2) although the local data are different, the evaluation mechanisms over those local data are the same.
Compared with previous studies, it is obvious that the number of ensemble members can be adjusted freely through both the number of local data sets/fitness functions and the number of beam-based top attributes. Put another way, each candidate attribute can be fully evaluated from more views. As a result, it is expected that the reduct generated by using Algorithm 3 may have higher stability.

4. Experimental Analysis

4.1. Data Sets and Configuration

To verify the validity and superiority of the proposed strategy, experimental comparisons are conducted in this section. All experiments were carried out on a personal computer with Windows 10, an AMD 3750 (2.60 GHz) and 4.00 GB of memory. The programming language is MATLAB R2018b, and the classifier functions (fitcecoc, fitcknn) used in the experiments are from MATLAB. The total run time of all experiments was about one month. Furthermore, Table 1 summarizes the details of the 16 UCI data sets used in our experiments.

4.2. Experimental Setup

The model used for deriving reducts in our experiments is the neighborhood rough set [26,43]. Note that the results obtained using the neighborhood rough set model depend closely on the given radius. For this reason, to verify the universality of BISAR, 20 radii were selected for the experiments: 0.02, 0.04, …, 0.40 [15,24,44].
Furthermore, 5-fold cross-validation was applied for searching reducts. The strategy of 5-fold cross-validation divides the whole set of samples into 5 disjoint groups, i.e., $U_1, U_2, \ldots, U_5$. In the first round of computation, $U_1 \cup \cdots \cup U_4$ is used as the set of training samples for searching for the reduct, and $U_5$ is used as the set of testing samples for evaluating the performance of the generated reduct; …; in the last round, $U_2 \cup \cdots \cup U_5$ is used as the set of training samples and $U_1$ as the set of testing samples [44]. Therefore, the experimental studies reported in this paper provide a solid basis for a thorough evaluation of BISAR's effectiveness.
Moreover, it is worth noting that besides the above raw data sets, two types of noise have also been injected into the raw data for further testing the effectiveness of our BISAR.
(1)
Feature noise. Given raw data, if the noise ratio is $\omega\%$, then the injection is realized by randomly selecting $\omega\%$ ($0 < \omega < 100$) of the features and replacing the values of these features with random numbers in the range [0, 1].
(2)
Label noise. Given raw data, if the noise ratio is $\omega\%$, then the injection is realized by randomly selecting $\omega\%$ ($0 < \omega < 100$) of the samples and randomly replacing the labels of these samples.
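A minimal sketch of these two injections, assuming the data are held as a NumPy feature matrix X with values in [0, 1] and a label vector y (not the authors' MATLAB code), could be:

```python
import numpy as np

def inject_feature_noise(X, omega, seed=0):
    """Replace a random omega% of the feature columns with uniform noise in [0, 1]."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    k = max(1, int(X.shape[1] * omega / 100))
    cols = rng.choice(X.shape[1], size=k, replace=False)
    X[:, cols] = rng.random((X.shape[0], k))
    return X

def inject_label_noise(y, labels, omega, seed=0):
    """Randomly relabel omega% of the samples."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    k = max(1, int(len(y) * omega / 100))
    idx = rng.choice(len(y), size=k, replace=False)
    y[idx] = rng.choice(labels, size=k)
    return y
```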
Finally, the feature noise ratio and the label noise ratio are set to 20%, and the hyperparameters n and w of BISAR are both set to 20. The following five strategies for obtaining reducts are also reproduced for comparison with our BISAR.
(1)
Attribute group based attribute reduction (AGAR) [15];
(2)
Dissimilarity based attribute reduction (DAR) [45];
(3)
Data-guidance based attribute reduction (DGAR) [42];
(4)
Ensemble selector-based attribute reduction (ESAR) [21];
(5)
Forward-backward greedy searching based attribute reduction (FBGSAR) [15].

4.3. Comparisons of Stability-Based Reducts

In this subsection, the stabilities of the reducts derived by AGAR, DAR, DGAR, ESAR, FBGSAR and our proposed strategy are compared. The stability reflects how data perturbation influences the reduct. Therefore, the stabilities of reducts obtained by different strategies can be computed based on 5-fold cross-validation; the computation of stability is shown in Equation (1). Since 20 different radii are used to obtain reducts in our experiments, Table 2, Table 3 and Table 4 report the mean value of the stabilities over the 20 different reducts.
From Table 2, Table 3 and Table 4, it is not difficult to observe the following.
  • Compared with the reducts generated by AGAR, DAR, DGAR, ESAR and FBGSAR, the reduct obtained by our proposed strategy possesses higher stability in most cases over the raw data. Take the "LSVT Voice Rehabilitation (ID: 6)" data set as an example: the stabilities of the reducts obtained by AGAR, DAR, DGAR, ESAR, FBGSAR and our proposed strategy are 0.1003, 0.2194, 0.1133, 0.1380, 0.1133 and 0.6188, respectively. It is obvious that a reduct with higher stability can be effectively generated by our BISAR.
  • Whether the label noise data or the feature noise data are considered, the reduct obtained by our proposed strategy always possesses high stability. Take the "LSVT Voice Rehabilitation (ID: 6)" data set as an example: over the label noise data, the stabilities of the reducts obtained by AGAR, DAR, DGAR, ESAR, FBGSAR and our proposed strategy are 0.0600, 0.1926, 0.0650, 0.0838, 0.0792 and 0.5738; over the feature noise data, the values are 0.0787, 0.1938, 0.0851, 0.1070, 0.1127 and 0.6256. It is not difficult to conclude that our proposed strategy can better adapt to data with label noise or feature noise.
In addition, the Wilcoxon signed rank test is employed to characterize the differences in stability among the strategies, in which the significance level is set to 0.05. The results of the Wilcoxon signed rank test are shown in Table 5, Table 6 and Table 7.
Through carefully observing Table 5, Table 6 and Table 7, it is obvious that the returned p-values are less than 0.05 in most cases. Additionally, based on the results shown in Table 2, Table 3 and Table 4, it is obvious that the stability of the reduct derived by using BISAR is significantly higher than that derived by using the other approaches.
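For reference, such a paired comparison can be reproduced with SciPy's Wilcoxon signed rank test; the stability values below are hypothetical placeholders for the per-radius results of two strategies on one data set, not values taken from the paper.

```python
from scipy.stats import wilcoxon

# Hypothetical per-radius stability values of two strategies on one data set
stab_bisar = [0.60, 0.61, 0.62, 0.63, 0.64, 0.65, 0.66, 0.67]
stab_esar  = [0.10, 0.12, 0.15, 0.19, 0.24, 0.30, 0.37, 0.45]

stat, p_value = wilcoxon(stab_bisar, stab_esar)   # paired, two-sided by default
print(f"Wilcoxon signed rank test p-value: {p_value:.4g}")
```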

4.4. Comparisons of Classification Performances

In this subsection, the classification performances of different reducts derived by AGAR, DAR, DGAR, ESAR, FBGSAR and our proposed strategy will be compared. The k-nearest neighbor (KNN) classifier and the support vector machine (SVM) classifier are employed to test the classification performance.
Presently, although many classifiers have been designed [46,47,48], the SVM classifier and the KNN classifier are still the two most commonly used in the field of feature selection [29,41,42,45,49]. The SVM classifier is a nonlinear classifier based on a sparse kernel. Such a classifier can map the data from a low-dimensional space to a high-dimensional space and convert nonlinearly separable data into linearly separable data; eventually, the classification task is completed on the linearly separable data. Note that the linear kernel is used in our experiments [49]. The KNN classifier is representative of lazy learning. Such a classifier calculates the distance between each test instance and each instance in the training set based on a distance measure (such as Euclidean distance or Hamming distance), and then the k nearest neighbors of each test instance are selected. Finally, the test instances are classified based on the classification decision rule. In our experiments, the value of k is set to 5 [49].
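An equivalent classifier setup in scikit-learn is sketched below purely for illustration (the experiments themselves used MATLAB's fitcknn and fitcecoc); the stand-in data generated here are not from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in data; in the paper, X would be the samples restricted to the attributes of the derived reduct
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)   # KNN with k = 5, as in the experiments
svm = SVC(kernel="linear")                  # SVM with a linear kernel, as in the experiments

print("KNN accuracy:", cross_val_score(knn, X, y, cv=5).mean())
print("SVM accuracy:", cross_val_score(svm, X, y, cv=5).mean())
```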
Since 20 different radii are used to generate reducts in experiments, the following Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13 show the mean accuracy related to 20 different reducts.
With a thorough investigation of Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13, we can observe that BISAR does not result in poorer classification accuracy over the KNN and SVM classifiers in most cases, whether raw data or noise data are considered. Take the "Sonar (ID: 13)" data set as an example: over the raw data, the classification accuracies based on the KNN classifier of the reducts obtained by AGAR, DAR, DGAR, ESAR, FBGSAR and our proposed strategy are 0.7400, 0.6963, 0.7351, 0.7534, 0.7351 and 0.8176; over the label noise data, the values are 0.6646, 0.6251, 0.6510, 0.6663, 0.6610 and 0.7090; over the feature noise data, the values are 0.7215, 0.6805, 0.7146, 0.7217, 0.7212 and 0.7873. Obviously, the reduct obtained by our proposed strategy always possesses a justifiable classification ability.
Furthermore, the Wilcoxon signed rank test is also employed to compare the classification accuracies. The results with respect to the Wilcoxon signed rank test are shown in the following Table 14, Table 15 and Table 16.
From Table 14, Table 15 and Table 16, in terms of the KNN classifier and the SVM classifier, no matter which algorithm is compared with our BISAR, the returned p-values are higher than 0.05 in most cases. This result further illustrates that the reduct generated based on BISAR can provide competitive performance in classification tasks.

5. Conclusions, Limitations, and Future Research

To generate a reduct with higher stability when data perturbation happens, the beam-influenced selector (BIS) is designed in this study. Our selector differs from other popular selectors in two respects: on the one hand, it does not depend on the original distribution of samples, because attribute evaluation is based on the local data obtained by the strategy of random partition; on the other hand, it does not rely heavily on the optimal attribute, because each candidate attribute can be fully considered based on the strategy of the beam. Therefore, attributes with stronger adaptability can be selected by using BIS, and then the reduct generated based on our selector will possess higher stability. The experimental results verify that our proposed selector can significantly enhance the stability of the derived reduct without leading to a poor generalization ability of the reduct. However, in terms of hyperparameter selection and the time consumption of deriving the reduct, some limitations exist in our strategy. Consequently, the following topics deserve further investigation.
  • The time consumption of deriving a reduct may be further reduced through fusing some acceleration strategies [15,18];
  • The hyperparameter selection will be further optimized based on some parameter optimization approaches;
  • The effectiveness of our strategy can be further verified by comparing it with other state-of-the-art feature selection strategies [50,51,52].

Author Contributions

Data curation, W.Y.; methodology, W.Y.; software, W.Y.; supervision, T.X.; visualization, W.Y.; writing—original draft, W.Y.; writing—review & editing, W.Y., J.B., T.X., H.Y., J.S. and B.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of China (Nos. 62076111, 62006099, 62006128, 61906078), the Postgraduate Research and Practice Innovation Program of Jiangsu Province (No. KYCX21_3507).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found here: http://archive.ics.uci.edu/ml/datasets (accessed on 10 January 2022). The code with respect to this study can be found here: https://github.com/syscode-yxb/yww-experimentCode (accessed on 10 January 2022).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AGAR: attribute group based attribute reduction
BIS: beam-influenced selector
BISAR: beam-influenced selector-based attribute reduction
DAR: dissimilarity based attribute reduction
DGAR: data-guidance based attribute reduction
ESAR: ensemble selector-based attribute reduction
FBGSAR: forward-backward greedy searching based attribute reduction
KNN: k-nearest neighbor
SVM: support vector machine

References

  1. Xu, W.H.; Yu, J.H. A novel approach to information fusion in multi-source datasets: A granular computing viewpoint. Inf. Sci. 2017, 378, 410–423. [Google Scholar] [CrossRef]
  2. Emani, C.K.; Cullot, N.; Nicolle, C. Understandable big data: A survey. Comput. Sci. Rev. 2015, 17, 70–81. [Google Scholar] [CrossRef]
  3. Xu, W.H.; Li, W.T. Granular computing approach to two-way learning based on formal concept analysis in fuzzy datasets. IEEE Trans. Cyber. 2016, 46, 366–379. [Google Scholar] [CrossRef] [PubMed]
  4. Yuan, K.H.; Xu, W.H.; Li, W.T.; Ding, W.Q. An incremental learning mechanism for object classification based on progressive fuzzy three-way concept. Inf. Sci. 2022, 584, 127–147. [Google Scholar] [CrossRef]
  5. Elaziz, M.A.; Abualigah, L.; Yousri, D.; Oliva, D.; Al-Qaness, M.A.A.; Nadimi-Shahraki, M.H.; Ewees, A.A.; Lu, S.; Ibrahim, R.A. Boosting atomic orbit search using dynamic-based learning for feature selection. Mathematics 2021, 9, 2786. [Google Scholar] [CrossRef]
  6. Khurma, R.A.; Aljarah, I.; Sharieh, A.; Elaziz, M.A.; Damaševičius, R.; Krilavičius, T. A review of the modification strategies of the nature inspired algorithms for feature selection problem. Mathematics 2022, 10, 464. [Google Scholar] [CrossRef]
  7. Li, J.D.; Liu, H. Challenges of feature selection for big data analytics. IEEE Intell. Syst. 2017, 32, 9–15. [Google Scholar] [CrossRef] [Green Version]
  8. Pérez-Martín, A.; Pérez-Torregrosa, A.; Rabasa, A.; Vaca, M. Feature selection to optimize credit banking risk evaluation decisions for the example of home equity loans. Mathematics 2020, 8, 1971. [Google Scholar] [CrossRef]
  9. Cai, J.; Luo, J.W.; Wang, S.L.; Yang, S. Feature selection in machine learning: A new perspective. Neurocomputing 2018, 300, 70–79. [Google Scholar] [CrossRef]
  10. Li, Y.; Li, T.; Liu, H. Recent advances in feature selection and its applications. Knowl. Inf. Syst. 2017, 53, 551–577. [Google Scholar] [CrossRef]
  11. Pawlak, Z. Rough Sets: Theoretical Aspects of Reasoning about Data; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1992. [Google Scholar]
  12. Ju, H.R.; Yang, X.B.; Yu, H.L.; Li, T.J.; Yu, D.J.; Yang, J.Y. Cost-sensitive rough set approach. Inf. Sci. 2016, 355–356, 282–298. [Google Scholar] [CrossRef]
  13. Liu, D.; Yang, X.; Li, T.R. Three-way decisions: Beyond rough sets and granular computing. Int. J. Mach. Learn. Cybern. 2020, 11, 989–1002. [Google Scholar] [CrossRef]
  14. Wang, C.Z.; Huang, Y.; Shao, M.W.; Fan, X.D. Fuzzy rough set-based attribute reduction using distance measures. Knowl. Based Syst. 2019, 164, 205–212. [Google Scholar] [CrossRef]
  15. Chen, Y.; Liu, K.Y.; Song, J.J.; Fujita, H.; Yang, X.B.; Qian, Y.H. Attribute group for attribute reduction. Inf. Sci. 2020, 535, 64–80. [Google Scholar] [CrossRef]
  16. Liu, K.Y.; Yang, X.B.; Yu, H.L.; Fujita, H.; Chen, X.J.; Liu, D. Supervised information granulation strategy for attribute reduction. Int. J. Mach. Learn. Cybern. 2020, 11, 2149–2163. [Google Scholar] [CrossRef]
  17. Liu, K.Y.; Yang, X.B.; Yu, H.L.; Mi, J.S.; Wang, P.X.; Chen, X.J. Rough set based semi-supervised feature selection via ensemble selector. Knowl. Based Syst. 2019, 165, 282–296. [Google Scholar] [CrossRef]
  18. Qian, Y.H.; Liang, J.Y.; Pedrycz, W.; Dang, C.Y. Positive approximation: An accelerator for attribute reduction in rough set theory. Artif. Intell. 2010, 174, 597–618. [Google Scholar] [CrossRef] [Green Version]
  19. Du, W.; Cao, Z.B.; Song, T.C.; Li, Y.; Liang, Y.C. A feature selection method based on multiple kernel learning with expression profiles of different types. BioData Min. 2017, 10, 4. [Google Scholar] [CrossRef] [Green Version]
  20. Goh, W.W.B.; Wong, L. Evaluating feature-selection stability in next generation proteomics. J. Bioinform. Comput. Biol. 2016, 14, 1650029. [Google Scholar] [CrossRef] [Green Version]
  21. Yang, X.B.; Yao, Y.Y. Ensemble selector for attribute reduction. Appl. Soft Comput. 2018, 70, 1–11. [Google Scholar] [CrossRef]
  22. Wu, W.Z.; Leung, Y. A comparison study of optimal scale combination selection in generalized multi-scale decision tables. Int. J. Mach. Learn. Cybern. 2020, 11, 961–972. [Google Scholar] [CrossRef]
  23. Wu, W.Z.; Qian, Y.H.; Li, T.J.; Gu, S.M. On rule acquisition in incomplete multi-scale decision tables. Inf. Sci. 2017, 378, 282–302. [Google Scholar] [CrossRef]
  24. Chen, Z.; Liu, K.Y.; Yang, X.B.; Fujitae, H. Random sampling accelerator for attribute reduction. Int. J. Approx. Reason. 2022, 140, 75–91. [Google Scholar] [CrossRef]
  25. Freitag, M.; Al-Onaizan, Y. Beam search strategies for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, Vancouver, BC, Canada, 4 August 2017; pp. 56–60. [Google Scholar]
  26. Hu, Q.H.; Yu, D.R.; Xie, Z.X. Neighborhood classifiers. Expert Syst. Appl. 2008, 34, 866–876. [Google Scholar] [CrossRef]
  27. Wang, C.Z.; Hu, Q.H.; Wang, X.Z.; Chen, D.G.; Qian, Y.H.; Dong, Z. Feature selection based on neighborhood discrimination index. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 2986–2999. [Google Scholar] [CrossRef] [PubMed]
  28. Zhang, X.; Mei, C.L.; Chen, D.G.; Li, J.H. Feature selection in mixed data: A method using a novel fuzzy rough set-based information entropy. Pattern Recognit. 2016, 56, 1–15. [Google Scholar] [CrossRef]
  29. Chen, Y.; Song, J.J.; Liu, K.Y.; Lin, Y.J.; Yang, X.B. Combined accelerator for attribute reduction: A sample perspective. Math. Probl. Eng. 2020, 2020, 2350627. [Google Scholar] [CrossRef]
  30. Jiang, Z.H.; Liu, K.Y.; Yang, X.B.; Yu, H.L.; Fujita, H.; Qian, Y.H. Accelerator for supervised neighborhood based attribute reduction. Int. J. Approx. Reason. 2020, 119, 122–150. [Google Scholar] [CrossRef]
  31. Xu, S.P.; Yang, X.B.; Yu, H.L.; Yu, D.J.; Yang, J.Y.; Tsang, E.C.C. Multi-label learning with label-specific feature reduction. Knowl. Based Syst. 2016, 104, 52–61. [Google Scholar] [CrossRef]
  32. Wu, W.Z. Attribute reduction based on evidence theory in incomplete decision systems. Inf. Sci. 2008, 178, 1355–1371. [Google Scholar] [CrossRef]
  33. Quafafou, M. α-RST: A generalization of rough set theory. Inf. Sci. 2000, 124, 301–316. [Google Scholar] [CrossRef]
  34. Skowron, A.; Rauszer, C. The discernibility matrices and functions in information systems. In Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory; Springer: Dordrecht, The Netherlands, 1992; Volume 11, pp. 331–362. [Google Scholar]
  35. Zhang, W.X.; Wei, L.; Qi, J.J. Attribute reduction theory and approach to concept lattice. Sci. China F Inf. Sci. 2005, 48, 713–726. [Google Scholar] [CrossRef]
  36. Yan, W.W.; Chen, Y.; Shi, J.L.; Yu, H.L.; Yang, X.B. Ensemble and quick strategy for searching Reduct: A hybrid mechanism. Information 2021, 12, 25. [Google Scholar] [CrossRef]
  37. Xia, S.Y.; Zhang, Z.; Li, W.H.; Wang, G.Y.; Giem, E.; Chen, Z.Z. GBNRS: A novel rough set algorithm for fast adaptive attribute reduction in classification. IEEE Trans. Knowl. Data Eng. 2020, 34, 1231–1242. [Google Scholar] [CrossRef]
  38. Yao, Y.Y.; Zhao, Y.; Wang, J. On reduct construction algorithms. Trans. Comput. Sci. II 2008, 5150, 100–117. [Google Scholar]
  39. Qian, Y.H.; Wang, Q.; Cheng, H.H.; Liang, J.Y.; Dang, C.Y. Fuzzy-rough feature selection accelerator. Fuzzy Sets Syst. 2015, 258, 61–78. [Google Scholar] [CrossRef]
  40. Yang, X.B.; Qi, Y.; Yu, H.L.; Song, X.N.; Yang, J.Y. Updating multigranulation rough approximations with increasing of granular structures. Knowl. Based Syst. 2014, 64, 59–69. [Google Scholar] [CrossRef]
  41. Liu, K.Y.; Yang, X.B.; Fujita, H.; Liu, D.; Yang, X.; Qian, Y.H. An efficient selector for multi-granularity attribute reduction. Inf. Sci. 2019, 505, 457–472. [Google Scholar] [CrossRef]
  42. Jiang, Z.H.; Dou, H.L.; Song, J.J.; Wang, P.X.; Yang, X.B.; Qian, Y.H. Data-guided multi-granularity selector for attribute reduction. Appl. Intell. 2021, 51, 876–888. [Google Scholar] [CrossRef]
  43. Xu, W.H.; Yuan, K.H.; Li, W.T. Dynamic updating approximations of local generalized multigranulation neighborhood rough set. Appl. Intell. 2022. [Google Scholar] [CrossRef]
  44. Ba, J.; Liu, K.Y.; Ju, H.R.; Xu, S.P.; Xu, T.H.; Yang, X.B. Triple-G: A new MGRS and attribute reduction. Int. J. Mach. Learn. Cybern. 2022, 13, 337–356. [Google Scholar] [CrossRef]
  45. Rao, X.S.; Yang, X.B.; Yang, X.; Chen, X.J.; Liu, D.; Qian, Y.H. Quickly calculating reduct: An attribute relationship based approach. Knowl. Based Syst. 2020, 200, 106041. [Google Scholar] [CrossRef]
  46. Borah, P.; Gupta, D. Functional iterative approaches for solving support vector classification problems based on generalized Huber loss. Neural Comput. Appl. 2020, 32, 9245–9265. [Google Scholar] [CrossRef]
  47. Borah, P.; Gupta, D. Unconstrained convex minimization based implicit Lagrangian twin extreme learning machine for classification (ULTELMC). Appl. Intell. 2020, 50, 1327–1344. [Google Scholar] [CrossRef]
  48. Adhikary, D.D.; Gupta, D. Applying over 100 classifiers for churn prediction in telecom companies. Multimed. Tools Appl. 2021, 80, 35123–35144. [Google Scholar] [CrossRef]
  49. Zhou, H.F.; Wang, X.Q.; Zhu, R.R. Feature selection based on mutual information with correlation coefficient. Appl. Intell. 2021. [Google Scholar] [CrossRef]
  50. Karakatič, S. EvoPreprocess—Data preprocessing framework with nature-inspired optimization algorithms. Mathematics 2020, 8, 900. [Google Scholar] [CrossRef]
  51. Karakatič, S.; Fister, I.; Fister, D. Dynamic genotype reduction for narrowing the feature selection search space. In Proceedings of the 2020 IEEE 20th International Symposium on Computational Intelligence and Informatics (CINTI), Budapest, Hungary, 5–7 November 2020; pp. 35–38. [Google Scholar]
  52. Yan, D.W.; Chi, G.T.; Lai, K.K. Financial distress prediction and feature selection in multiple periods by lassoing unconstrained distributed lag non-linear models. Mathematics 2020, 8, 1275. [Google Scholar] [CrossRef]
Figure 1. The framework of beam-influenced selector (BIS).
Table 1. Data sets description.
ID | Data Sets | Samples | Attributes | Decision Classes
1 | Breast Cancer Wisconsin (Diagnostic) | 569 | 30 | 2
2 | Breast Tissue | 106 | 9 | 6
3 | Congressional Voting Records | 435 | 16 | 2
4 | Forest Type Mapping | 523 | 27 | 4
5 | Ionosphere | 315 | 34 | 2
6 | LSVT Voice Rehabilitation | 126 | 256 | 2
7 | Lymphography | 98 | 18 | 3
8 | Madelon | 2600 | 500 | 2
9 | Musk (Version 1) | 476 | 166 | 2
10 | Parkinsons | 195 | 23 | 7
11 | Parkinson Speech Dataset with Multiple Types of Sound Recordings | 1208 | 26 | 2
12 | QSAR biodegradation | 1055 | 41 | 2
13 | Sonar | 208 | 60 | 2
14 | Statlog (Heart) | 270 | 13 | 2
15 | Synthetic Control Chart Time Series | 600 | 60 | 6
16 | Wine | 178 | 13 | 3
Table 2. The stabilities of reducts (raw data).
ID | AGAR | DAR | DGAR | ESAR | FBGSAR | BISAR
1 | 0.5116 | 0.6466 | 0.6446 | 0.5295 | 0.5830 | 0.7501
2 | 0.7708 | 0.8025 | 0.8336 | 0.9329 | 0.7882 | 0.9522
3 | 0.4259 | 0.6850 | 0.4091 | 0.6378 | 0.4116 | 0.6158
4 | 0.6150 | 0.7408 | 0.6306 | 0.7412 | 0.6316 | 0.8098
5 | 0.2807 | 0.3978 | 0.3669 | 0.4608 | 0.3340 | 0.6511
6 | 0.1103 | 0.2194 | 0.1133 | 0.1380 | 0.1133 | 0.6188
7 | 0.3868 | 0.6086 | 0.4620 | 0.5804 | 0.3790 | 0.7739
8 | 0.1938 | 0.5297 | 0.2558 | 0.1617 | 0.2863 | 0.2903
9 | 0.1539 | 0.3790 | 0.1835 | 0.1644 | 0.1720 | 0.4745
10 | 0.6410 | 0.7520 | 0.6732 | 0.7808 | 0.7015 | 0.9089
11 | 0.7343 | 0.8704 | 0.7937 | 0.7795 | 0.7897 | 0.8321
12 | 0.6175 | 0.8238 | 0.7584 | 0.7340 | 0.7504 | 0.8561
13 | 0.1176 | 0.2981 | 0.1360 | 0.1773 | 0.1360 | 0.4064
14 | 0.6271 | 0.7977 | 0.7885 | 0.7137 | 0.7279 | 0.8033
15 | 0.2442 | 0.4187 | 0.3343 | 0.5237 | 0.3503 | 0.3575
16 | 0.4448 | 0.6289 | 0.4940 | 0.5229 | 0.5821 | 0.6984
avg | 0.4297 | 0.5999 | 0.4924 | 0.5362 | 0.4835 | 0.6750
Table 3. The stabilities of reducts (label noise).
ID | AGAR | DAR | DGAR | ESAR | FBGSAR | BISAR
1 | 0.5848 | 0.7555 | 0.6281 | 0.6193 | 0.6262 | 0.8081
2 | 0.7636 | 0.8302 | 0.7845 | 0.9234 | 0.7816 | 0.9458
3 | 0.7114 | 0.7385 | 0.7082 | 0.7228 | 0.7190 | 0.8988
4 | 0.6567 | 0.7937 | 0.6703 | 0.7292 | 0.6828 | 0.8493
5 | 0.4669 | 0.5529 | 0.4509 | 0.5955 | 0.4665 | 0.8205
6 | 0.0600 | 0.1926 | 0.0650 | 0.0838 | 0.0792 | 0.5738
7 | 0.3840 | 0.5854 | 0.4474 | 0.4809 | 0.4648 | 0.8300
8 | 0.1577 | 0.4950 | 0.2119 | 0.1310 | 0.2023 | 0.2668
9 | 0.1066 | 0.3240 | 0.1345 | 0.1494 | 0.1372 | 0.5461
10 | 0.6341 | 0.7588 | 0.7009 | 0.8062 | 0.6779 | 0.9239
11 | 0.7193 | 0.8477 | 0.7675 | 0.7773 | 0.7747 | 0.8538
12 | 0.6279 | 0.7981 | 0.6976 | 0.7188 | 0.6961 | 0.8670
13 | 0.1342 | 0.3135 | 0.1434 | 0.1814 | 0.1585 | 0.4770
14 | 0.6935 | 0.8111 | 0.7675 | 0.7788 | 0.7881 | 0.8927
15 | 0.1679 | 0.3861 | 0.2522 | 0.3840 | 0.2597 | 0.3887
16 | 0.5738 | 0.6975 | 0.5854 | 0.6395 | 0.5768 | 0.8134
avg | 0.4652 | 0.6175 | 0.5010 | 0.5451 | 0.5057 | 0.7347
Table 4. The stabilities of reducts (feature noise).
ID | AGAR | DAR | DGAR | ESAR | FBGSAR | BISAR
1 | 0.4297 | 0.5841 | 0.5316 | 0.4826 | 0.4832 | 0.7231
2 | 0.6788 | 0.7523 | 0.7252 | 0.8747 | 0.6940 | 0.9172
3 | 0.3373 | 0.5795 | 0.3602 | 0.5081 | 0.3810 | 0.5808
4 | 0.5410 | 0.7313 | 0.5629 | 0.6856 | 0.5901 | 0.7971
5 | 0.2081 | 0.2960 | 0.2529 | 0.3771 | 0.2445 | 0.5329
6 | 0.0787 | 0.1938 | 0.0851 | 0.1070 | 0.1127 | 0.6256
7 | 0.3359 | 0.5344 | 0.3956 | 0.5449 | 0.3918 | 0.7752
8 | 0.0988 | 0.4505 | 0.1116 | 0.0572 | 0.1012 | 0.1652
9 | 0.0906 | 0.3071 | 0.1104 | 0.1255 | 0.1104 | 0.4379
10 | 0.5575 | 0.6977 | 0.6098 | 0.7375 | 0.5988 | 0.8958
11 | 0.6398 | 0.8258 | 0.7298 | 0.7222 | 0.7209 | 0.7797
12 | 0.5395 | 0.7564 | 0.6436 | 0.6642 | 0.6498 | 0.8194
13 | 0.1001 | 0.2304 | 0.1114 | 0.1165 | 0.1148 | 0.4163
14 | 0.5740 | 0.7204 | 0.6593 | 0.6568 | 0.6500 | 0.7690
15 | 0.1659 | 0.3550 | 0.2281 | 0.4777 | 0.2168 | 0.3288
16 | 0.3641 | 0.5585 | 0.3889 | 0.4736 | 0.4165 | 0.6743
avg | 0.3588 | 0.5358 | 0.4067 | 0.4757 | 0.4048 | 0.6399
Table 5. p-values for comparing stabilities (raw data).
ID | AGAR and BISAR | DAR and BISAR | DGAR and BISAR | ESAR and BISAR | FBGSAR and BISAR
1 1.7815 × 10 3 5.3081 ×   10 2 7.6417 ×   10 2 1.5479 ×   10 2 4.9864 ×   10 2
23.9832 ×   10 6 6.7574 ×   10 8 4.6827 ×   10 10 8.0126 ×   10 1 8.1705 ×   10 5
36.6815 ×   10 8 7.8760 ×   10 8 7.8321 ×   10 9 5.4804 ×   10 5 7.9626 ×   10 9
49.7384 ×   10 3 3.6480 ×   10 1 1.0530 ×   10 2 9.3183 ×   10 2 6.0143 ×   10 3
52.2178 ×   10 7 6.7765 ×   10 8 7.5774 ×   10 6 3.6372 ×   10 3 3.2931 ×   10 5
66.0148 ×   10 7 6.7765 ×   10 8 9.1266 ×   10 7 7.5774 ×   10 6 2.6898 ×   10 6
76.7478 ×   10 8 3.4070 ×   10 7 2.0047 ×   10 7 5.2376 ×   10 4 6.1004 ×   10 8
82.3413 ×   10 3 6.7765 ×   10 8 3.2348 ×   10 1 5.0907 ×   10 4 5.2499 ×   10 1
92.0616 ×   10 6 6.7765 ×   10 8 6.6737 ×   10 6 2.0616 ×   10 6 4.5401 ×   10 6
107.8552 ×   10 5 8.1003 ×   10 2 7.0110 ×   10 5 2.9521 ×   10 3 2.1571 ×   10 4
111.3303 ×   10 2 4.3165 ×   10 3 2.7454 ×   10 2 1.6648 ×   10 2 2.0711 ×   10 2
121.9943 ×   10 4 3.1036 ×   10 1 4.6986 ×   10 3 2.2273 ×   10 3 7.1081 ×   10 3
139.1266 ×   10 7 6.7765 ×   10 8 2.3557 ×   10 6 3.7051 ×   10 5 9.1266 ×   10 7
141.1590 ×   10 4 1.1982 ×   10 1 6.4561 ×   10 1 2.9223 ×   10 5 6.2163 ×   10 4
152.7986 ×   10 3 6.7765 ×   10 8 4.2488 ×   10 1 5.5605 ×   10 3 2.1841 ×   10 1
161.1582 ×   10 4 1.6236 ×   10 3 1.2265 ×   10 3 1.0373 ×   10 4 1.7824 ×   10 3
Table 6. p-values for comparing stabilities (label noise).
ID | AGAR and BISAR | DAR and BISAR | DGAR and BISAR | ESAR and BISAR | FBGSAR and BISAR
15.9907 ×   10 3 1.7500 ×   10 2 1.5377 ×   10 2 9.7693 ×   10 3 8.3396 ×   10 3
22.9400 ×   10 6 1.0400 ×   10 2 3.6085 ×   10 7 4.9537 ×   10 1 1.2440 ×   10 6
36.7669 ×   10 8 6.7600 ×   10 8 6.7669 ×   10 8 1.3731 ×   10 2 3.3273 ×   10 3
41.7612 ×   10 3 4.6500 ×   10 2 2.3147 ×   10 3 6.2200 ×   10 2 2.0407 ×   10 5
55.2507 ×   10 5 1.4400 ×   10 4 3.7020 ×   10 5 1.6571 ×   10 7 1.9177 ×   10 7
64.5390 ×   10 7 3.7500 ×   10 4 3.4156 ×   10 7 1.0646 ×   10 7 9.1728 ×   10 8
76.7956 ×   10 8 2.9400 ×   10 7 1.2346 ×   10 7 2.9249 ×   10 5 1.6669 ×   10 2
81.2941 ×   10 4 9.1300 ×   10 7 4.6792 ×   10 2 6.9166 ×   10 7 6.9166 ×   10 7
91.2346 ×   10 7 5.9000 ×   10 5 5.2269 ×   10 7 1.7933 ×   10 2 1.4359 ×   10 2
101.3304 ×   10 5 9.8700 ×   10 5 2.3045 ×   10 5 1.0113 ×   10 3 4.1420 ×   10 4
117.7061 ×   10 3 2.2900 ×   10 1 1.6659 ×   10 2 3.7020 ×   10 5 7.5699 ×   10 6
121.2922 ×   10 4 1.1400 ×   10 2 6.2126 ×   10 4 9.2428 ×   10 5 2.7360 ×   10 4
133.4958 ×   10 6 3.1500 ×   10 2 1.1045 ×   10 5 7.9720 ×   10 1 2.1393 ×   10 3
146.4965 ×   10 5 8.6200 ×   10 3 4.5414 ×   10 4 1.1829 ×   10 2 5.0924 ×   10 3
159.7480 ×   10 6 7.5600 ×   10 1 3.9662 ×   10 3 6.7193 ×   10 8 6.7193 ×   10 8
163.9425 ×   10 3 4.1000 ×   10 2 3.7796 ×   10 3 3.1556 ×   10 3 5.5013 ×   10 5
Table 7. p-values for comparing stabilities (feature noise).
ID | AGAR and BISAR | DAR and BISAR | DGAR and BISAR | ESAR and BISAR | FBGSAR and BISAR
15.0907 ×   10 4 6.7900 ×   10 2 2.2270 ×   10 2 2.2270 ×   10 2 3.1517 ×   10 2
22.5837 ×   10 4 3.3700 ×   10 2 1.4726 ×   10 3 8.7024 ×   10 1 7.2803 ×   10 5
33.4156 ×   10 7 9.0300 ×   10 1 1.2009 ×   10 6 2.3883 ×   10 2 4.3147 ×   10 3
43.6309 ×   10 3 1.0200 ×   10 2 5.1049 ×   10 3 4.3202 ×   10 3 1.1045 ×   10 5
55.2269 ×   10 7 7.4100 ×   10 5 8.5974 ×   10 6 2.9598 ×   10 7 3.6636 ×   10 7
66.7860 ×   10 8 5.8700 ×   10 6 6.7860 ×   10 8 1.2009 ×   10 6 6.7956 ×   10 8
76.7956 ×   10 8 1.6600 ×   10 7 6.7956 ×   10 8 1.8074 ×   10 5 1.2345 ×   10 2
83.6048 ×   10 2 6.8000 ×   10 8 8.1032 ×   10 2 1.3761 ×   10 6 9.1266 ×   10 7
96.7956 ×   10 8 3.3700 ×   10 2 6.7956 ×   10 8 1.3312 ×   10 2 1.5469 ×   10 2
101.2203 ×   10 5 1.5800 ×   10 4 4.5815 ×   10 5 7.7089 ×   10 3 3.0553 ×   10 3
117.7118 ×   10 3 2.8500 ×   10 1 3.1517 ×   10 2 1.1045 ×   10 5 9.7480 ×   10 6
123.3798 ×   10 4 6.0100 ×   10 2 4.3184 ×   10 3 3.0480 ×   10 4 1.6098 ×   10 4
131.2009 ×   10 6 2.5600 ×   10 3 3.0691 ×   10 6 3.6388 ×   10 3 1.7824 ×   10 3
142.9249 ×   10 5 1.4000 ×   10 2 1.9533 ×   10 3 1.4149 ×   10 5 9.1266 ×   10 7
157.5788 ×   10 4 5.7900 ×   10 1 1.7939 ×   10 2 1.0141 ×   10 3 2.9598 ×   10 7
161.8074 ×   10 5 3.6000 ×   10 2 2.9249 ×   10 5 6.7470 ×   10 3 1.2365 ×   10 5
Table 8. Classification accuracies based on the KNN classifier (raw data).
ID | AGAR | DAR | DGAR | ESAR | FBGSAR | BISAR
1 | 0.9747 | 0.9401 | 0.9765 | 0.9782 | 0.9746 | 0.9770
2 | 0.6291 | 0.6152 | 0.6273 | 0.6441 | 0.6600 | 0.6364
3 | 0.9045 | 0.9563 | 0.9172 | 0.8644 | 0.8782 | 0.9084
4 | 0.8577 | 0.8494 | 0.8571 | 0.8780 | 0.8744 | 0.8807
5 | 0.9097 | 0.8741 | 0.9141 | 0.8883 | 0.8827 | 0.9070
6 | 0.8212 | 0.8356 | 0.8132 | 0.8492 | 0.8132 | 0.8144
7 | 0.7580 | 0.7540 | 0.7700 | 0.6811 | 0.7047 | 0.7330
8 | 0.6015 | 0.6344 | 0.5994 | 0.5776 | 0.6078 | 0.5842
9 | 0.7702 | 0.7287 | 0.7651 | 0.7182 | 0.7333 | 0.8013
10 | 0.0867 | 0.0703 | 0.0892 | 0.0631 | 0.0751 | 0.0846
11 | 0.6548 | 0.6895 | 0.6557 | 0.6659 | 0.6683 | 0.6621
12 | 0.8439 | 0.8204 | 0.8366 | 0.8350 | 0.8370 | 0.8498
13 | 0.7400 | 0.6963 | 0.7351 | 0.7534 | 0.7351 | 0.8176
14 | 0.7972 | 0.7413 | 0.8070 | 0.7809 | 0.7917 | 0.7846
15 | 0.8443 | 0.7909 | 0.8079 | 0.6478 | 0.7794 | 0.7515
16 | 0.9194 | 0.9506 | 0.9306 | 0.9243 | 0.9426 | 0.8954
avg | 0.7571 | 0.7467 | 0.7564 | 0.7343 | 0.7474 | 0.7555
Table 9. Classification accuracies based on the KNN classifier (label noise).
ID | AGAR | DAR | DGAR | ESAR | FBGSAR | BISAR
1 | 0.9082 | 0.8768 | 0.9088 | 0.8987 | 0.9036 | 0.9069
2 | 0.4914 | 0.5786 | 0.4941 | 0.5805 | 0.5832 | 0.4823
3 | 0.8441 | 0.8613 | 0.8414 | 0.7934 | 0.7860 | 0.8536
4 | 0.8169 | 0.8043 | 0.8098 | 0.8259 | 0.8200 | 0.8249
5 | 0.8393 | 0.8076 | 0.8440 | 0.8307 | 0.8179 | 0.8400
6 | 0.6844 | 0.7168 | 0.6920 | 0.7228 | 0.6984 | 0.6848
7 | 0.6925 | 0.6965 | 0.7155 | 0.6863 | 0.6784 | 0.7120
8 | 0.5285 | 0.5684 | 0.5239 | 0.5211 | 0.5245 | 0.5212
9 | 0.6886 | 0.6751 | 0.6833 | 0.6696 | 0.6733 | 0.7192
10 | 0.0941 | 0.0764 | 0.0969 | 0.0903 | 0.0990 | 0.0887
11 | 0.6193 | 0.6383 | 0.6211 | 0.6266 | 0.6248 | 0.6243
12 | 0.7736 | 0.7525 | 0.7722 | 0.7610 | 0.7641 | 0.7719
13 | 0.6646 | 0.6251 | 0.6510 | 0.6663 | 0.6610 | 0.7090
14 | 0.7263 | 0.7020 | 0.7272 | 0.7172 | 0.7261 | 0.7215
15 | 0.7548 | 0.7147 | 0.7163 | 0.6141 | 0.6742 | 0.6539
16 | 0.8689 | 0.9161 | 0.8674 | 0.8560 | 0.8763 | 0.8360
avg | 0.6872 | 0.6882 | 0.6853 | 0.6788 | 0.6819 | 0.6844
Table 10. Classification accuracies based on the KNN classifier (feature noise).
ID | AGAR | DAR | DGAR | ESAR | FBGSAR | BISAR
1 | 0.9734 | 0.9360 | 0.9732 | 0.9742 | 0.9741 | 0.9733
2 | 0.5750 | 0.5943 | 0.5682 | 0.5732 | 0.5782 | 0.5677
3 | 0.9061 | 0.9586 | 0.9053 | 0.8811 | 0.8910 | 0.8991
4 | 0.8531 | 0.8441 | 0.8533 | 0.8740 | 0.8681 | 0.8667
5 | 0.9049 | 0.8601 | 0.9074 | 0.8810 | 0.8690 | 0.8996
6 | 0.8108 | 0.8236 | 0.8192 | 0.8292 | 0.8316 | 0.8040
7 | 0.7505 | 0.7365 | 0.7580 | 0.7026 | 0.7121 | 0.7250
8 | 0.6079 | 0.6246 | 0.6160 | 0.5663 | 0.6069 | 0.5781
9 | 0.7398 | 0.7117 | 0.7331 | 0.7018 | 0.7078 | 0.7682
10 | 0.0879 | 0.0759 | 0.0915 | 0.0787 | 0.0897 | 0.0856
11 | 0.6465 | 0.6786 | 0.6503 | 0.6638 | 0.6649 | 0.6514
12 | 0.8337 | 0.8084 | 0.8294 | 0.8264 | 0.8261 | 0.8334
13 | 0.7215 | 0.6805 | 0.7146 | 0.7217 | 0.7212 | 0.7873
14 | 0.7889 | 0.7450 | 0.7948 | 0.7802 | 0.7870 | 0.7774
15 | 0.8234 | 0.7723 | 0.7988 | 0.6032 | 0.7584 | 0.7286
16 | 0.9174 | 0.9464 | 0.9220 | 0.9163 | 0.9351 | 0.8886
avg | 0.7463 | 0.7373 | 0.7459 | 0.7234 | 0.7388 | 0.7396
Table 11. Classification accuracies based on SVM classifier (raw data).
ID | AGAR | DAR | DGAR | ESAR | FBGSAR | BISAR
1 | 0.9744 | 0.9419 | 0.9752 | 0.9854 | 0.9848 | 0.9779
2 | 0.5895 | 0.4952 | 0.5909 | 0.4495 | 0.4468 | 0.5909
3 | 0.9090 | 0.9540 | 0.9103 | 0.9057 | 0.9126 | 0.9080
4 | 0.8404 | 0.8491 | 0.8417 | 0.8805 | 0.8758 | 0.8555
5 | 0.8853 | 0.8441 | 0.8907 | 0.8720 | 0.8607 | 0.8886
6 | 0.8504 | 0.8720 | 0.8556 | 0.8696 | 0.8556 | 0.8148
7 | 0.8000 | 0.7410 | 0.8125 | 0.7189 | 0.7195 | 0.8595
8 | 0.5799 | 0.5606 | 0.5768 | 0.5724 | 0.5824 | 0.5775
9 | 0.7020 | 0.6687 | 0.7000 | 0.6044 | 0.6237 | 0.6635
10 | 0.0469 | 0.0718 | 0.0487 | 0.0941 | 0.0918 | 0.0269
11 | 0.6107 | 0.6527 | 0.6112 | 0.6689 | 0.6692 | 0.6124
12 | 0.8270 | 0.8092 | 0.8190 | 0.8176 | 0.8124 | 0.8265
13 | 0.6888 | 0.6680 | 0.7034 | 0.7112 | 0.7034 | 0.7456
14 | 0.8593 | 0.7691 | 0.8587 | 0.8085 | 0.8139 | 0.8407
15 | 0.8470 | 0.8138 | 0.8368 | 0.7283 | 0.8313 | 0.8183
16 | 0.9386 | 0.9489 | 0.9429 | 0.9203 | 0.9497 | 0.9203
avg | 0.7468 | 0.7288 | 0.7484 | 0.7255 | 0.7334 | 0.7454
Table 12. Classification accuracies based on SVM classifier (label noise).
ID | AGAR | DAR | DGAR | ESAR | FBGSAR | BISAR
1 | 0.9602 | 0.9197 | 0.9607 | 0.9707 | 0.9706 | 0.9557
2 | 0.4091 | 0.4657 | 0.4077 | 0.4041 | 0.4009 | 0.4059
3 | 0.8910 | 0.9102 | 0.8876 | 0.8885 | 0.8660 | 0.9049
4 | 0.8278 | 0.8459 | 0.8242 | 0.8648 | 0.8582 | 0.8354
5 | 0.8736 | 0.8240 | 0.8676 | 0.8434 | 0.8261 | 0.8834
6 | 0.7476 | 0.8116 | 0.7412 | 0.7780 | 0.7396 | 0.7200
7 | 0.7550 | 0.7210 | 0.7645 | 0.6974 | 0.7142 | 0.7860
8 | 0.5557 | 0.5519 | 0.5512 | 0.5449 | 0.5539 | 0.5461
9 | 0.6598 | 0.6595 | 0.6619 | 0.5837 | 0.5819 | 0.6415
10 | 0.0815 | 0.1154 | 0.0913 | 0.1179 | 0.1174 | 0.0756
11 | 0.6042 | 0.6492 | 0.6034 | 0.6623 | 0.6628 | 0.6045
12 | 0.7889 | 0.7873 | 0.7863 | 0.7870 | 0.7929 | 0.7866
13 | 0.6300 | 0.6073 | 0.6293 | 0.6310 | 0.6432 | 0.6488
14 | 0.8070 | 0.7570 | 0.8069 | 0.7670 | 0.7707 | 0.7967
15 | 0.8031 | 0.7631 | 0.7888 | 0.6982 | 0.7627 | 0.7446
16 | 0.9280 | 0.9481 | 0.9309 | 0.9034 | 0.9211 | 0.9014
avg | 0.7077 | 0.7086 | 0.7065 | 0.6964 | 0.6989 | 0.7023
Table 13. Classification accuracies based on SVM classifier (feature noise).
ID | AGAR | DAR | DGAR | ESAR | FBGSAR | BISAR
1 | 0.7390 | 0.6640 | 0.7188 | 0.7956 | 0.7902 | 0.7738
2 | 0.3377 | 0.4005 | 0.3441 | 0.3741 | 0.3427 | 0.3882
3 | 0.6971 | 0.6214 | 0.6953 | 0.5797 | 0.5637 | 0.7218
4 | 0.5337 | 0.5800 | 0.5284 | 0.6423 | 0.6084 | 0.6119
5 | 0.7654 | 0.6521 | 0.7650 | 0.6723 | 0.6517 | 0.7861
6 | 0.6944 | 0.8016 | 0.6960 | 0.7080 | 0.6976 | 0.7720
7 | 0.6310 | 0.5945 | 0.6045 | 0.6368 | 0.6163 | 0.7425
8 | 0.5004 | 0.4803 | 0.5024 | 0.4882 | 0.4895 | 0.4994
9 | 0.5895 | 0.5894 | 0.5895 | 0.4842 | 0.4842 | 0.5894
10 | 0.0710 | 0.1023 | 0.0754 | 0.1179 | 0.1208 | 0.0600
11 | 0.5909 | 0.5975 | 0.5909 | 0.5892 | 0.5892 | 0.5909
12 | 0.6777 | 0.6682 | 0.6775 | 0.6303 | 0.6303 | 0.6776
13 | 0.6020 | 0.5910 | 0.5890 | 0.5795 | 0.5766 | 0.5971
14 | 0.6411 | 0.6663 | 0.6424 | 0.5365 | 0.5511 | 0.6296
15 | 0.4345 | 0.4601 | 0.4027 | 0.3283 | 0.3836 | 0.3737
16 | 0.5657 | 0.5703 | 0.5483 | 0.5429 | 0.5180 | 0.5697
avg | 0.5670 | 0.5650 | 0.5606 | 0.5441 | 0.5384 | 0.5865
Table 14. p-values for comparing classification accuracies (raw data).
ID | Classifier | AGAR and BISAR | DAR and BISAR | DGAR and BISAR | ESAR and BISAR | FBGSAR and BISAR
1KNN2.0644 ×   10 1 5.2573 ×   10 8 2.1889 ×   10 1 9.8911 ×   10 1 1.6004 ×   10 1
SVM2.5723 ×   10 1 6.0814 ×   10 8 3.9437 ×   10 1 1.2039 ×   10 3 1.0070 ×   10 4
2KNN3.7135 ×   10 7 4.8863 ×   10 5 4.6827 ×   10 10 4.6921 ×   10 2 3.1837 ×   10 3
SVM1.5427 ×   10 9 5.2465 ×   10 9 4.6827 ×   10 10 6.6519 ×   10 9 7.2475 ×   10 9
3KNN3.6985 ×   10 2 6.8796 ×   10 9 6.8796 ×   10 9 6.8796 ×   10 9 6.8796 ×   10 9
SVM3.3772 ×   10 2 4.6827 ×   10 10 4.6827 ×   10 10 4.6827 ×   10 10 4.6827 ×   10 10
4KNN2.8056 ×   10 5 1.1806 ×   10 6 1.0481 ×   10 5 3.9018 ×   10 1 3.2251 ×   10 2
SVM2.0648 ×   10 3 3.5596 ×   10 1 8.5592 ×   10 4 3.9834 ×   10 4 9.0919 ×   10 4
5KNN5.2376 ×   10 1 1.3910 ×   10 7 6.3108 ×   10 2 6.6904 ×   10 5 1.9399 ×   10 6
SVM6.1545 ×   10 1 2.8044 ×   10 4 6.9196 ×   10 2 1.8908 ×   10 3 4.9346 ×   10 4
6KNN6.1605 ×   10 1 3.4279 ×   10 1 8.3896 ×   10 1 9.5712 ×   10 2 8.3896 ×   10 1
SVM1.7159 ×   10 1 3.1024 ×   10 2 1.4745 ×   10 1 1.0865 ×   10 2 1.4745 ×   10 1
7KNN1.9103 ×   10 2 1.0344 ×   10 1 4.7130 ×   10 7 6.2748 ×   10 4 3.1920 ×   10 4
SVM2.0188 ×   10 4 6.9204 ×   10 8 6.6483 ×   10 4 4.4311 ×   10 8 1.2975 ×   10 7
8KNN2.7326 ×   10 1 1.7193 ×   10 1 2.1841 ×   10 1 5.4271 ×   10 1 9.0892 ×   10 2
SVM2.5021 ×   10 1 1.3308 ×   10 2 3.2344 ×   10 1 7.2508 ×   10 1 1.2307 ×   10 1
9KNN1.4860 ×   10 2 1.0259 ×   10 5 5.0986 ×   10 3 1.1885 ×   10 6 1.0278 ×   10 5
SVM8.0246 ×   10 3 7.7617 ×   10 1 2.2257 ×   10 2 1.2221 ×   10 4 3.6325 ×   10 3
10KNN9.8914 ×   10 1 7.1531 ×   10 4 4.8874 ×   10 1 2.8783 ×   10 4 1.2076 ×   10 2
SVM1.5778 ×   10 7 1.8814 ×   10 8 1.2125 ×   10 7 1.8659 ×   10 8 1.8443 ×   10 8
11KNN8.6173 ×   10 4 1.3389 ×   10 6 7.9774 ×   10 3 9.5978 ×   10 2 4.6637 ×   10 2
SVM4.5248 ×   10 1 7.6986 ×   10 7 6.4350 ×   10 1 5.6611 ×   10 8 5.5477 ×   10 8
12KNN2.9258 ×   10 2 3.8833 ×   10 4 8.9614 ×   10 3 5.2758 ×   10 3 8.6188 ×   10 3
SVM8.6024 ×   10 1 7.4837 ×   10 4 9.8615 ×   10 2 9.1377 ×   10 4 6.8006 ×   10 4
13KNN1.7761 ×   10 4 7.8634 ×   10 7 1.0860 ×   10 4 1.7656 ×   10 3 1.0860 ×   10 4
SVM8.6270 ×   10 4 1.3444 ×   10 4 5.9977 ×   10 3 3.5839 ×   10 2 5.9977 ×   10 3
14KNN2.7905 ×   10 1 2.8985 ×   10 3 7.6348 ×   10 2 6.9469 ×   10 1 2.4437 ×   10 1
SVM4.8118 ×   10 2 2.7764 ×   10 6 1.3714 ×   10 2 1.4023 ×   10 1 2.5560 ×   10 1
15KNN6.2426 ×   10 6 8.1017 ×   10 2 3.2047 ×   10 4 2.6753 ×   10 6 5.6459 ×   10 2
SVM7.6417 ×   10 2 6.0715 ×   10 1 2.7909 ×   10 1 1.7788 ×   10 3 3.3682 ×   10 1
16KNN1.3842 ×   10 3 2.5667 ×   10 5 1.5250 ×   10 5 2.6838 ×   10 4 3.2508 ×   10 7
SVM2.3925 ×   10 2 9.1071 ×   10 4 5.0916 ×   10 3 9.3508 ×   10 1 1.7683 ×   10 4
Table 15. p-values for comparing classification accuracies (label noise).
ID | Classifier | AGAR and BISAR | DAR and BISAR | DGAR and BISAR | ESAR and BISAR | FBGSAR and BISAR
1KNN9.6758 ×   10 1 2.5577 ×   10 5 7.6583 ×   10 1 3.8397 ×   10 2 2.8503 ×   10 1
SVM3.4925 ×   10 1 1.0400 ×   10 6 3.2929 ×   10 1 1.5927 ×   10 4 1.0273 ×   10 4
2KNN6.8455 ×   10 1 4.3762 ×   10 2 7.0463 ×   10 1 4.0877 ×   10 2 2.2126 ×   10 2
SVM8.9230 ×   10 1 2.2852 ×   10 1 9.5680 ×   10 1 8.8155 ×   10 1 7.0451 ×   10 1
3KNN3.9625 ×   10 2 2.9725 ×   10 1 2.9281 ×   10 2 2.0463 ×   10 7 6.7193 ×   10 8
SVM3.8144 ×   10 4 1.5830 ×   10 1 8.4363 ×   10 6 1.4276 ×   10 2 8.8909 ×   10 7
4KNN2.5016 ×   10 1 1.0159 ×   10 2 3.7157 ×   10 2 4.7338 ×   10 1 6.1668 ×   10 1
SVM2.7268 ×   10 1 2.2176 ×   10 2 8.0775 ×   10 2 1.2155 ×   10 3 4.6850 ×   10 3
5KNN9.7838 ×   10 1 8.1514 ×   10 4 8.2844 ×   10 1 2.1257 ×   10 1 9.3528 ×   10 3
SVM2.7204 ×   10 1 1.6557 ×   10 6 1.5516 ×   10 1 8.1531 ×   10 5 8.3383 ×   10 7
6KNN9.8919 ×   10 1 2.7375 ×   10 2 6.4524 ×   10 1 1.9909 ×   10 2 3.9377 ×   10 1
SVM9.0453 ×   10 2 1.0315 ×   10 7 1.2586 ×   10 1 3.1860 ×   10 4 1.2260 ×   10 1
7KNN9.2983 ×   10 2 3.5002 ×   10 1 8.4958 ×   10 1 4.6513 ×   10 2 5.6343 ×   10 2
SVM7.3803 ×   10 3 1.8509 ×   10 5 2.3371 ×   10 2 1.3708 ×   10 5 4.0766 ×   10 5
8KNN3.5059 ×   10 1 5.4754 ×   10 2 1.0000E+009.0308 ×   10 1 8.3920 ×   10 1
SVM9.0892 ×   10 2 2.7923 ×   10 1 3.9408 ×   10 1 8.3923 ×   10 1 1.8054 ×   10 1
9KNN3.0484 ×   10 3 2.5975 ×   10 4 9.1734 ×   10 4 7.3834 ×   10 5 1.4397 ×   10 4
SVM8.8203 ×   10 2 7.6362 ×   10 2 9.0786 ×   10 2 5.2252 ×   10 5 1.0909 ×   10 4
10KNN4.6370 ×   10 1 5.2357 ×   10 2 2.7107 ×   10 1 9.1346 ×   10 1 1.2847 ×   10 1
SVM3.8501 ×   10 1 4.6004 ×   10 5 5.1025 ×   10 2 8.0404 ×   10 5 4.3220 ×   10 5
11KNN1.2970 ×   10 1 6.2163 ×   10 4 4.6488 ×   10 1 5.4273 ×   10 1 9.8921 ×   10 1
SVM8.4962 ×   10 1 3.3218 ×   10 7 8.2833 ×   10 1 6.6344 ×   10 8 6.6344 ×   10 8
12KNN5.9769 ×   10 1 5.3213 ×   10 4 8.6035 ×   10 1 1.5152 ×   10 1 1.0743 ×   10 1
SVM4.2473 ×   10 1 4.8179 ×   10 1 7.6597 ×   10 1 4.9023 ×   10 1 3.5743 ×   10 1
13KNN1.0142 ×   10 2 1.3953 ×   10 5 1.7673 ×   10 3 6.2598 ×   10 3 4.6850 ×   10 3
SVM2.9740 ×   10 1 3.7131 ×   10 2 1.7173 ×   10 1 3.0999 ×   10 1 6.3558 ×   10 1
14KNN5.7874 ×   10 1 2.3835 ×   10 1 2.3886 ×   10 1 8.6031 ×   10 1 1.9816 ×   10 1
SVM1.5925 ×   10 1 1.1591 ×   10 3 2.6119 ×   10 1 4.6492 ×   10 1 5.2485 ×   10 1
15KNN2.0334 ×   10 5 1.3479 ×   10 3 1.2893 ×   10 4 1.6770 ×   10 1 2.5585 ×   10 1
SVM1.1865 ×   10 2 3.7202 ×   10 1 5.1451 ×   10 2 5.3378 ×   10 1 3.1685 ×   10 1
16KNN1.0943 ×   10 2 5.1395 ×   10 6 1.2186 ×   10 2 2.2854 ×   10 1 6.2524 ×   10 3
SVM9.5507 ×   10 3 4.5217 ×   10 4 1.5728 ×   10 2 7.1467 ×   10 1 8.2662 ×   10 2
Table 16. p-values for comparing classification accuracies (feature noise).
ID | Classifier | AGAR and BISAR | DAR and BISAR | DGAR and BISAR | ESAR and BISAR | FBGSAR and BISAR
1KNN7.0398 ×   10 1 6.5411 ×   10 8 9.1365 ×   10 1 5.0717 ×   10 1 4.8974 ×   10 1
SVM1.5925 ×   10 1 4.6673 ×   10 4 1.2498 ×   10 1 3.0531 ×   10 1 4.6977 ×   10 1
2KNN4.5642 ×   10 1 1.5516 ×   10 1 8.9223 ×   10 1 8.4943 ×   10 1 5.7795 ×   10 1
SVM1.7171 ×   10 1 7.7621 ×   10 1 2.1819 ×   10 1 8.4971 ×   10 1 4.1681 ×   10 1
3KNN3.5397 ×   10 3 6.3219 ×   10 8 1.1612 ×   10 2 5.6754 ×   10 6 6.3220 ×   10 2
SVM6.8154 ×   10 2 8.2261 ×   10 7 4.9579 ×   10 2 1.0394 ×   10 7 2.3960 ×   10 8
4KNN1.3036 ×   10 2 1.7477 ×   10 5 1.0307 ×   10 3 1.0418 ×   10 2 3.6405 ×   10 1
SVM9.0967 ×   10 2 4.4830 ×   10 1 9.0967 ×   10 2 3.9382 ×   10 1 9.4603 ×   10 1
5KNN1.6233 ×   10 1 7.0537 ×   10 8 3.3053 ×   10 2 2.0454 ×   10 5 7.6509 ×   10 7
SVM7.3434 ×   10 2 3.5158 ×   10 8 3.7827 ×   10 2 9.5953 ×   10 7 3.0538 ×   10 8
6KNN7.0443 ×   10 1 3.0340 ×   10 1 3.4998 ×   10 1 1.9289 ×   10 1 1.9333 ×   10 1
SVM3.4680 ×   10 3 5.4426 ×   10 1 6.1335 ×   10 3 2.3396 ×   10 2 5.4144 ×   10 3
7KNN2.4799 ×   10 2 3.0220 ×   10 1 5.5464 ×   10 3 6.7339 ×   10 2 2.2780 ×   10 1
SVM2.8125 ×   10 6 2.4634 ×   10 6 8.9355 ×   10 7 2.0081 ×   10 5 5.8136 ×   10 7
8KNN7.6403 ×   10 2 3.7933 ×   10 1 1.4364 ×   10 2 7.6390 ×   10 2 2.6533 ×   10 2
SVM4.6566 ×   10 1 4.7933 ×   10 8 1.0859 ×   10 1 1.1641 ×   10 6 1.0113 ×   10 6
9KNN1.3807 ×   10 2 4.8136 ×   10 6 4.8708 ×   10 3 2.3429 ×   10 6 3.9666 ×   10 6
SVM1.0000 ×   10 0 6.1469 ×   10 1 1.0000 ×   10 0 1.1052 ×   10 9 1.1052 ×   10 9
10KNN6.7298 ×   10 1 1.1530 ×   10 1 3.7819 ×   10 1 3.2856 ×   10 1 7.1431 ×   10 1
SVM7.0818 ×   10 2 2.3496 ×   10 7 2.9515 ×   10 3 2.2516 ×   10 7 1.8580 ×   10 7
11KNN1.5929 ×   10 1 5.1822 ×   10 7 5.9774 ×   10 1 6.5521 ×   10 3 1.9495 ×   10 3
SVM1.0344 ×   10 1 1.4745 ×   10 1 2.1841 ×   10 1 2.8500 ×   10 1 2.1841 ×   10 1
12KNN7.8647 ×   10 1 3.3646 ×   10 4 4.2449 ×   10 1 2.9079 ×   10 1 1.6734 ×   10 1
SVM7.7617 ×   10 1 5.2499 ×   10 1 3.2348 ×   10 1 6.7298 ×   10 1 4.2488 ×   10 1
13KNN8.6853 ×   10 5 8.8203 ×   10 7 2.4109 ×   10 5 6.8995 ×   10 5 1.8640 ×   10 4
SVM5.5521 ×   10 1 8.8090 ×   10 1 8.5793 ×   10 1 2.6073 ×   10 1 2.1336 ×   10 1
14KNN1.3292 ×   10 1 2.6390 ×   10 3 4.5064 ×   10 2 4.4819 ×   10 1 1.9365 ×   10 1
SVM7.4115 ×   10 1 1.6209 ×   10 1 6.2268 ×   10 1 6.5337 ×   10 3 9.1050 ×   10 3
15KNN7.5328 ×   10 6 3.2544 ×   10 2 5.5354 ×   10 5 8.0248 ×   10 6 2.9395 ×   10 2
SVM4.2488 ×   10 1 1.0751 ×   10 1 5.4275 ×   10 1 2.5027 ×   10 1 8.0763 ×   10 1
16KNN9.9096 ×   10 4 7.4222 ×   10 6 2.6588 ×   10 4 4.5521 ×   10 3 1.1188 ×   10 5
SVM9.6750 ×   10 1 7.5551 ×   10 1 9.6755 ×   10 1 8.8125 ×   10 1 5.5859 ×   10 1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
