Article

Ensemble and Quick Strategy for Searching Reduct: A Hybrid Mechanism

1 School of Computer, Jiangsu University of Science and Technology, Zhenjiang 212100, China
2 Intelligent Information Processing Key Laboratory of Shanxi Province, Shanxi University, Taiyuan 030006, China
3 Key Laboratory of Oceanographic Big Data Mining & Application of Zhejiang Province, Zhejiang Ocean University, Zhoushan 316022, China
* Author to whom correspondence should be addressed.
Information 2021, 12(1), 25; https://doi.org/10.3390/info12010025
Submission received: 10 December 2020 / Revised: 3 January 2021 / Accepted: 7 January 2021 / Published: 10 January 2021

Abstract

Attribute reduction is commonly regarded as a key topic in rough set research. Concerning the strategies for searching a reduct, though various heuristics based forward greedy searchings have been developed, most of them were designed to pursue one and only one characteristic that is closely related to the performance of the reduct. Nevertheless, a justifiable searching is frequently expected to involve three main characteristics: (1) low time consumption in obtaining the reduct; (2) high stability of the obtained reduct; (3) competent classification ability of the obtained reduct. To fill this gap, a hybrid searching mechanism is designed, which takes all of the above characteristics into account. Such a mechanism not only adopts multiple fitness functions to evaluate the candidate attributes, but also queries the distance between attributes to determine whether two or more attributes can be added into the reduct simultaneously. The former may be useful in deriving a reduct with higher stability and competent classification ability, and the latter may contribute to lower time consumption in deriving the reduct. By comparing with 5 state-of-the-art algorithms for searching reducts, the experimental results over 20 UCI data sets demonstrate the effectiveness of our new mechanism. This study suggests a new trend of attribute reduction for achieving a balance among various characteristics.

1. Introduction

Attribute reduction [1,2], as a filter-based feature selection technique that emerged from rough set theory [3,4,5], plays a crucial role in the field of data dimension reduction. Generally speaking, given a constraint, the purpose of attribute reduction is to obtain an appropriate attribute subset through some specific searching strategy.
In general, if the form of the attribute reduction is fully defined, then how to derive such a qualified reduct is the key problem. Up to now, exhaustive and heuristic searchings have been the two frequently used strategies. Though the optimal reduct can be obtained through exhaustive searching, the time consumption is frequently too high to be accepted, because exhaustion is designed for finding all reducts. For this reason, heuristic searching [6,7] has attracted much attention for its low complexity.
As a poster child of heuristic searching, forward greedy searching [8] is effective. However, some limitations can also be observed in it. On the one hand, the elapsed time of obtaining the reduct may grow quickly with a dramatically increasing volume of data [9]. For instance, when facing high-dimensional data [10,11], one and only one attribute is selected in each iteration of the forward greedy searching, and consequently the number of iterations may be large. On the other hand, in most forward greedy searchings, each candidate attribute is evaluated with one and only one fitness function, i.e., a single measure with respect to the form of attribute reduction is calculated for each candidate attribute. Obviously, such a device is only a single-view [12,13] based evaluation, and it may therefore fail to meet the stability requirement of selecting attributes.
Considering the above discussion and the learning task, it is not difficult to see that a reasonable algorithm for deriving a reduct should be equipped with the following important characteristics.
(1) Low time consumption of deriving the reduct. This is the first perspective that should be considered in designing an algorithm, especially when large-scale and high-dimensional data appear.
(2) High stability of the derived reduct. A reduct with low stability is sensitive to data perturbation, and it may then be unsuitable for further data processing.
(3) Competent classification ability of the derived reduct. Attribute reduction can be regarded as an important step of data pre-processing, and the obtained reduct is therefore expected to offer competent performance when a classification task is explored.
Presently, to the best of our knowledge, most of the previous approaches for searching reducts mainly focus on one and only one of the above characteristics. For example, Chen et al. [14] have proposed an attribute group approach for calculating a reduct based on the consideration of the relationships among attributes. Such an approach consists of two main phases: (1) the raw attributes are divided into different groups; (2) in the process of searching the reduct, only the attributes in the groups that contain at least one attribute of the potential reduct should be evaluated. From this point of view, such a process can reduce the number of evaluations of candidate attributes; it follows that the elapsed time of deriving the reduct may be decreased. Though Chen et al.'s attribute group has achieved success in lowering the time consumption of deriving a reduct, it may not be suitable for generating a reduct with high stability. This is mainly because: (1) in such an approach, each candidate attribute is still evaluated with one and only one fitness function [13,15], which ignores the distribution of the samples; (2) the groups of attributes strongly depend on the process of K-means, which introduces some randomness into the adding of appropriate attributes into the potential reduct.
To overcome the above limitations, a new hybrid mechanism will be developed in this paper, in which multiple characteristics are considered simultaneously. Firstly, to obtain a reduct with high stability, the ensemble selector [13,16] is introduced into our approach, by which each attribute can be fully evaluated with respect to multiple fitness functions. Secondly, it is worth noting that the usage of the ensemble selector implies higher time consumption. Therefore, the dissimilarity relationships among attributes, obtained by using the distance between attributes, are further employed, by which multiple different attributes can be selected and added into the potential reduct in each iteration of deriving the reduct. This is the key to effectively reducing the time consumption. In addition, following the research reported in Refs. [13,17], it can be observed that the reducts obtained by both the ensemble selector and the dissimilarity approach are frequently equipped with competent generalization performance. For this reason, our hybrid mechanism is also expected to offer justifiable classification ability. The specific details of our mechanism are shown in Figure 1; its main steps are as follows.
(1) Each candidate attribute will be evaluated from different perspectives by using multiple fitness functions;
(2) an appropriate attribute can be obtained by adopting the mechanism of the ensemble selector based on the results of the attribute evaluations;
(3) one or more attributes, which bear a striking dissimilarity to the attribute obtained in (2), will also be selected;
(4) more than one attribute can be added into the potential reduct simultaneously.
The main contributions of this research can be summarized in the following aspects: (1) observing that most of the state-of-the-art approaches are designed to pursue one and only one characteristic closely related to the performance of the reduct, a hybrid searching mechanism is proposed to make a trade-off between the stability of the derived reduct and the elapsed time of searching the reduct; (2) though the neighborhood rough set based reduct is used in the context of this paper, it is worth pointing out that our hybrid searching mechanism is independent of the rough set model and can therefore be employed in any other form of attribute reduction.
The remainder of this paper is organized as follows. In Section 2, we review the basic notions related to attribute reduction and the measures used in this paper. The new hybrid mechanism for searching a reduct is presented in Section 3. Comparative experimental results and the corresponding analyses are shown in Section 4. The paper ends with conclusions and future perspectives in Section 5.

2. Preliminaries

2.1. Attribute Reduction

Presently, a variety of definitions of attribute reduction [18,19] have been proposed with respect to different requirements [20,21]. By extracting the commonness of those definitions, Yao et al. [22] have proposed a general form, which is shown in the following Definition 1.
Definition 1.
Given a decision system DS = ⟨U, AT, D⟩, U is a nonempty finite set of samples, AT is a nonempty finite set of raw attributes and D is a decision attribute. Supposing that the ρ-constraint is a constraint based on a considered measure ρ such that ρ: P(AT) → ℝ (P(AT) is the power set of AT, ℝ is the set of all real numbers), then A ⊆ AT is referred to as a reduct if and only if
(1) A satisfies the ρ-constraint;
(2) ∀ B ⊂ A, B does not satisfy the ρ-constraint.
Following Definition 1, the open problem is how to obtain a qualified reduct. As one of the most widely used heuristic algorithms, forward greedy searching [8,23] has been favored by many researchers. The details of such a strategy are shown in the following Algorithm 1.
In Algorithm 1, the fitness value can be obtained by a fitness function ϕ: AT → ℝ, so that the importance of each attribute can be quantitatively characterized. It must be noticed that the form of the fitness function is closely related to the measure ρ used in the given constraint. For example, if the constraint is required to preserve the measure of approximation quality, then ϕ(a) can be regarded as the variation of the approximation quality [24] when a is added into the pool set.
It is not difficult to see that the process of Algorithm 1 contains two main phases: the first phase adds qualified attributes into the potential reduct, and the second phase removes redundant attributes from it. Obviously, this process fits the two requirements shown in Definition 1. The time complexity of Algorithm 1 is O(|U|² × |AT|²), where |U| and |AT| denote the numbers of samples and raw attributes, respectively.
Algorithm 1. Forward Greedy Searching (FGS).
Input: Decision system DS, ρ-constraint and fitness function ϕ.
Output: One reduct A.
Step 1. Calculate the measure-value ρ(AT) over the raw attribute set AT;
Step 2. A = ∅;
Step 3. Do
               (1) Evaluate each candidate attribute a ∈ AT − A by calculating ϕ(a);
               (2) Select a qualified attribute b ∈ AT − A with the justifiable evaluation;
               (3) A = A ∪ {b};
               (4) Calculate ρ(A);
            Until the ρ-constraint is satisfied;
            // Adding the qualified attributes into the potential reduct
Step 4. Do
               (1) ∀ a ∈ A, calculate ρ(A − {a});
               (2) If the ρ-constraint is satisfied
                       A = A − {a};
                  End
            Until A does not change or |A| = 1;
            // Removing redundant attributes from the potential reduct
Step 5. Return A.
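To make the above procedure concrete, the following Python sketch mirrors the two phases of Algorithm 1. It is an illustrative sketch only (the paper's experiments were implemented in Matlab): rho, phi and threshold are placeholders of our own for the measure ρ, the fitness function ϕ and the ρ-constraint, and must be supplied by the caller.

```python
def forward_greedy_search(AT, rho, phi, threshold):
    """Sketch of FGS (Algorithm 1). AT is a list of attribute indices;
    rho(A) measures a subset (e.g., approximation quality); phi(a, A)
    scores candidate a given the current subset A; the rho-constraint
    is modeled here as rho(A) >= threshold."""
    A = []
    # Phase 1: add qualified attributes until the rho-constraint is satisfied.
    while rho(A) < threshold:
        candidates = [a for a in AT if a not in A]
        A.append(max(candidates, key=lambda a: phi(a, A)))
    # Phase 2: remove redundant attributes from the potential reduct.
    changed = True
    while changed and len(A) > 1:
        changed = False
        for a in list(A):
            rest = [b for b in A if b != a]
            if rho(rest) >= threshold:   # a is redundant, drop it
                A, changed = rest, True
    return A
```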

2.2. Stability Measure

Generally speaking, the stability of a reduct can be regarded as the sensitivity of the attribute preferences of an algorithm to differences in training sets drawn from the same generating distribution. Therefore, the stability of a reduct can be quantified as the degree to which reducts change when sample disturbance happens.
To quantitatively characterize the concept of stability, a series of measures [25,26,27,28] have been proposed. Furthermore, to make the comparisons among different measures more reasonable, Nogueira et al. [28] suggested five desirable properties that a stability measure should possess: (1) fully defined; (2) strict monotonicity; (3) bounds; (4) maximum stability; (5) correction for chance. With a critical review of the previous stability measures, it is not difficult to observe that only the measures designed by Akashata et al. [27] and Nogueira et al. [28] fully possess the above five properties. These two measures are shown in the following Definitions 2 and 3.
Definition 2.
Given a set of reducts Z = {A₁, A₂, …, A_M}, supposing that AT is the raw attribute set, the stability measure proposed by Akashata with respect to Z is defined as:

$$\hat{\Phi}(Z) = \frac{1}{M(M+1)} \sum_{i=1}^{M} \sum_{j=1, j \neq i}^{M} \psi(A_i, A_j),$$

$$\psi(A_i, A_j) = \alpha \cdot \frac{I_{ij} - \frac{|A_i|^2}{|AT|}}{|A_i| - \frac{|A_i|^2}{|AT|}} + \beta \cdot \frac{I_{ij} - E_{ij}}{M_{ij} - \mu_{ij}},$$

where I_{ij}, E_{ij}, μ_{ij} and M_{ij} represent the intersection, expected intersection, minimum intersection and maximum intersection of attributes with respect to A_i and A_j, respectively. If |A_i| = |A_j|, then α = 1 and β = 0; otherwise, α = 0 and β = 1.
Definition 3.
Given a set of reducts Z = {A₁, A₂, …, A_M}, supposing that AT is the raw attribute set and k̄ is the mean value of the numbers of attributes in the reducts, the stability measure proposed by Nogueira with respect to Z is defined as:

$$\hat{\Phi}(Z) = 1 - \frac{\frac{1}{|AT|} \sum_{a \in AT} s_a^2}{\frac{\bar{k}}{|AT|}\left(1 - \frac{\bar{k}}{|AT|}\right)},$$

where s_a² is the unbiased sample variance of the selection of attribute a over Z.
Following the above discussion, Akashata's measure shown in Definition 2 is based on the similarity over reducts, while Nogueira's measure shown in Definition 3 is based on the frequency over attributes. It is not difficult to see that both of them take advantage of the differences among reducts to obtain a quantified value. The former pays much attention to the overall differences between two different reducts, while the latter focuses on the difference of each attribute among multiple reducts.
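As an illustration of Definition 3, the following Python sketch computes Nogueira's measure from a binary selection matrix; the function name and arguments are our own choices, and the sketch assumes 0 < k̄ < |AT| so that the denominator is nonzero.

```python
import numpy as np

def nogueira_stability(Z, n_attributes):
    """Nogueira's stability (Definition 3) for a list of reducts Z,
    each given as a collection of attribute indices over |AT| = n_attributes."""
    M = len(Z)
    X = np.zeros((M, n_attributes))      # X[i, a] = 1 iff attribute a is in reduct A_i
    for i, A in enumerate(Z):
        X[i, list(A)] = 1
    k_bar = X.sum(axis=1).mean()         # mean reduct size
    s2 = X.var(axis=0, ddof=1)           # unbiased sample variance of each attribute
    p = k_bar / n_attributes
    return 1 - s2.mean() / (p * (1 - p)) # assumes 0 < k_bar < n_attributes
```

For example, Z = [{0, 1}, {0, 2}, {0, 1}] over 4 attributes yields a value below 1, reflecting the disagreement of the three reducts on attributes 1 and 2.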

3. A New Hybrid Mechanism for Attribute Reduction

3.1. Dissimilarity for Attribute Reduction

From FGS, we can observe that one and only one appropriate attribute is selected and added into the potential reduct in each iteration of evaluating candidate attributes. Therefore, if the number of attributes is large, then the elapsed time of deriving the reduct may still be unacceptable. For this reason, a strategy of searching the reduct by considering the dissimilarity between attributes has been proposed by Rao et al. [17], which can add more than one attribute into the potential reduct in each iteration of evaluating candidate attributes. The details are shown in the following Algorithm 2.
Algorithm 2. Dissimilarity for Attribute Reduction (DAR)
Input: Decision system DS, ρ-constraint, fitness function ϕ and number of attributes in one combination t.
Output: One reduct A.
Step 1. Calculate the measure-value ρ(AT) over the raw attribute set AT;
Step 2. Calculate the dissimilarities between attributes such that Ψ = {Δ(a, b) : a, b ∈ AT};
          // Δ(a, b) denotes the distance between attributes a and b
Step 3. A = ∅;
Step 4. Do
               (1) Evaluate each candidate attribute a ∈ AT − A by calculating ϕ(a);
               (2) Select a qualified attribute b ∈ AT − A with the justifiable evaluation;
               (3) Obtain Ψ_b = {Δ(b, c) : c ∈ AT − (A ∪ {b})} from Ψ;
               (4) By Ψ_b, derive the attribute subset B with t − 1 attributes which bear the most striking dissimilarity to b;
               (5) A = A ∪ {b} ∪ B;
                // Selection of a combination of attributes
               (6) Calculate ρ(A);
            Until the ρ-constraint is satisfied;
Step 5. Do
               (1) ∀ a ∈ A, calculate ρ(A − {a});
               (2) If the ρ-constraint is satisfied
                       A = A − {a};
                  End
            Until A does not change or |A| = 1;
Step 6. Return A.
Compared with FGS, Algorithm 2 can significantly reduce the elapsed time of obtaining a reduct. The time complexity of Algorithm 2 is O(|U|² × |AT| × m), in which m = |AT|/t. Obviously, O(|U|² × |AT| × m) < O(|U|² × |AT|²), i.e., the complexity of Algorithm 2 is lower. However, because more than one attribute can be selected in each iteration, Algorithm 2 may derive a reduct with lower stability.
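The step that distinguishes DAR from FGS is Step 4 (3)-(5). A minimal Python sketch of this combination step, assuming the distance matrix Δ has been precomputed as a 2-D array or a dictionary-of-dictionaries called delta (a name of our own), might look as follows.

```python
def select_combination(b, A, AT, delta, t):
    """Sketch of Algorithm 2, Step 4 (3)-(5): given the attribute b chosen
    by the fitness evaluation, pick the t - 1 remaining candidates whose
    distance delta[b][c] to b is largest, so that t attributes can enter
    the potential reduct in one iteration."""
    candidates = [c for c in AT if c not in A and c != b]
    B = sorted(candidates, key=lambda c: delta[b][c], reverse=True)[:t - 1]
    return [b] + B
```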

3.2. Ensemble Selector for Attribute Reduction

As shown in Section 2.1, the fitness function is actually used to evaluate the importance of the candidate attributes. However, it must be pointed out that a single fitness function cannot characterize the importance of the candidate attributes from multiple views. Furthermore, using only one fitness function does not take the distribution of the samples into account, which leads to instability of the derived reduct. To fill this gap, Yang and Yao [13] have proposed the ensemble selector for attribute reduction, which employs multiple fitness functions to evaluate the candidate attributes; a voting mechanism can then be used to select the appropriate attribute. The detailed process is shown in the following Algorithm 3.
Algorithm 3. Ensemble Selector for Attribute Reduction (ESAR)
Input: Decision system DS, ρ-constraint and fitness functions ϕ₁, ϕ₂, …, ϕ_s.
Output: One reduct A.
Step 1. Calculate the measure-value ρ(AT) over the raw attribute set AT;
Step 2. A = ∅;
Step 3. Do
               (1) Let multiset T = ∅;
               (2) For i = 1 to s
                        (i) Evaluate each candidate attribute a ∈ AT − A by calculating ϕ_i(a);
                        (ii) Select a qualified attribute b_i ∈ AT − A with the justifiable evaluation;
                        (iii) T = T ∪ {b_i};
                  End
               (3) Select an attribute b ∈ T with the maximal frequency of occurrences;
                // Ensemble selector mechanism
               (4) A = A ∪ {b};
               (5) Calculate ρ(A);
            Until the ρ-constraint is satisfied;
Step 4. Do
               (1) ∀ a ∈ A, calculate ρ(A − {a});
               (2) If the ρ-constraint is satisfied
                       A = A − {a};
                  End
            Until A does not change or |A| = 1;
Step 5. Return A.
Compared with Algorithms 1 and 2, the time complexity of Algorithm 3 is significantly increased, mainly because multiple fitness functions are used. Without loss of generality, the time complexity of Algorithm 3 is O(|U|² × |AT|² × s), in which s is the number of used fitness functions.
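The ensemble selector of Step 3 can be sketched in a few lines of Python; the helper below is our own illustration, with fitness_functions standing for ϕ₁, …, ϕ_s, and ties in the vote broken arbitrarily.

```python
from collections import Counter

def ensemble_select(candidates, A, fitness_functions):
    """Sketch of Algorithm 3, Step 3 (1)-(3): each fitness function
    nominates its best candidate; the candidate with the maximal
    frequency of occurrences in the multiset T is returned."""
    T = Counter()
    for phi in fitness_functions:
        T[max(candidates, key=lambda a: phi(a, A))] += 1
    return T.most_common(1)[0][0]   # ties are broken arbitrarily
```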

3.3. A New Hybrid Mechanism for Attribute Reduction

Reviewing the research on attribute reduction, most of the previous approaches pay much attention to improving the performance of one aspect. For example, compared with Algorithm 1, Algorithm 2 can significantly reduce the elapsed time of calculating a reduct, while Algorithm 3 can generate a reduct with higher stability. However, as pointed out in Section 3.1 and Section 3.2, these two algorithms are likely to degrade performance in some other aspects: Algorithm 2 may derive a reduct with lower stability because more than one attribute is selected in each iteration, and Algorithm 3 may incur higher time consumption because multiple fitness functions must be used.
Without loss of generality, it is expected that an algorithm for deriving a reduct should possess the following three characteristics.
(1) Low time consumption of deriving the reduct. Though many accelerators have been proposed for quickly deriving a reduct, the dissimilarity approach presented in Algorithm 2 will be used in our research; although such an algorithm yields a reduct with low stability, it is possible for us to optimize it so that a reduct with high stability can be obtained quickly.
(2) High stability of the derived reduct. To search a reduct with high stability, the ensemble selector presented in Algorithm 3 will be introduced into our research. Though such an algorithm may contribute to a reduct with high stability, it frequently results in a high time consumption of obtaining the reduct; again, it is possible for us to optimize such an algorithm so that a reduct with high stability can be obtained quickly.
(3) Competent classification ability of the derived reduct. In the studies of Yang and Yao [13] and Rao et al. [17], it has been pointed out that the reducts obtained by using Algorithms 2 and 3 possess justifiable classification ability. For this reason, it is possible that the combination of those two algorithms can also preserve competent classification ability.
Therefore, a new hybrid mechanism for attribute reduction will be proposed. The specific process is shown in the following Algorithm 4.
Algorithm 4. Hybrid Mechanism for Attribute Reduction (HMAR)
Input: Decision system DS, ρ-constraint, fitness functions ϕ₁, ϕ₂, …, ϕ_s and number of attributes in one combination t.
Output: One reduct A.
Step 1. Calculate the measure-value ρ(AT) over the raw attribute set AT;
Step 2. Calculate the dissimilarities between attributes such that Ψ = {Δ(a, b) : a, b ∈ AT};
         // Δ(a, b) denotes the distance between attributes a and b
Step 3. A = ∅;
Step 4. Do
               (1) Let multiset T = ∅;
               (2) For i = 1 to s
                        (i) Evaluate each candidate attribute a ∈ AT − A by calculating ϕ_i(a);
                        (ii) Select a qualified attribute b_i ∈ AT − A with the justifiable evaluation;
                        (iii) T = T ∪ {b_i};
                  End
               (3) Select an attribute b ∈ T with the maximal frequency of occurrences;
               // Ensemble selector mechanism
               (4) Obtain Ψ_b = {Δ(b, c) : c ∈ AT − (A ∪ {b})} from Ψ;
               (5) By Ψ_b, derive the attribute subset B with t − 1 attributes which bear the most striking dissimilarity to b;
              // Using the main idea of the dissimilarity approach
               (6) A = A ∪ {b} ∪ B;
               (7) Calculate ρ(A);
               Until the ρ-constraint is satisfied;
Step 5. Do
               (1) ∀ a ∈ A, calculate ρ(A − {a});
               (2) If the ρ-constraint is satisfied
                        A = A − {a};
                  End
               Until A does not change or |A| = 1;
Step 6. Return A.
In Algorithm 4, on the one hand, the ensemble selector is employed, which provides higher stability. On the other hand, the dissimilarity between candidate attributes is also taken into account, so multiple attributes can be added into the potential reduct simultaneously, which contributes to a lower time complexity. Summing the cost over the s fitness functions, the time complexity of Algorithm 4 is O(|U|² × |AT| × m × s), in which m = |AT|/t and s is the number of used fitness functions. Obviously, O(|U|² × |AT| × m) < O(|U|² × |AT| × m × s) < O(|U|² × |AT|² × s), i.e., though the time complexity of Algorithm 4 is higher than that of Algorithm 2, it is lower than that of Algorithm 3.
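Reusing the two helpers sketched above (ensemble_select after Algorithm 3 and select_combination after Algorithm 2), the adding phase of HMAR can be sketched as below; the removal phase is identical to that of Algorithm 1 and is omitted. As before, rho and threshold are placeholder names of our own for the measure ρ and the ρ-constraint.

```python
def hmar_adding_phase(AT, rho, fitness_functions, delta, t, threshold):
    """Sketch of Algorithm 4, Steps 3-4: ensemble voting picks b, then the
    t - 1 attributes most dissimilar to b join the reduct together with it."""
    A = []
    while rho(A) < threshold:
        candidates = [a for a in AT if a not in A]
        b = ensemble_select(candidates, A, fitness_functions)  # ensemble selector
        A.extend(select_combination(b, A, AT, delta, t))       # dissimilarity step
    return A
```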

4. Experimental Analysis

4.1. Data Sets and Configuration

To demonstrate the effectiveness of the algorithm proposed in this paper, 20 UCI data sets have been selected for the experiments. The detailed description of those data sets is shown in Table 1. All the experiments were carried out on a personal computer with Windows 10, an AMD R7 3750H CPU (2.30 GHz) and 8.00 GB memory. The programming language is Matlab R2017a.

4.2. Experimental Setup

In the following experiments, the neighborhood rough set [8,10,29] is employed to define the forms of attribute reduction. Note that 5-fold cross-validation is used in our experiments to test the performances of the reducts. In other words, for each data set, the set of raw samples is randomly partitioned into 5 groups of the same size; 4 groups compose the training samples for computing reducts and the remaining group is regarded as the testing samples. The threshold of approximation quality is set to 0.95 (95%). Such a value is beneficial for avoiding the problems caused by overly strict constraints and for reducing the time consumption of the experiments.
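For concreteness, the following Python sketch shows one common way to compute the approximation quality used as ρ here, following the usual neighborhood rough set construction [8]: a sample belongs to the positive region iff all samples in its δ-neighborhood over the selected attributes share its decision class. The Euclidean metric and the radius delta are our own assumptions for illustration; the paper does not report its neighborhood parameters.

```python
import numpy as np

def approximation_quality(X, y, attrs, delta=0.2):
    """Fraction of samples whose delta-neighborhood (measured over the
    attributes in attrs) is pure with respect to the decision labels y."""
    sub = X[:, attrs]
    pos = 0
    for i in range(len(sub)):
        dist = np.linalg.norm(sub - sub[i], axis=1)   # distances to sample i
        neighbors = dist <= delta                     # delta-neighborhood of i
        if np.all(y[neighbors] == y[i]):              # neighborhood is pure
            pos += 1
    return pos / len(sub)
```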
Furthermore, five state-of-the-art algorithms are selected for comparison with our proposed algorithm, as listed below.
(1) Forward Greedy Searching (FGS) [8].
(2) Attribute Group for Attribute Reduction (AGAR) [14].
(3) Ensemble Selector for Attribute Reduction (ESAR) [13].
(4) Dissimilarity for Attribute Reduction (DAR) [17].
(5) Data-Guidance for Attribute Reduction (DGAR) [30].

4.3. Comparisons of Stability

In this section, the stabilities of the reducts obtained by using the different algorithms are compared with each other. The detailed results are shown in Table 2.
To further reveal the differences between the stability of the reducts obtained by using HMAR and the other five algorithms from a statistical perspective, the changing ratios related to the stability of the reducts under the two measures are shown in Table 3 and Table 4; the way these ratios appear to be computed is sketched below.
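The paper does not state the formula for the changing ratio explicitly, but the reported values are consistent with the relative change of HMAR against each competitor:

$$\mathrm{ratio}(\mathrm{HMAR}, X) = \frac{v_{\mathrm{HMAR}} - v_{X}}{v_{X}},$$

where v denotes the compared quantity (stability, elapsed time or classification accuracy) and X is one of the five competitors. For instance, on "Ionosphere" under Akashata's measure, (0.4416 − 0.2562)/0.2562 ≈ 0.7240, which matches the HMAR & FGS entry of Table 3.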
Following Table 2, Table 3 and Table 4, it is not difficult to observe that the stability of the reducts obtained by using HMAR is relatively high in terms of Akashata's and Nogueira's measures in most cases. Taking the "Ionosphere" data set and Akashata's measure as an example, the stabilities of the reducts obtained by using FGS, AGAR, ESAR, DAR, DGAR and HMAR are 0.2562, 0.1213, 0.6545, 0.2986, 0.2818 and 0.4416, respectively. Obviously, though the stability of the reduct obtained by using HMAR is lower than that of ESAR, which is designed to generate reducts with high stability, the reduct obtained by using HMAR has higher stability than those of FGS, AGAR, DAR and DGAR.
Furthermore, the above conclusion can be verified from the perspective of the changing ratio related to stability. For example, for "Ionosphere", the changing ratios of stability under Akashata's measure are 0.7240, 2.6415, −0.3252, 0.4790 and 0.5669, and under Nogueira's measure they are 0.6772, 2.7850, −0.4651, 0.3397 and 0.3121, respectively. It can be observed that, compared with FGS, AGAR, DAR and DGAR, the changing ratios related to the stability of the reduct obtained by using HMAR are greater than 0, which means that the stability of the reduct obtained by using HMAR is higher than that obtained by those four approaches.

4.4. Comparisons of Elapsed Time

In this section, the elapsed time of obtaining reducts and the changing ratio related to the elapsed time of deriving reducts are shown in Table 5 and Table 6.
With a deep investigation of Table 5 and Table 6, it is not difficult to see that the time consumption of obtaining a reduct by using HMAR is significantly lower than that of ESAR. Taking the "Fertility" data set as an example, the elapsed times of obtaining reducts by using FGS, AGAR, ESAR, DAR, DGAR and HMAR are 0.0161, 0.0137, 0.0217, 0.0050, 0.0724 and 0.0175 s, respectively. Obviously, though the elapsed time of obtaining the reduct by using HMAR is higher than those of FGS, AGAR and DAR, it is lower than that of ESAR.
Furthermore, from the perspective of the changing ratio related to the time consumption, the above conclusion can be further verified. For example, on this data set the changing ratio of the elapsed time of obtaining the reduct by using HMAR relative to that of ESAR is −0.1936, which means that the elapsed time of obtaining the reduct by using HMAR is clearly lower than that of ESAR.

4.5. Comparisons of Classification Performances

In this section, the classification accuracies of the reducts obtained by the six different algorithms are compared. The KNN classifier is employed to test the classification performance, with the parameter k set to 5. The corresponding results and the changing ratios related to the classification accuracies are presented in Table 7 and Table 8, respectively.
From Table 7 and Table 8, it is not difficult to observe that our proposed approach does not lead to an obviously poorer classification accuracy compared with the other approaches. Taking the "Dermatology" data set as an example, with k = 5, the classification accuracies of the reducts obtained by using FGS, AGAR, ESAR, DAR, DGAR and HMAR are 0.9273, 0.8990, 0.9344, 0.9262, 0.9488 and 0.8744, respectively.
Furthermore, from the perspective of the changing ratio related to the classification accuracies, the above conclusion can be further verified. For example, the corresponding changing ratios are −0.0571, −0.0273, −0.0642, −0.0560 and −0.0785. It can be observed that the changing ratios of the classification accuracy of the reduct obtained by using HMAR relative to those of the other five approaches are between −0.1 and 0.1, which means that our proposed approach performs similarly to the compared algorithms in classification ability.

4.6. Discussion of Experimental Results

In Section 4.3, the stability of the reduct is discussed. In most cases, the reducts obtained by using ESAR and HMAR have relatively high stability. Taking the "Ionosphere" data set and Akashata's measure as an example, the stabilities of the reducts obtained by using FGS, AGAR, ESAR, DAR, DGAR and HMAR are 0.2562, 0.1213, 0.6545, 0.2986, 0.2818 and 0.4416, respectively.
In Section 4.4, the time consumption of obtaining the reduct is discussed. For most data sets, the elapsed time of obtaining the reduct by using HMAR is less than that of ESAR. Taking the "Fertility" data set as an example, the elapsed times of obtaining reducts by using FGS, AGAR, ESAR, DAR, DGAR and HMAR are 0.0161, 0.0137, 0.0217, 0.0050, 0.0724 and 0.0175 s, respectively.
In Section 4.5, the classification ability of the reduct is discussed. The reducts obtained by using FGS, AGAR, ESAR, DAR and HMAR have similar classification ability. Taking the "Dermatology" data set as an example, if the KNN classifier with k = 5 is used, then the classification accuracies of the reducts obtained by using FGS, AGAR, ESAR, DAR, DGAR and HMAR are 0.9273, 0.8990, 0.9344, 0.9262, 0.9488 and 0.8744, respectively.
Obviously, our proposed HMAR approach can be used to generate a reduct with high stability. Furthermore, compared with previous approaches that generate reducts with high stability, our approach can obtain a reduct in less time. Concurrently, it must be pointed out that the reduct obtained by using HMAR is equipped with justifiable classification ability. These results mainly come from the fact that both the ensemble selector and the acceleration strategy are used in our hybrid searching.

5. Conclusions and Future Perspectives

In this paper, through considering multiple characteristics related to the searching of a reduct, a hybrid searching mechanism has been explored and developed. Different from previous approaches, in which only a single characteristic is fully considered, our approach simultaneously takes into account the time consumption of deriving the reduct, the stability of the derived reduct and the classification ability offered by the reduct. The experimental results demonstrate that our proposed approach can make a trade-off between the stability of the derived reduct and the elapsed time of searching the reduct, mainly because both the ensemble selector and the acceleration strategy are used in our hybrid searching. Moreover, the reduct derived by our approach also provides competent classification performance compared with several state-of-the-art approaches. Nevertheless, in terms of the time consumption of obtaining the reduct and the classification ability of the obtained reduct, our approach still has some limitations. Therefore, we will confront the following challenges in further research.
(1) The elapsed time of obtaining reducts can be further reduced by combining some other acceleration strategies [31,32].
(2) The supervised information granulation [33,34] strategy can be further introduced into our approach for improving the generalization performance offered by the reduct.

Author Contributions

Conceptualization, W.Y.; Data curation, W.Y., Y.C., J.S., H.Y. and X.Y.; Formal analysis, Y.C., J.S., H.Y. and X.Y.; Funding acquisition, X.Y.; Investigation, X.Y.; Methodology, X.Y.; Project administration, X.Y.; Software, W.Y.; Writing–original draft, W.Y.; Writing–review & editing, W.Y. and X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of China (Nos. 62076111, 62006099, 62006128, 61906078), Natural Science Foundation of Jiangsu, China (No. BK20191457), Open Project Foundation of Intelligent Information Processing Key Laboratory of Shanxi Province (No. CICIP2020004) and the Key Laboratory of Oceanographic Big Data Mining & Application of Zhejiang Province (No. OBDMA202002).

Data Availability Statement

Data available in a publicly accessible repository that does not issue DOIs. Publicly available datasets were analyzed in this study. This data can be found here: http://archive.ics.uci.edu/ml/datasets/Diabetes.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, Y.; Song, J.J.; Liu, K.Y.; Lin, Y.J.; Yang, X.B. Combined accelerator for attribute reduction: A sample perspective. Math. Probl. Eng. 2020, 2020, 2350627. [Google Scholar] [CrossRef]
  2. Jia, X.Y.; Rao, Y.; Shang, L.; Li, T.J. Similarity-based attribute reduction in rough set theory: A clustering perspective. Int. J. Mach. Learn. Cybern. 2020, 11, 1047–1060. [Google Scholar] [CrossRef]
  3. Pawlak, Z. Rough Sets: Theoretical Aspects of Reasoning about Data; Kluwer Academic Publishers: Amsterdam, The Netherlands, 1992. [Google Scholar]
  4. Tsang, E.C.C.; Song, J.J.; Chen, D.G.; Yang, X.B. Order based hierarchies on hesitant fuzzy approximation space. Int. J. Mach. Learn. Cybern. 2019, 10, 1407–1422. [Google Scholar] [CrossRef]
  5. Tsang, E.C.C.; Hu, Q.H.; Chen, D.G. Feature and instance reduction for PNN classifiers based on fuzzy rough sets. Int. J. Mach. Learn. Cybern. 2016, 7, 1–11. [Google Scholar]
  6. Wang, C.Z.; Huang, Y.; Shao, M.W.; Fan, X.D. Fuzzy rough set-based attribute reduction using distance measures. Knowl. Based Syst. 2019, 164, 205–212. [Google Scholar] [CrossRef]
  7. Hu, Q.H.; Zhang, L.; Chen, D.G.; Pedrycz, W.; Yu, D.R. Gaussian kernel based fuzzy rough sets: Model uncertainty measures and applications. Int. J. Approx. Reason. 2010, 51, 453–471. [Google Scholar] [CrossRef] [Green Version]
  8. Hu, Q.H.; Yu, D.R.; Liu, J.F.; Wu, C.X. Neighborhood rough set based heterogeneous feature subset selection. Inf. Sci. 2008, 178, 3577–3594. [Google Scholar] [CrossRef]
  9. Ko, Y.C.; Fujita, H. An evidential analytics for buried information in big data samples: Case study of semiconductor manufacturing. Inf. Sci. 2019, 486, 190–203. [Google Scholar] [CrossRef]
  10. Liu, K.Y.; Yang, X.B.; Yu, H.L.; Mi, J.S.; Wang, P.X.; Chen, X.J. Rough set based semi-supervised feature selection via ensemble selector. Knowl. Based Syst. 2019, 165, 282–296. [Google Scholar] [CrossRef]
  11. Hu, Q.H.; Zhang, L.J.; Zhou, Y.C.; Pedrycz, W. Large-scale multi-modality attribute reduction with multi-kernel fuzzy rough sets. IEEE Trans. Fuzzy Syst. 2018, 26, 226–238. [Google Scholar] [CrossRef]
  12. Qian, Y.H.; Wang, Q.; Cheng, H.H.; Liang, J.Y.; Dang, C.Y. Fuzzy-rough feature selection accelerator. Fuzzy Sets Syst. 2017, 258, 61–78. [Google Scholar] [CrossRef]
  13. Yang, X.B.; Yao, Y.Y. Ensemble selector for attribute reduction. Appl. Soft Comput. 2018, 70, 1–11. [Google Scholar] [CrossRef]
  14. Chen, Y.; Liu, K.Y.; Song, J.J.; Fujita, H.; Qian, Y.H. Attribute group for attribute reduction. Inf. Sci. 2020, 535, 64–80. [Google Scholar] [CrossRef]
  15. Gao, Y.; Chen, X.J.; Yang, X.B.; Wang, P.X.; Mi, J.S. Ensemble-based neighborhood attribute reduction: A multigranularity view. Complexity 2019, 2019, 2048934. [Google Scholar] [CrossRef]
  16. Liu, K.Y.; Yang, X.B.; Fujita, H.; Liu, D.; Yang, X.; Qian, Y.H. An efficient selector for multi-granularity attribute reduction. Inf. Sci. 2019, 505, 457–472. [Google Scholar] [CrossRef]
  17. Rao, X.S.; Yang, X.B.; Yang, X.; Chen, X.J.; Liu, D.; Qian, Y.H. Quickly calculating reduct: An attribute relationship based approach. Knowl. Based Syst. 2020, 200, 106014. [Google Scholar] [CrossRef]
  18. Song, J.J.; Tsang, E.C.C.; Chen, D.G.; Yang, X.B. Minimal decision cost reduct in fuzzy decision-theoretic rough set model. Knowl. Based Syst. 2017, 126, 104–112. [Google Scholar] [CrossRef]
  19. Wang, Y.B.; Chen, X.J.; Dong, K. Attribute reduction via local conditional entropy. Int. J. Mach. Learn. Cybern. 2019, 10, 3619–3634. [Google Scholar] [CrossRef]
  20. Li, J.Z.; Yang, X.B.; Song, X.N.; Li, J.H.; Wang, P.X.; Yu, D.J. Neighborhood attribute reduction: A multi-criterion approach. Int. J. Mach. Learn. Cybern. 2019, 10, 731–742. [Google Scholar] [CrossRef]
  21. Jia, X.Y.; Shang, L.; Zhou, B.; Yao, Y.Y. Generalized attribute reduct in rough set theory. Knowl. Based Syst. 2016, 91, 204–218. [Google Scholar] [CrossRef]
  22. Yao, Y.Y.; Zhao, Y.; Wang, J. On reduct construction algorithms. Trans. Comput. Sci. II 2008, 5150, 100–117. [Google Scholar]
  23. Ju, H.R.; Yang, X.B.; Yu, H.L.; Li, T.J.; Du, D.J.; Yang, J.Y. Cost-sensitive rough set approach. Inf. Sci. 2016, 355, 282–298. [Google Scholar]
  24. Yang, X.B.; Xu, S.P.; Dou, H.L.; Song, X.N.; Yu, H.L.; Yang, J.Y. Multigranulation rough set: A multiset based strategy. Int. J. Comput. Intell. Syst. 2017, 10, 277–292. [Google Scholar] [CrossRef] [Green Version]
  25. Dunne, K.; Cunningham, P.; Azuaje, F. Solutions to instability problems with sequential wrapper-based approaches to feature selection. J. Mach. Learn. Res. 2002, 1–22. Available online: https://www.scss.tcd.ie/publications/tech-reports/reports.02/TCD-CS-2002-28.pdf (accessed on 8 January 2021).
  26. Zhang, M.; Zhang, L.; Zou, J.F.; Yao, C.; Xiao, H.; Liu, Q.; Wang, J.; Wang, D.; Wang, C.G.; Guo, Z. Evaluating reproducibility of differential expression discoveries in microarray studies by considering correlated molecular changes. Bioinformatics 2009, 25, 1662–1668. [Google Scholar] [CrossRef]
  27. Naik, A.K.; Kuppili, V.; Edla, D.R. A new hybrid stability measure for feature selection. Appl. Intell. 2020, 50, 3471–3486. [Google Scholar] [CrossRef]
  28. Nogueira, S.; Sechidis, K.; Brown, G. On the stability of feature selection algorithms. J. Mach. Learn. Res. 2018, 18, 1–54. [Google Scholar]
  29. Jiang, Z.H.; Yang, X.B.; Yu, H.L.; Liu, D.; Wang, P.X.; Qian, Y.H. Accelerator for multi-granularity attribute reduction. Knowl. Based Syst. 2019, 177, 145–158. [Google Scholar] [CrossRef]
  30. Jiang, Z.H.; Dou, H.L.; Song, J.J.; Wang, P.X.; Yang, X.B.; Qian, Y.H. Data-guided multi-granularity selector for attribute reduction. Appl. Intell. 2020. [Google Scholar] [CrossRef]
  31. Liu, Y.; Huang, W.L.; Jiang, Y.L.; Zeng, Z.Y. Quick attribute reduct algorithm for neighborhood rough set model. Inf. Sci. 2014, 271, 65–81. [Google Scholar] [CrossRef]
  32. Qian, Y.H.; Liang, J.Y.; Pedrycz, W.; Dang, C.Y. Positive approximation: An accelerator for attribute reduction in rough set theory. Artif. Intell. 2010, 174, 597–618. [Google Scholar] [CrossRef] [Green Version]
  33. Liu, K.Y.; Yang, X.B.; Yu, H.L.; Fujita, H. Supervised information granulation strategy for attribute reduction. Int. J. Mach. Learn. Cybern. 2020, 11, 2149–2163. [Google Scholar] [CrossRef]
  34. Jiang, Z.H.; Liu, K.Y.; Yang, X.B.; Yu, H.L.; Qian, Y.H. Accelerator for supervised neighborhood based attribute reduction. Int. J. Approx. Reason. 2019, 119, 122–150. [Google Scholar] [CrossRef]
Figure 1. The framework of hybrid mechanism.
Table 1. Data sets description.
ID | Data Sets | Samples | Attributes | Decision Classes
1 | Breast Cancer Wisconsin (Diagnostic) | 569 | 30 | 2
2 | Connectionist Bench (Sonar, Mines vs. Rocks) | 208 | 60 | 2
3 | Dermatology | 366 | 34 | 6
4 | Fertility | 100 | 9 | 2
5 | Forest Type Mapping | 523 | 27 | 4
6 | Glass Identification | 214 | 9 | 6
7 | Ionosphere | 351 | 34 | 2
8 | Libras Movement | 360 | 90 | 15
9 | LSVT Voice Rehabilitation | 126 | 256 | 2
10 | Lymphography | 98 | 18 | 3
11 | QSAR Biodegradation | 1055 | 41 | 2
12 | Quality Assessment of Digital Colposcopies | 287 | 62 | 2
13 | Statlog (Australian Credit Approval) | 690 | 14 | 2
14 | Statlog (Heart) | 270 | 13 | 2
15 | Statlog (Image Segmentation) | 2310 | 18 | 7
16 | Steel Plates Faults | 1941 | 33 | 2
17 | Synthetic Control Chart Time Series | 600 | 60 | 6
18 | Urban Land Cover | 675 | 147 | 9
19 | Waveform Database Generator (Version 1) | 5000 | 21 | 3
20 | Wine | 178 | 13 | 3
Table 2. Stability of reducts based on different measures.
   | Akashata's Measure | Nogueira's Measure
ID | FGS | AGAR | ESAR | DAR | DGAR | HMAR | FGS | AGAR | ESAR | DAR | DGAR | HMAR
1 | 0.4376 | 0.1822 | 0.8602 | 0.3218 | 0.3419 | 0.4379 | 0.4126 | 0.1617 | 0.8632 | 0.3596 | 0.3782 | 0.3935
2 | 0.1090 | 0.1077 | 0.9400 | 0.5233 | 0.0610 | 0.6964 | 0.1048 | 0.1035 | 0.8333 | 0.5142 | 0.0580 | 0.6305
3 | 0.5212 | 0.2893 | 0.9529 | 0.3992 | 0.5202 | 0.4933 | 0.4292 | 0.2442 | 0.7882 | 0.3311 | 0.3976 | 0.4234
4 | 0.4074 | 0.2442 | 0.7732 | 0.4601 | 0.2255 | 0.7157 | 0.4246 | 0.2516 | 0.9042 | 0.6065 | 0.3546 | 0.7274
5 | 0.2006 | 0.2457 | 0.8084 | 0.6306 | 0.3716 | 0.2745 | 0.3719 | 0.2588 | 0.9089 | 0.7023 | 0.5784 | 0.4402
6 | 0.2766 | 0.2041 | 1.0000 | 0.5754 | 0.5463 | 0.2908 | 0.4257 | 0.2417 | 1.0000 | 0.7776 | 0.7135 | 0.4389
7 | 0.2562 | 0.1213 | 0.6545 | 0.2986 | 0.2818 | 0.4416 | 0.2236 | 0.0991 | 0.7011 | 0.2799 | 0.2858 | 0.3750
8 | 0.3152 | 0.0923 | 0.7122 | 0.3102 | 0.2372 | 0.5412 | 0.2992 | 0.1091 | 0.8588 | 0.2949 | 0.3304 | 0.4629
9 | 0.2964 | 0.2914 | 0.9826 | 0.3807 | 0.8621 | 0.7349 | 0.3072 | 0.3025 | 0.9178 | 0.3876 | 0.7324 | 0.7257
10 | 0.4763 | 0.2500 | 0.9130 | 0.1723 | 0.3761 | 0.4964 | 0.3882 | 0.2097 | 0.8933 | 0.1275 | 0.2685 | 0.4013
11 | 0.8079 | 0.4508 | 0.5458 | 0.6669 | 0.3626 | 0.8083 | 0.7016 | 0.4217 | 0.7529 | 0.7824 | 0.6108 | 0.7129
12 | 0.3981 | 0.3877 | 1.0000 | 0.5867 | 0.4768 | 0.7933 | 0.3981 | 0.3877 | 1.0000 | 0.5867 | 0.4983 | 0.7933
13 | 0.5537 | 0.3129 | 0.6449 | 0.9493 | 0.4495 | 0.5537 | 0.7037 | 0.3278 | 0.8548 | 0.9793 | 0.6331 | 0.7037
14 | 0.7561 | 0.3097 | 0.9282 | 0.1332 | 0.5158 | 0.4592 | 0.7194 | 0.2896 | 0.9705 | 0.3318 | 0.5467 | 0.4139
15 | 0.5620 | 0.3396 | 1.0000 | 0.8977 | 0.4722 | 0.6383 | 0.7296 | 0.3506 | 1.0000 | 0.9576 | 0.6279 | 0.8050
16 | 0.9583 | 0.8941 | 0.8927 | 0.8628 | 0.8781 | 0.9716 | 0.9657 | 0.9049 | 0.9345 | 0.8371 | 0.8237 | 0.9788
17 | 0.3443 | 0.2441 | 0.9470 | 0.5399 | 0.6782 | 0.7534 | 0.3335 | 0.2346 | 0.8720 | 0.5330 | 0.6001 | 0.6988
18 | 0.2927 | 0.2494 | 0.9594 | 0.5600 | 0.3615 | 0.6861 | 0.2902 | 0.2471 | 0.9002 | 0.5467 | 0.3359 | 0.6338
19 | 0.2991 | 0.1192 | 0.8556 | 0.3828 | 0.4245 | 0.5248 | 0.3200 | 0.1135 | 0.9288 | 0.4304 | 0.4613 | 0.5132
20 | 0.3349 | 0.2423 | 0.8913 | 0.4258 | 0.5288 | 0.4619 | 0.2801 | 0.2020 | 0.8854 | 0.4095 | 0.4471 | 0.3938
Average | 0.4302 | 0.2789 | 0.8631 | 0.5039 | 0.4486 | 0.5887 | 0.4414 | 0.2731 | 0.8884 | 0.5388 | 0.4841 | 0.5833
Table 3. The changing ratio related to stability of reducts based on Akashata's measure.
ID | HMAR & FGS | HMAR & AGAR | HMAR & ESAR | HMAR & DAR | HMAR & DGAR
1 | 0.0009 | 1.4036 | −0.4909 | 0.3610 | 0.2807
2 | 5.3867 | 5.4643 | −0.2592 | 0.3309 | 10.4126
3 | −0.0535 | 0.7050 | −0.4823 | 0.2359 | −0.0517
4 | 0.7570 | 1.9303 | −0.0743 | 0.5556 | 2.1740
5 | 0.3685 | 0.1175 | −0.6604 | −0.5647 | −0.2613
6 | 0.0516 | 0.4252 | −0.7092 | −0.4946 | −0.4676
7 | 0.7240 | 2.6415 | −0.3252 | 0.4790 | 0.5669
8 | 0.7169 | 4.8644 | −0.2401 | 0.7447 | 1.2817
9 | 1.4792 | 1.5223 | −0.2521 | 0.9302 | −0.1475
10 | 0.0422 | 0.9860 | −0.4563 | 1.8810 | 0.3199
11 | 0.0006 | 0.7931 | 0.4809 | 0.2121 | 1.2290
12 | 0.9929 | 1.0460 | −0.2067 | 0.3523 | 0.6638
13 | 0.0000 | 0.7695 | −0.1414 | −0.4167 | 0.2319
14 | −0.3926 | 0.4829 | −0.5053 | 2.4467 | −0.1097
15 | 0.1357 | 0.8795 | −0.3617 | −0.2890 | 0.3518
16 | 0.0139 | 0.0868 | 0.0884 | 0.1261 | 0.1065
17 | 1.1881 | 2.0861 | −0.2044 | 0.3954 | 0.1109
18 | 1.3438 | 1.7509 | −0.2849 | 0.2252 | 0.8982
19 | 0.7546 | 3.4040 | −0.3866 | 0.3710 | 0.2362
20 | 0.3791 | 0.9061 | −0.4818 | 0.0848 | −0.1265
Average | 0.6945 | 1.6132 | −0.2977 | 0.3984 | 0.8850
Table 4. The changing ratio related to stability of reducts based on Nogueira's measure.
ID | HMAR & FGS | HMAR & AGAR | HMAR & ESAR | HMAR & DAR | HMAR & DGAR
1 | −0.0463 | 1.4333 | −0.5441 | 0.0943 | 0.0406
2 | 5.0186 | 5.0904 | −0.2434 | 0.2261 | 9.8752
3 | −0.0134 | 0.7341 | −0.4628 | 0.2786 | 0.0651
4 | 0.7132 | 1.8906 | −0.1956 | 0.1992 | 1.0515
5 | 0.1837 | 0.7006 | −0.5157 | −0.3733 | −0.2390
6 | 0.0310 | 0.8158 | −0.5611 | −0.4356 | −0.3848
7 | 0.6772 | 2.7850 | −0.4651 | 0.3397 | 0.3121
8 | 0.5472 | 3.2431 | −0.4610 | 0.5697 | 0.4009
9 | 1.3624 | 1.3986 | −0.2093 | 0.8720 | −0.0092
10 | 0.0339 | 0.9138 | −0.5507 | 2.1466 | 0.4946
11 | 0.0161 | 0.6907 | −0.0531 | −0.0888 | 0.1672
12 | 0.9929 | 1.0460 | −0.2067 | 0.3523 | 0.5920
13 | 0.0000 | 1.1466 | −0.1768 | −0.2814 | 0.1115
14 | −0.4247 | 0.4291 | −0.5735 | 0.2473 | −0.2430
15 | 0.1034 | 1.2963 | −0.1950 | −0.1594 | 0.2820
16 | 0.0136 | 0.0816 | 0.0474 | 0.1693 | 0.1883
17 | 1.0954 | 1.9788 | −0.1986 | 0.3112 | 0.1644
18 | 1.1841 | 1.5649 | −0.2959 | 0.1593 | 0.8867
19 | 0.6039 | 3.5202 | −0.4474 | 0.1923 | 0.1125
20 | 0.4060 | 0.9493 | −0.5552 | −0.0383 | −0.1191
Average | 0.6249 | 1.5854 | −0.3432 | 0.2391 | 0.6875
Table 5. The elapsed time of obtaining the reducts (seconds).
ID | FGS | AGAR | ESAR | DAR | DGAR | HMAR
1 | 2.2809 | 2.0935 | 2.5233 | 0.5945 | 12.6258 | 1.9478
2 | 0.1373 | 0.1221 | 0.3528 | 0.0722 | 1.5168 | 0.2124
3 | 0.2474 | 0.2100 | 0.5156 | 0.1093 | 2.2288 | 0.2640
4 | 0.0161 | 0.0137 | 0.0217 | 0.0050 | 0.0724 | 0.0175
5 | 1.7524 | 1.3389 | 1.5266 | 0.3328 | 7.2682 | 1.3123
6 | 0.0252 | 0.0185 | 0.0318 | 0.0058 | 0.3710 | 0.0252
7 | 0.2694 | 0.2641 | 0.4229 | 0.0919 | 1.8400 | 0.2420
8 | 2.5036 | 1.6398 | 6.7867 | 0.4981 | 13.3538 | 1.7414
9 | 0.2086 | 0.2201 | 2.2628 | 0.2399 | 16.6473 | 2.0956
10 | 0.0150 | 0.0130 | 0.0420 | 0.0070 | 0.1270 | 0.0251
11 | 25.7033 | 23.6295 | 29.4748 | 5.7647 | 122.9442 | 21.1693
12 | 0.0505 | 0.0541 | 0.9240 | 0.0259 | 0.8575 | 0.0823
13 | 1.4953 | 1.0072 | 1.1725 | 0.2832 | 5.9333 | 1.0844
14 | 0.0740 | 0.0652 | 0.1114 | 0.0220 | 0.3521 | 0.0762
15 | 28.7973 | 22.0794 | 22.8940 | 5.7499 | 305.8399 | 20.2366
16 | 11.7424 | 16.2051 | 68.6182 | 8.1615 | 154.9014 | 16.4106
17 | 2.1993 | 1.9011 | 1.1649 | 1.0003 | 23.8315 | 0.7656
18 | 8.9002 | 8.0912 | 9.4483 | 5.9042 | 141.7593 | 6.5005
19 | 142.1194 | 112.4183 | 120.0497 | 32.7325 | 632.6569 | 103.9153
20 | 0.0286 | 0.0256 | 0.0480 | 0.0114 | 0.1828 | 0.0360
Average | 11.4283 | 9.5705 | 13.4196 | 3.0806 | 72.2655 | 8.9080
Table 6. The changing ratio related to the elapsed time of deriving reducts.
ID | HMAR & FGS | HMAR & AGAR | HMAR & ESAR | HMAR & DAR | HMAR & DGAR
1 | −0.1460 | −0.0696 | −0.2281 | 2.2765 | −0.8457
2 | 0.5477 | 0.7397 | −0.3978 | 1.9413 | −0.8600
3 | 0.0673 | 0.2570 | −0.4879 | 1.4144 | −0.8816
4 | 0.0871 | 0.2828 | −0.1936 | 2.4795 | −0.7579
5 | −0.2512 | −0.0199 | −0.1404 | 2.9433 | −0.8194
6 | 0.0006 | 0.3646 | −0.2060 | 3.3309 | −0.9320
7 | −0.1015 | −0.0838 | −0.4278 | 1.6345 | −0.8685
8 | −0.3044 | 0.0620 | −0.7434 | 2.4965 | −0.8696
9 | 9.0439 | 8.5205 | −0.0739 | 7.7355 | −0.8741
10 | 0.6782 | 0.9374 | −0.4014 | 2.6051 | −0.8020
11 | −0.1764 | −0.1041 | −0.2818 | 2.6722 | −0.8278
12 | 0.6299 | 0.5203 | −0.9110 | 2.1718 | −0.9041
13 | −0.2748 | 0.0766 | −0.0752 | 2.8284 | −0.8172
14 | 0.0303 | 0.1697 | −0.3155 | 2.4605 | −0.7835
15 | −0.2973 | −0.0835 | −0.1161 | 2.5195 | −0.9338
16 | 0.3976 | 0.0127 | −0.7608 | 1.0107 | −0.8941
17 | −0.6519 | −0.5973 | −0.3428 | −0.2347 | −0.9679
18 | −0.2696 | −0.1966 | −0.3120 | 0.1010 | −0.9541
19 | −0.2688 | −0.0756 | −0.1344 | 2.1747 | −0.8357
20 | 0.2592 | 0.4060 | −0.2507 | 2.1671 | −0.8031
Average | 0.4500 | 0.5559 | −0.3400 | 2.3364 | −0.8616
Table 7. Classification accuracies based on KNN classifier (k = 5).
ID | FGS | AGAR | ESAR | DAR | DGAR | HMAR
1 | 0.9609 | 0.9637 | 0.9647 | 0.9629 | 0.9614 | 0.9634
2 | 0.7669 | 0.7726 | 0.7703 | 0.7965 | 0.7879 | 0.7390
3 | 0.9273 | 0.8990 | 0.9344 | 0.9262 | 0.9488 | 0.8744
4 | 0.8635 | 0.8630 | 0.8600 | 0.8655 | 0.8710 | 0.8790
5 | 0.8762 | 0.8816 | 0.8792 | 0.8820 | 0.8801 | 0.8770
6 | 0.6426 | 0.6540 | 0.6399 | 0.6399 | 0.6408 | 0.6422
7 | 0.8567 | 0.8454 | 0.8633 | 0.8633 | 0.8666 | 0.8461
8 | 0.6982 | 0.6879 | 0.7250 | 0.6739 | 0.7125 | 0.5939
9 | 0.8199 | 0.8202 | 0.7544 | 0.8127 | 0.7437 | 0.6838
10 | 0.7050 | 0.7278 | 0.7292 | 0.7005 | 0.7886 | 0.7164
11 | 0.8570 | 0.8564 | 0.8543 | 0.8562 | 0.8553 | 0.8557
12 | 0.7514 | 0.7514 | 0.7842 | 0.7512 | 0.7295 | 0.7393
13 | 0.8436 | 0.8499 | 0.8435 | 0.8435 | 0.8435 | 0.8436
14 | 0.8181 | 0.8106 | 0.8056 | 0.8031 | 0.8183 | 0.8120
15 | 0.9526 | 0.9518 | 0.9528 | 0.9527 | 0.9527 | 0.9524
16 | 0.9991 | 0.9982 | 0.9785 | 0.9958 | 0.9984 | 0.9996
17 | 0.8189 | 0.8497 | 0.5089 | 0.7855 | 0.6984 | 0.6687
18 | 0.7314 | 0.7281 | 0.7631 | 0.7368 | 0.7354 | 0.7235
19 | 0.7937 | 0.7923 | 0.8113 | 0.8058 | 0.7984 | 0.7935
20 | 0.9563 | 0.9516 | 0.9103 | 0.9598 | 0.9646 | 0.9337
Average | 0.8320 | 0.8328 | 0.8166 | 0.8307 | 0.8298 | 0.8069
Table 8. The changing ratio related to classification accuracies based on KNN classifier.
ID | HMAR & FGS | HMAR & AGAR | HMAR & ESAR | HMAR & DAR | HMAR & DGAR
1 | 0.0026 | −0.0004 | −0.0014 | 0.0005 | 0.0021
2 | −0.0363 | −0.0435 | −0.0406 | −0.0722 | −0.0621
3 | −0.0571 | −0.0273 | −0.0642 | −0.0560 | −0.0785
4 | 0.0180 | 0.0185 | 0.0221 | 0.0156 | 0.0092
5 | 0.0009 | −0.0052 | −0.0025 | −0.0056 | −0.0036
6 | −0.0007 | −0.0180 | 0.0036 | 0.0036 | 0.0022
7 | −0.0124 | 0.0008 | −0.0199 | −0.0200 | −0.0236
8 | −0.1494 | −0.1367 | −0.1808 | −0.1187 | −0.1665
9 | −0.1659 | −0.1663 | −0.0935 | −0.1585 | −0.0805
10 | 0.0161 | −0.0157 | −0.0175 | 0.0226 | −0.0916
11 | −0.0015 | −0.0008 | 0.0017 | −0.0005 | 0.0006
12 | −0.0161 | −0.0161 | −0.0573 | −0.0159 | 0.0135
13 | 0.0000 | −0.0074 | 0.0002 | 0.0002 | 0.0002
14 | −0.0075 | 0.0018 | 0.0080 | 0.0111 | −0.0077
15 | −0.0001 | 0.0007 | −0.0004 | −0.0003 | −0.0002
16 | 0.0005 | 0.0014 | 0.0216 | 0.0038 | 0.0012
17 | −0.1835 | −0.2130 | 0.3139 | −0.1487 | −0.0426
18 | −0.0108 | −0.0063 | −0.0519 | −0.0181 | −0.0162
19 | −0.0003 | 0.0015 | −0.0219 | −0.0153 | −0.0062
20 | −0.0235 | −0.0187 | 0.0258 | −0.0271 | −0.0320
Average | −0.0314 | −0.0325 | −0.0078 | −0.0300 | −0.0291