# Hyper-Heuristic Framework for Sequential Semi-Supervised Classification Based on Core Clustering


## Abstract


## 1. Introduction

## 2. Methodology

#### 2.1. General Framework

1. Receive a new chunk of the data for testing.
2. Use the online-offline clustering block to assign the records of the chunk to their respective clusters.
3. Use the NN block to predict the classes of the records of the chunk.
4. Use the class prediction block to combine the NN predictions with the result of the online-offline clustering and perform class labeling of the clusters.
5. Compute the error between the cluster class labeling and the NN predictions; this error serves as the objective function of the GA block, which keeps modifying the parameters of the online-offline clustering block until the minimum error is reached.
6. Update the knowledge of the NN when labels of old records (ground truth) are received.
7. Use the configuration block to switch between two types of errors: the error with respect to the NN, ${E}_{p}$, and the error with respect to the ground truth, ${E}_{c}$.
8. If the stream data is not finished, go to step 1.
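The per-chunk control loop above can be sketched in Python. All component names (`cluster`, `nn`, `ga`, `label_clusters`) are hypothetical placeholders standing in for the framework's blocks, not the authors' implementation:

```python
from collections import Counter, defaultdict

def label_clusters(assignments, nn_pred):
    """Class prediction block (sketch): give each cluster the majority
    NN-predicted class of its members."""
    votes = defaultdict(Counter)
    for c, p in zip(assignments, nn_pred):
        votes[c][p] += 1
    majority = {c: v.most_common(1)[0][0] for c, v in votes.items()}
    return [majority[c] for c in assignments]

def process_stream(chunks, cluster, nn, ga, use_ground_truth_error=False):
    """One pass over the stream: cluster -> NN prediction -> cluster labeling
    -> GA tuning -> NN correction, per the eight steps above."""
    predictions = []
    for x, y_old in chunks:                    # y_old: delayed ground truth (may be None)
        assignments = cluster.assign(x)        # online-offline clustering block
        nn_pred = nn.predict(x)                # NN block
        labels = label_clusters(assignments, nn_pred)
        # objective for the GA: E_p (w.r.t. NN) or E_c (w.r.t. ground truth)
        target = y_old if (use_ground_truth_error and y_old is not None) else nn_pred
        err = sum(int(a != b) for a, b in zip(labels, target)) / len(labels)
        cluster.params = ga.minimise(err, cluster.params)
        if y_old is not None:                  # correction step: update NN knowledge
            nn.update(x, y_old)
        predictions.append(labels)
    return predictions
```

The majority-vote labeling is one plausible reading of "class labeling of the clusters"; the paper's class prediction block is detailed in Section 2.7.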

#### 2.2. Dataset Representation

#### 2.3. The Core (Online and Off-Line Phase)

#### 2.3.1. Online Phase for Micro-Clusters Update

- (1) Grid: the first stage of stream data reduction in the online phase. The grid represents the mapping of the data to a pre-defined space partitioning in the dimensions of the data. Square grids are typically the simplest. The grid granularity denotes the resolution of the grid partitioning in the different dimensions. To accommodate the evolving nature of data and consume less memory when the dimension of the data is high, the grid granularity can be included in the solution space of the meta-heuristic part of the framework. Grid granularity is denoted as $g$.
- (2) Micro-clusters update: The input of the online micro-clusters phase is the streamed data after mapping to the grid, and the output is the micro-cluster update. Micro-clusters are intermediate results of clustering before the final clusters are formed. A micro-cluster is generated from a grid using the procedure presented in Table 2. Assuming that the data are streamed with dimension m, any sample ${\mathrm{p}}_{\mathrm{t}}^{\left(\mathrm{j}\right)}={\left({\mathrm{x}}_{\mathrm{t}}^{1},{\mathrm{x}}_{\mathrm{t}}^{2},\dots,{\mathrm{x}}_{\mathrm{t}}^{\mathrm{m}}\right)}^{\left(\mathrm{j}\right)}$ has a weight given by Equation (2).$$\mathrm{w}\left({\mathrm{t}}_{\mathrm{c}},{\mathrm{p}}_{\mathrm{t}}^{\left(\mathrm{j}\right)}\right)={2}^{-\mathsf{\lambda}\left({\mathrm{t}}_{\mathrm{c}}-\mathrm{t}\right)},\quad \mathsf{\lambda}>0$$
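The two online-phase operations above can be sketched directly. This is a minimal illustration, assuming a uniform cell width $g$ per dimension for the square grid; the function names are hypothetical:

```python
import math

def grid_cell(point, granularity):
    """Map a data point to its grid cell: the first stage of online
    stream reduction. `granularity` is the cell width g per dimension."""
    return tuple(int(math.floor(v / granularity)) for v in point)

def decay_weight(t_c, t, lam):
    """Weight of a sample that arrived at time t, seen at current time t_c,
    following Equation (2): w = 2^(-lambda * (t_c - t)), lambda > 0."""
    return 2.0 ** (-lam * (t_c - t))
```

With the forgetting factor $\lambda = 0.5$, a sample halves its weight every two time units, so older samples contribute progressively less to their micro-cluster.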

#### 2.3.2. Offline Phase for Clustering

#### 2.4. The Genetic Algorithm (GA)

- $g$ denotes the grid granularity.
- $MinPts$ denotes the density threshold for a micro-cluster to qualify as a core micro-cluster used in the offline phase.
- $\mathsf{\lambda}$ denotes the forgetting factor.
- ${\mathrm{T}}_{\mathrm{L}}$ denotes the threshold for demoting a core micro-cluster to an outlier.
- ${\mathrm{T}}_{\mathrm{U}}$ denotes the threshold for promoting an outlier to a core micro-cluster.

where
- ${S}_{A}^{\prime}=\left({g}_{a}{}^{\prime},MinPt{s}_{a}{}^{\prime},\text{}{\lambda}_{a}{}^{\prime},{T}_{La}{}^{\prime},\text{}{T}_{Ua}{}^{\prime}\right)$
- ${g}_{a}{}^{\prime}={L}_{g}\left({g}_{a}\right)$
- ${\lambda}_{a}{}^{\prime}={L}_{\lambda}\left({\lambda}_{a}\right)$
- $MinPt{s}_{a}{}^{\prime}={L}_{MinPts}\left(MinPt{s}_{a}\right)$
- ${T}_{La}{}^{\prime}={L}_{{T}_{L}}\left({T}_{La}\right)$ and ${T}_{Ua}{}^{\prime}={L}_{{T}_{U}}\left({T}_{Ua}\right)$
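Decoding a GA chromosome into the five clustering parameters amounts to applying one linear map $L(\cdot)$ per gene. A minimal sketch, assuming genes are encoded in $[0, 1]$ (the paper only states that the maps are linear; the exact encoding and the bound values are assumptions):

```python
def linear_map(gene, low, high):
    """Linear scaling L(.) of a normalised gene in [0, 1] into [low, high]."""
    return low + gene * (high - low)

def decode_solution(genes, bounds):
    """Decode a chromosome into the core-clustering parameters
    (g, MinPts, lambda, T_L, T_U), one linear map per gene."""
    names = ("g", "MinPts", "lambda", "T_L", "T_U")
    return {n: linear_map(x, *bounds[n]) for n, x in zip(names, genes)}

# Illustrative (assumed) parameter ranges for the solution-space boundary:
example_bounds = {"g": (0.1, 1.0), "MinPts": (2, 20),
                  "lambda": (0.01, 1.0), "T_L": (0.0, 1.0), "T_U": (1.0, 5.0)}
```

Keeping the genes normalised lets the GA's mutation and crossover operators work on a uniform scale while the decoded parameters stay inside their valid ranges.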

#### 2.5. The Neural Network

#### 2.5.1. Boosting Phase Using the Initial Chunks

- Assign arbitrary input weights ${w}_{i}$ and biases ${b}_{i}$ drawn from a random variable with center ${u}_{i}$ and standard deviation ${\sigma}_{i},\text{}i=1,\dots ,L$.
- Calculate the initial hidden layer output matrix in Equation (12).$${\mathrm{H}}_{0}={[{\mathrm{h}}_{1},\dots ,{\mathrm{h}}_{\mathrm{L}}]}^{\mathrm{T}}$$
- Estimate the initial output weight by Equation (13).$${\mathsf{\beta}}^{\left(0\right)}={\mathrm{M}}_{0}{\mathrm{H}}_{0}^{\mathrm{T}}{\mathrm{Y}}_{0}$$
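The boosting phase can be sketched with NumPy. The sigmoid activation and ${M}_{0}={({H}_{0}^{T}{H}_{0})}^{-1}$ (the standard OS-ELM initialisation) are assumptions consistent with Equation (13); the function name is hypothetical:

```python
import numpy as np

def boost_oselm(X0, Y0, W, b):
    """Boosting phase on the initial chunk (a sketch).
    X0: (N0, m) initial data, Y0: (N0, c) targets,
    W: (L, m) random input weights, b: (L,) random biases.
    Returns (beta0, M0) per Equations (12)-(13):
    H0 = g(X0 W^T + b), M0 = (H0^T H0)^{-1}, beta0 = M0 H0^T Y0."""
    H0 = 1.0 / (1.0 + np.exp(-(X0 @ W.T + b)))   # hidden-layer output matrix
    M0 = np.linalg.inv(H0.T @ H0)                # needs N0 >= L for invertibility
    beta0 = M0 @ H0.T @ Y0
    return beta0, M0
```

Since $\beta^{(0)}$ is the least-squares solution, it satisfies the normal equations ${H}_{0}^{T}({Y}_{0}-{H}_{0}\beta^{(0)})=0$, which is a convenient correctness check.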

#### 2.5.2. Sequential Learning Phase

- Calculate the hidden layer output vector by using Equation (14).$${\mathrm{h}}_{\left(\mathrm{k}+1\right)}={[\mathrm{g}\left({\mathrm{w}}_{1}.{\mathrm{x}}_{\mathrm{i}}+{\mathrm{b}}_{1}\right),\dots ,\mathrm{g}\left({\mathrm{w}}_{\mathrm{L}}.{\mathrm{x}}_{\mathrm{i}}+{\mathrm{b}}_{\mathrm{L}}\right)]}^{\mathrm{T}}$$
- Calculate the latest output weight ${\beta}^{\left(k+1\right)}$ based on the Recursive Least Squares (RLS) algorithm shown in Equations (15) and (16).$${\mathrm{M}}_{\mathrm{k}+1}={\mathrm{M}}_{\mathrm{k}}-\frac{{\mathrm{M}}_{\mathrm{k}}{\mathrm{h}}_{\mathrm{k}+1}{\mathrm{h}}_{\mathrm{k}+1}^{\mathrm{T}}{\mathrm{M}}_{\mathrm{k}}}{1+{\mathrm{h}}_{\mathrm{k}+1}^{\mathrm{T}}{\mathrm{M}}_{\mathrm{k}}{\mathrm{h}}_{\mathrm{k}+1}}$$$${\mathsf{\beta}}^{\left(\mathrm{k}+1\right)}={\mathsf{\beta}}^{\left(\mathrm{k}\right)}+{\mathrm{M}}_{\mathrm{k}+1}{\mathrm{h}}_{\mathrm{k}+1}\left({\mathrm{y}}_{\mathrm{i}}^{\mathrm{T}}-{\mathrm{h}}_{\mathrm{k}+1}^{\mathrm{T}}{\mathsf{\beta}}^{\left(\mathrm{k}\right)}\right)$$
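One sequential step of Equations (15) and (16) can be written as a single rank-one RLS update. A sketch assuming a matrix target layout consistent with the boosting phase (the function name is hypothetical):

```python
import numpy as np

def rls_update(M, beta, h, y):
    """One sequential OS-ELM step (Equations (15)-(16)).
    M: (L, L) current inverse-correlation matrix, beta: (L, c) output weights,
    h: (L,) hidden output vector for the new sample, y: (c,) its target."""
    h = h.reshape(-1, 1)
    # Equation (15): Sherman-Morrison rank-one update of M
    M = M - (M @ h @ h.T @ M) / (1.0 + float(h.T @ M @ h))
    # Equation (16): correct beta by the prediction error on the new sample
    beta = beta + M @ h @ (y.reshape(1, -1) - h.T @ beta)
    return M, beta
```

The update is exactly equivalent to re-solving the batch least-squares problem with the new sample appended, which is what makes the sequential phase cheap: each step costs $O(L^2)$ instead of a fresh $O(L^3)$ inversion.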

#### 2.6. The Configuration

#### 2.6.1. Hyper-Heuristic Variant 1

#### 2.6.2. Hyper-Heuristic Variant 2

#### 2.7. Class Prediction

#### 2.8. Big O Analysis for Computation

- where,
- ${N}_{I}$ denotes the number of iterations.
- ${N}_{s}$ denotes the number of solutions.
- ${N}_{k}$ denotes the size of the chunk.
- $\tilde{N}$ denotes the number of hidden neurons.
- $m$ denotes the number of features.

#### 2.9. Evaluation Measurement

#### Accuracy

- Precision (PPV): PPV represents the number of true positives (TP) predicted by the classifier divided by the number of all predicted positive records [33]. The formula is given in Equation (18).$$\mathrm{PPV}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}$$
- Recall (TPR): TPR represents the number of TP predicted by the classifier divided by the number of all tested positive records [34]. The formula is given in Equation (19).$$TPR=\frac{TP}{TP+FN}$$
- Specificity (TNR): TNR represents the number of true negatives (TN) predicted by the classifier divided by the number of all tested negative records [35]. The formula is given in Equation (20).$$TNR=\frac{TN}{TN+FP}$$
- NPV: This measure represents the negative predictive value, calculated as the number of predicted TN over the total number of predicted negative values [36]. The formula is given in Equation (21).$$NPV=\frac{TN}{TN+FN}$$
- G-mean: This measure is calculated on the basis of the precision and recall [33]. The formula is given in Equation (22).$$G\_mean=\frac{TP}{\sqrt{\left(TP+FP\right)\left(TP+FN\right)}}$$
- F-measure: The F-measure is the harmonic mean of the precision and recall [33]. The formula is given in Equation (23).$$F\_measure=\frac{2\times precision\times recall}{precision+recall}$$
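All of the measures above derive from the four confusion-matrix counts, so they can be computed in one place. A minimal sketch (the function name is hypothetical; the accuracy formula $(TP+TN)/(TP+FP+TN+FN)$ is the standard one implied by the section heading):

```python
import math

def classification_measures(tp, fp, tn, fn):
    """Evaluation measures of Equations (18)-(23) from confusion counts."""
    ppv = tp / (tp + fp)                          # precision, Eq. (18)
    tpr = tp / (tp + fn)                          # recall, Eq. (19)
    tnr = tn / (tn + fp)                          # specificity, Eq. (20)
    npv = tn / (tn + fn)                          # Eq. (21)
    g_mean = tp / math.sqrt((tp + fp) * (tp + fn))   # Eq. (22)
    f_measure = 2 * ppv * tpr / (ppv + tpr)       # Eq. (23)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"accuracy": accuracy, "PPV": ppv, "TPR": tpr, "TNR": tnr,
            "NPV": npv, "G-mean": g_mean, "F-measure": f_measure}
```

Note that on imbalanced streams such as intrusion-detection data, accuracy alone is misleading, which is why the G-mean and F-measure are reported alongside it.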

#### 2.10. Dataset Description

#### KDD’99

## 3. Experimental Evaluation

## 4. Conclusions and Future Work

## Author Contributions

## Funding

## Conflicts of Interest

## References

1. Chen, F.; Deng, P.; Wan, J.; Zhang, D.; Vasilakos, A.V.; Rong, X. Data mining for the internet of things: Literature review and challenges. Int. J. Distrib. Sens. Netw. **2015**, 2015.
2. Abaker, I.; Hashem, T.; Yaqoob, I.; Badrul, N.; Mokhtar, S.; Gani, A.; Ullah, S. The rise of "big data" on cloud computing: Review and open research issues. Inf. Syst. **2015**, 47, 98–115.
3. Bello-Orgaz, G.; Jung, J.J.; Camacho, D. Social big data: Recent achievements and new challenges. Inf. Fusion **2016**, 28, 45–59.
4. Lee, J.; Ardakani, H.D.; Yang, S.; Bagheri, B. Industrial big data analytics and cyber-physical systems for future maintenance & service innovation. Procedia CIRP **2015**, 38, 3–7.
5. Moustafa, N.; Creech, G.; Slay, J. Big data analytics for intrusion detection system: Statistical decision-making using finite Dirichlet mixture models. In Data Analytics and Decision Support for Cybersecurity; Springer: Cham, Switzerland, 2017.
6. Chen, M.; Ma, Y.; Song, J.; Bin, C.L. Smart clothing: Connecting human with clouds and big data for sustainable health monitoring. Mob. Netw. Appl. **2016**, 21, 825–845.
7. Goldstein, M.; Uchida, S. A comparative evaluation of unsupervised anomaly detection algorithms for multivariate data. PLoS ONE **2016**, 11, e0152173.
8. Lughofer, E.; Sayed-Mouchaweh, M. Autonomous data stream clustering implementing split-and-merge concepts—Towards a plug-and-play approach. Inf. Sci. **2015**, 304, 54–79.
9. Pool, J.; Dally, W.J. Learning Both Weights and Connections for Efficient Neural Networks. Advances in Neural Information Processing Systems. 2015, pp. 1135–1143. Available online: https://papers.nips.cc/paper/5784-learning-both-weights-and-connections-for-efficient-neural-network.pdf (accessed on 18 June 2020).
10. Kang, M.; Kang, J. Intrusion detection system using deep neural network for in-vehicle network security. PLoS ONE **2016**, 11, e0155781.
11. Maitland, E. Decision making and uncertainty: The role of heuristics and experience in assessing a politically hazardous environment. Strateg. Manag. J. **2014**, 36, 1554–1578.
12. Metiaf, A.L.I.; Hong, W.U.Q.; Aljeroudi, Y. Searching with direction awareness: Multi-objective genetic algorithm based on angle quantization and crowding distance MOGA-AQCD. IEEE Access **2018**, 7, 10196–10207.
13. Zhang, Z.; Hong, W.C.; Li, J. Electric load forecasting by hybrid self-recurrent support vector regression model with variational mode decomposition and improved cuckoo search algorithm. IEEE Access **2020**, 8, 14642–14658.
14. Kundra, H.; Sadawarti, H. Hybrid algorithm of Cuckoo Search and Particle Swarm Optimization. Res. J. Inf. Technol. **2015**, 7, 58–69.
15. Hong, W.C.; Dong, Y.; Lai, C.Y.; Chen, L.Y.; Wei, S.Y. SVR with hybrid chaotic immune algorithm for seasonal load demand forecasting. Energies **2011**, 4, 960–977.
16. Deng, S.; Wang, B.; Huang, S.; Yue, C.; Zhou, J.; Wang, G. Self-adaptive framework for efficient stream data classification on storm. IEEE Trans. Syst. Man Cybern. Syst. **2020**, 50, 123–136.
17. Li, Y.; Wang, Y.; Liu, Q.; Bi, C.; Jiang, X.; Sun, S. Incremental semi-supervised learning on streaming data. Pattern Recognit. **2019**, 88, 383–396.
18. Ksieniewicz, P.; Woźniak, M.; Cyganek, B.; Kasprzak, A.; Walkowiak, K. Data stream classification using active learned neural networks. Neurocomputing **2019**, 353, 74–82.
19. Junior, B.; do Carmo Nicoletti, M. An iterative boosting-based ensemble for streaming data classification. Inf. Fusion **2019**, 45, 66–78.
20. Casalino, G.; Castellano, G.; Mencar, C. Data stream classification by dynamic incremental semi-supervised fuzzy clustering. Int. J. Artif. Intell. Tools **2019**, 28, 1–26.
21. Noorbehbahani, F.; Fanian, A.; Mousavi, R.; Hasannejad, H. An incremental intrusion detection system using a new semi-supervised stream classification method. Int. J. Commun. Syst. **2017**, 30, 1–26.
22. Skrjanc, I.; Ozawa, S.; Ban, T.; Dov, D. Large-scale cyber attacks monitoring using Evolving Cauchy Possibilistic Clustering. Appl. Soft Comput. **2018**, 62, 592–601.
23. Sethi, T.S.; Kantardzic, M.; Hu, H. A grid density based framework for classifying streaming data in the presence of concept drift. J. Intell. Inf. Syst. **2016**, 46, 179–211.
24. Fahy, C.; Yang, S.; Gongora, M. Ant colony stream clustering: A fast density clustering algorithm for dynamic data streams. IEEE Trans. Cybern. **2019**, 49, 2215–2228.
25. Fahy, C.; Yang, S. Finding and tracking multi-density clusters in online dynamic data streams. IEEE Trans. Big Data **2019**.
26. Bai, L.; Cheng, X.; Liang, J.; Shen, H. An optimization model for clustering categorical data streams with drifting concepts. IEEE Trans. Knowl. Data Eng. **2016**, 28, 2871–2883.
27. Amini, A.; Saboohi, H.; Herawan, T.; Wah, T.Y. MuDi-Stream: A multi density clustering algorithm for evolving data stream. J. Netw. Comput. Appl. **2016**, 59, 370–385.
28. Huang, G.; Liang, N.; Rong, H.; Saratchandran, P.; Sundararajan, N. On-line sequential extreme learning machine. Comput. Intell. **2005**, 2005, 232–237.
29. Abbas, M.; Albadr, A.; Tiun, S. Extreme learning machine: A review. Int. J. Appl. Eng. Res. **2017**, 12, 4610–4623.
30. Huang, G.; Huang, G.-B.; Song, S.; You, K. Trends in extreme learning machines: A review. Neural Netw. **2015**, 61, 32–48.
31. Akusok, A.; Björk, K.; Miche, Y.; Lendasse, A. High-performance extreme learning machines: A complete toolbox for big data applications. IEEE Access **2015**, 3, 1011–1025.
32. Brownfield, B.; Lemos, T.; Kalivas, J.H. Consensus classification using non-optimized classifiers. Anal. Chem. **2018**, 90, 4429–4437.
33. Hong, X.; Chen, S.; Harris, C.J. A kernel-based two-class classifier for imbalanced data sets. IEEE Trans. Neural Netw. **2007**, 18, 28–41.
34. Joshi, M.V. On evaluating performance of classifiers for rare classes. In Proceedings of the 2002 IEEE International Conference on Data Mining (ICDM), Maebashi City, Japan, 9–12 December 2002; pp. 641–644.
35. Lan, Y.; Wang, Q.; Cole, J.R.; Rosen, G.L. Using the RDP classifier to predict taxonomic novelty and reduce the search space for finding novel organisms. PLoS ONE **2012**, 7, e32491.
36. Seliya, N.; Khoshgoftaar, T.M.; Van Hulse, J. A study on the relationships of classifier performance metrics. In Proceedings of the 2009 21st IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Newark, NJ, USA, 2–4 November 2009; pp. 59–66.
37. Tavallaee, M.; Bagheri, E.; Lu, W.; Ghorbani, A.A. A detailed analysis of the KDD CUP 99 data set. In Proceedings of the IEEE Symposium on Computational Intelligence for Security and Defense Applications, Ottawa, ON, Canada, 8–10 July 2009; pp. 1–6.

**Figure 2.** Classification measures for hyper 1 and hyper 2 with/without boosting, MuDi, and Cauchy on NSL-KDD, in terms of Accuracy (**a**), PPV (**b**), TPR (**c**), TNR (**d**), NPV (**e**), G-mean (**f**), and F-measure (**g**).

**Figure 3.** Classification measures for hyper 1 and hyper 2 with/without boosting, MuDi, and Cauchy on KDD99, in terms of Accuracy (**a**), PPV (**b**), TPR (**c**), TNR (**d**), NPV (**e**), G-mean (**f**), and F-measure (**g**).

**Figure 4.** Classification measures for hyper 1 and hyper 2 with/without boosting, MuDi, and Cauchy on LandSat, in terms of Accuracy (**a**), PPV (**b**), TPR (**c**), TNR (**d**), NPV (**e**), G-mean (**f**), and F-measure (**g**).

Blocks | Input | Output
---|---|---
Neural Network (NN) | Data features. | Class assignment.
Genetic Algorithm (GA) | Error provided from the configuration block. | Candidate solution for optimizing the error.
Configuration | Error of class prediction with respect to NN, or error of class prediction with respect to ground truth. | One of the two types of errors.
Core clustering | Data features, genetic solution. | Cluster assignment.
Class prediction | Core clustering results, NN predictions. | Classification result.

```
If (w_g(t_c) < T_L) then          // w_g from Equation (2)
    Remove(grid_list, g)          // grid_list: all grids that contain micro-clusters
    Add(outliers_list, g)         // outliers_list: all grids that contain outliers
Endif
If (w_g(t_c) > T_U) then
    Add(grid_list, g)
    Remove(outliers_list, g)
Endif
```
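The weight-based promotion/demotion above can be sketched as a small maintenance routine. This is a minimal illustration (function and variable names are hypothetical), assuming `weights` maps each grid id to its current decayed weight $w_g(t_c)$ and that $T_L < T_U$:

```python
def maintain_grids(grid_list, outliers_list, weights, T_L, T_U):
    """Demote grids whose decayed weight fell below T_L to outliers;
    promote outlier grids whose weight exceeds T_U back to
    micro-cluster grids. Returns the updated sets."""
    for g in list(grid_list):           # copy: we mutate while iterating
        if weights[g] < T_L:
            grid_list.remove(g)
            outliers_list.add(g)
    for g in list(outliers_list):
        if weights[g] > T_U:
            outliers_list.remove(g)
            grid_list.add(g)
    return grid_list, outliers_list
```

Because the weight decays exponentially (Equation (2)), grids that stop receiving samples drift below $T_L$ and are demoted automatically, which is how the core adapts to concept drift.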

```
Input:
    Data = {xi, yi}, i = 1, 2, ..., N     // online streaming data
    SSBoundary                            // boundary for the random parameters
    numberOfIterations
    MutationRate
    CrossoverRate
    ObjectiveFunction = 'CoreProcess'
    NN0                                   // the initial NN
Output:
    Yp = {ypi}, i = 1, 2, ..., N
Start
    // Boosting phase
    1.  P0 = Random                       // initialise random parameters for the core
    2.  NN0 = OSELM(x0, y0)
    3.  GA.Options = (SSBoundary, numberOfIterations, MutationRate, CrossoverRate,
                      ObjectiveFunction, classPrediction(core(P0, x0)) - y0)
    4.  P0 = GA.run
    // Iterative phase
    5.  For i = 2 until N
    6.      // Prediction
    7.      GA.Options = (SSBoundary, numberOfIterations, MutationRate, CrossoverRate,
                          ObjectiveFunction, classPrediction(core(P(i-1), xi)) - NN(i-1, xi))
    8.      Pi = GA.run
    9.      ypi = classPrediction(core(Pi, xi))
    10.     // Correction
    11.     NN(i) = OSELM(xi, yi)
    End
```

```
Input:
    Data = {xi, yi}, i = 1, 2, ..., N     // online streaming data
    SSBoundary                            // boundary for the random parameters
    numberOfIterations
    MutationRate
    CrossoverRate
    ObjectiveFunction = 'CoreProcess'
    NN0                                   // the initial NN
Output:
    Yp = {ypi}, i = 1, 2, ..., N
Start
    // Boosting phase
    1.  P0 = Random                       // initialise random parameters for the core
    2.  NN0 = OSELM(x0, y0)
    3.  GA.Options = (SSBoundary, numberOfIterations, MutationRate, CrossoverRate,
                      ObjectiveFunction, classPrediction(core(P0, x0)) - y0)
    4.  P0 = GA.run
    // Iterative phase
    5.  For i = 2 until N
    6.      // Prediction
    7.      GA.Options = (SSBoundary, numberOfIterations, MutationRate, CrossoverRate,
                          ObjectiveFunction, classPrediction(core(P(i-1), xi)) - NN(i-1, xi))
    8.      Pi = GA.run
    9.      ypi = classPrediction(core(Pi, xi))
    10.     // Correction
    11.     NN(i) = OSELM(xi, yi)
    12.     GA.Options = (SSBoundary, numberOfIterations, MutationRate, CrossoverRate,
                          ObjectiveFunction, classPrediction(core(Pi, xi)) - yi)
    13.     Pi = GA.run
    End
```

**Hyper 1**

LandSat | 25% | 50% | 75% | 100%
---|---|---|---|---
Accuracy | 0.9154 | 0.9257 | 0.9523 | 0.9523
Precision | 0.7939 | 0.8276 | 0.9028 | 0.9102
Recall | 0.7463 | 0.7770 | 0.8568 | 0.8570
Specificity | 0.9451 | 0.9488 | 0.9693 | 0.9544
NPV | 0.9497 | 0.9579 | 0.9631 | 0.9632
G-mean | 0.7698 | 0.8019 | 0.8795 | 0.8832
F-measure | 0.7694 | 0.8015 | 0.8792 | 0.8828

**Hyper 2**

LandSat | 25% | 50% | 75% | 100%
---|---|---|---|---
Accuracy | 0.9257 | 0.9458 | 0.9608 | 0.9786
Precision | 0.8276 | 0.8679 | 0.8951 | 0.9413
Recall | 0.7770 | 0.8374 | 0.8825 | 0.9357
Specificity | 0.9488 | 0.9581 | 0.9725 | 0.9894
NPV | 0.9579 | 0.9604 | 0.9721 | 0.9877
G-mean | 0.8019 | 0.8525 | 0.8888 | 0.9385
F-measure | 0.8015 | 0.8524 | 0.8888 | 0.9385

**Hyper 1**

NSL-KDD | 25% | 50% | 75% | 100%
---|---|---|---|---
Accuracy | 0.9918 | 0.9965 | 0.9969 | 0.9972
Precision | 0.9668 | 0.9828 | 0.9866 | 0.9720
Recall | 0.9072 | 0.9567 | 0.9676 | 0.9684
Specificity | 0.9910 | 0.9923 | 0.9937 | 0.9853
NPV | 0.9350 | 0.9692 | 0.9763 | 0.9888
G-mean | 0.9366 | 0.9697 | 0.9770 | 0.9702
F-measure | 0.9361 | 0.9696 | 0.9770 | 0.9702

**Hyper 2**

NSL-KDD | 25% | 50% | 75% | 100%
---|---|---|---|---
Accuracy | 0.9884 | 0.9906 | 0.9952 | 0.9969
Precision | 0.9536 | 0.9548 | 0.9601 | 0.9649
Recall | 0.8777 | 0.9016 | 0.9496 | 0.9671
Specificity | 0.9833 | 0.9836 | 0.9836 | 0.9806
NPV | 0.9216 | 0.9385 | 0.9741 | 0.9869
G-mean | 0.9149 | 0.9278 | 0.9549 | 0.9660
F-measure | 0.9141 | 0.9274 | 0.9549 | 0.9660

**Table 7.**T-test values for the comparison of the accuracies of the HH-F variants with MuDi and Cauchy.

T-Test | Hyper 1 | Hyper 2
---|---|---
Cauchy | 1.59896 × 10^{−25} | 1.1281 × 10^{−128}
MuDi | 7.64165 × 10^{−16} | 7.94039 × 10^{−16}

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Adnan, A.; Muhammed, A.; Abd Ghani, A.A.; Abdullah, A.; Hakim, F.
Hyper-Heuristic Framework for Sequential Semi-Supervised Classification Based on Core Clustering. *Symmetry* **2020**, *12*, 1292.
https://doi.org/10.3390/sym12081292
