Divide-and-Merge Parallel Hierarchical Ensemble DNNs with Local Knowledge Augmentation
Abstract
1. Introduction
- (1) As a deep ensemble scheme with a layer-by-layer architecture, the model achieves a more comprehensive representation of the original input. The deep hierarchical representation ensures that valuable knowledge is effectively captured rather than discarded. The model adopts a lightweight DNN as its basic building block, whose parameter count does not vary; therefore, it can be trained quickly.
- (2) The deep ensemble model exploits the predictions from all previous levels to improve generalization, which conforms to the theory of stacked generalization [25] and thus opens up the manifold structure of the original input space. Through the proposed stacking architecture, the augmented features progressively move away from the original manifold in both serial and parallel manners, yielding improved classification performance.
- (3) The knowledge augmentation strategy applied to the local data partitions of each level preserves valuable information and supplements the original input with new knowledge. The proposed model is highly parallelizable, since all learners at the same level can run in parallel on multiple CPU computing nodes.
2. Related Work
3. Proposed Method
3.1. Basic Building Block
3.1.1. Autoencoder
3.1.2. Other Variants of Autoencoder
3.2. The Proposed Architecture of PH-E-DNN
3.3. Knowledge Augmentation Based on Regional Subsets
Algorithm 1: Basic building block: deep neural network with autoencoder

Require: Input data.
Ensure: The actual output label of each basic building block.
1: Corrupt the input data to obtain the corrupted input.
2: Perform encoding and decoding to obtain the network output.
3: Calculate the objective function in Equation (8).
4: Fine-tune the parameters (W, b) with the backpropagation algorithm.
5: Repeat Steps 2 to 4 until the cost function converges.
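A minimal NumPy sketch of the loop in Algorithm 1, assuming a single hidden layer with tied weights, additive Gaussian corruption, and a squared-error objective; the layer sizes, noise level, and learning rate here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dae(X, n_hidden=30, noise=0.1, lr=0.1, epochs=50):
    """Denoising autoencoder with tied weights (assumed architecture)."""
    n, d = X.shape
    W = rng.normal(0, 0.1, (d, n_hidden))  # shared encoder/decoder weights
    b = np.zeros(n_hidden)                 # hidden bias
    c = np.zeros(d)                        # reconstruction bias
    for _ in range(epochs):
        # Step 1: corrupt the input with additive Gaussian noise
        X_tilde = X + noise * rng.standard_normal(X.shape)
        # Step 2: encode and decode
        H = sigmoid(X_tilde @ W + b)
        X_hat = sigmoid(H @ W.T + c)
        # Step 3: squared reconstruction error against the clean input
        err = X_hat - X
        # Step 4: backpropagate through both paths of the tied weights
        d_Xhat = err * X_hat * (1 - X_hat)
        d_H = (d_Xhat @ W) * H * (1 - H)
        W -= lr * (X_tilde.T @ d_H + d_Xhat.T @ H) / n
        b -= lr * d_H.mean(axis=0)
        c -= lr * d_Xhat.mean(axis=0)
    return W, b, c

X = rng.random((100, 8))
W, b, c = train_dae(X)
H = sigmoid(X @ W + b)   # hidden representation used by the building block
print(H.shape)  # (100, 30)
```

In the full building block, a classification layer on top of H would supply the output label; convergence would be checked on the cost rather than a fixed epoch count.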
Algorithm 2: PH-E-DNN

Require: Input feature matrix and corresponding label T, the number of sub-models in each level L, the number of hidden units, the number of epochs, and the batch size.
Ensure: Fused decision output.
1: Initialization: Use the FCM algorithm to divide the total dataset into multiple regional subsets.
2: Parallel training of sub-models: Call Algorithm 1 to train several DNNs in parallel on these subsets, determining the classification label of each.
3: Creation of augmented data: Treat the predicted classification labels as the created knowledge and concatenate this new feature with the previous input to form the fused feature. Collecting all local augmented data yields the new total dataset. Here, K denotes the new feature learned by the architecture, A the combined feature integrating the new feature with the previous input, and D the new features aggregated from all regional subsets.
4: Repeat the process of the first local knowledge augmentation to obtain the augmented global dataset of the next level.
5: Repeat the process once more to obtain the final global dataset.
6: Parallel learning of sub-models: Call Algorithm 1 on the global dataset in parallel to obtain each individual result.
7: Final decision: Determine the class of each testing sample by majority vote.
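One level of Algorithm 2 can be sketched as follows, with simplifying stand-ins labeled plainly: k-means replaces FCM, and a nearest-class-mean rule replaces the DNN sub-model of Algorithm 1. The partition, parallel training, knowledge augmentation, and majority-vote steps mirror Steps 1 to 3 and 6 to 7; everything else is an assumption for illustration.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Hard clustering as a stand-in for FCM (Step 1)."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return lab

def train_submodel(args):
    """Per-class mean classifier as a stand-in for Algorithm 1 (Step 2)."""
    Xs, ys = args
    return {c: Xs[ys == c].mean(0) for c in np.unique(ys)}

def predict(means, X):
    classes = list(means)
    d = np.stack([((X - means[c]) ** 2).sum(1) for c in classes])
    return np.array(classes)[d.argmin(0)]

X = rng.random((120, 5))
y = (X[:, 0] > 0.5).astype(int)

part = kmeans(X, 3)                                   # Step 1: regional subsets
subsets = [(X[part == j], y[part == j]) for j in range(3)]
with ThreadPoolExecutor() as ex:                      # Step 2: parallel training
    models = list(ex.map(train_submodel, subsets))

votes = np.stack([predict(m, X) for m in models])     # Step 6: individual results
fused = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)  # Step 7
A = np.hstack([X, fused[:, None]])                    # Step 3: augmented features
print(A.shape)  # (120, 6)
```

In the full model, A would feed the next level's sub-models, and the majority vote would be taken only at the final level on held-out test samples.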
4. Experiments
4.1. Benchmark Datasets
4.2. Experimental Setup
4.3. Experimental Analysis and Comparison of Results
5. Case Study
5.1. Datasets
5.2. Comparison with Other Approaches
5.3. Discussion and Analysis
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Kim, J.; Nguyen, A.D.; Lee, S. Deep CNN-based blind image quality predictor. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 11–24.
2. Zhang, C.; Lim, P.; Qin, A.K.; Tan, K.C. Multiobjective deep belief networks ensemble for remaining useful life estimation in prognostics. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2306–2318.
3. Zhu, L.; Hill, D.J.; Lu, C. Intelligent short-term voltage stability assessment via spatial attention rectified RNN learning. IEEE Trans. Ind. Inform. 2021, 17, 7005–7016.
4. Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
5. Deng, L.; Yu, D. Deep learning: Methods and applications. Found. Trends Signal Process. 2014, 7, 197–387.
6. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507.
7. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554.
8. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408.
9. Xu, J.; Xiang, L.; Liu, Q.; Gilmore, H.; Wu, J.; Tang, J.; Madabhushi, A. Stacked sparse autoencoder (SSAE) for nuclei detection on breast cancer histopathology images. IEEE Trans. Med. Imaging 2016, 35, 119–130.
10. D'Angelo, G.; Palmieri, F. A stacked autoencoder-based convolutional and recurrent deep neural network for detecting cyberattacks in interconnected power control systems. Int. J. Intell. Syst. 2021, 36, 7080–7102.
11. Zeng, K.; Yu, J.; Wang, R.; Li, C.; Tao, D. Coupled deep autoencoder for single image super-resolution. IEEE Trans. Cybern. 2017, 47, 27–37.
12. Graves, A.; Mohamed, A.-R.; Hinton, G. Speech recognition with deep recurrent neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing; IEEE: Piscataway, NJ, USA, 2013; pp. 6645–6649.
13. Zhang, H.; Wang, Z.; Liu, D. A comprehensive review of stability analysis of continuous-time recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1229–1262.
14. Chen, C.L.P.; Liu, Z. Broad learning system: An effective and efficient incremental learning system without the need for deep architecture. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 10–24.
15. Chen, C.L.P.; Liu, Z.; Feng, S. Universal approximation capability of broad learning system and its structural variations. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1191–1204.
16. Chen, Z.; Wu, M.; Gao, K.; Wu, J.; Ding, J.; Zeng, Z.; Li, X. A novel ensemble deep learning approach for sleep-wake detection using heart rate variability and acceleration. IEEE Trans. Emerg. Top. Comput. Intell. 2021, 5, 803–812.
17. Zheng, J.; Cao, X.; Zhang, B.; Zhen, X.; Su, X. Deep ensemble machine for video classification. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 553–565.
18. Sun, Q.; Ge, Z. Gated stacked target-related autoencoder: A novel deep feature extraction and layerwise ensemble method for industrial soft sensor application. IEEE Trans. Cybern. 2022, 52, 3457–3468.
19. Gou, J.; He, X.; Du, L.; Yu, B.; Chen, W.; Yi, Z. Hierarchical locality-aware deep dictionary learning for classification. IEEE Trans. Multimed. 2024, 26, 447–461.
20. Zhang, W.; Wu, Q.M.J.; Yang, Y.; Akilan, T.; Zhang, H. A width-growth model with subnetwork nodes and refinement structure for representation learning and image classification. IEEE Trans. Ind. Inform. 2021, 17, 1562–1572.
21. Duan, M.; Li, K.; Liao, X.; Li, K. A parallel multiclassification algorithm for big data using an extreme learning machine. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 2337–2351.
22. Lin, K.P. A novel evolutionary kernel intuitionistic fuzzy c-means clustering algorithm. IEEE Trans. Fuzzy Syst. 2014, 22, 1074–1087.
23. Narayanan, S.J.; Baby, C.J.; Perumal, B.; Bhatt, R.B.; Cheng, X.; Ghalib, M.R.; Shankar, A. Fuzzy decision trees embedded with evolutionary fuzzy clustering for locating users using wireless signal strength in an indoor environment. Int. J. Intell. Syst. 2021, 36, 4280–4297.
24. Zhang, X.; Nojima, Y.; Ishibuchi, H.; Hu, W.; Wang, S. Prediction by fuzzy clustering and KNN on validation data with parallel ensemble of interpretable TSK fuzzy classifiers. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 400–414.
25. Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259.
26. Zhang, L.; Zhou, W.; Jiao, L. Wavelet support vector machine. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2004, 34, 34–39.
27. Oskoei, M.A.; Hu, H. Support vector machine-based classification scheme for myoelectric control applied to upper limb. IEEE Trans. Biomed. Eng. 2008, 55, 1956–1965.
28. Huang, G.B.; Zhou, H.; Ding, X.; Zhang, R. Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2012, 42, 513–529.
29. Liang, N.Y.; Huang, G.B.; Saratchandran, P.; Sundararajan, N. A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans. Neural Netw. 2006, 17, 1411–1423.
30. Zhao, P.; Fang, J.; Jie, C.; Zhang, J.; Wang, E.; Zhang, S. Multiscale deep learning reparameterized full waveform inversion with the adjoint method. IEEE Trans. Geosci. Remote Sens. 2025, 63, 1–12.
31. Wang, B.; Pineau, J. Online bagging and boosting for imbalanced data streams. IEEE Trans. Knowl. Data Eng. 2016, 28, 3353–3366.
32. Deng, L.; Yu, D. Deep convex net: A scalable architecture for speech pattern classification. In Proceedings of the Twelfth Annual Conference of the International Speech Communication Association; ISCA: Florence, Italy, 2011; pp. 2285–2288.
33. Wang, G.; Zhang, G.; Choi, K.S.; Lu, J. Deep additive least squares support vector machines for classification with model transfer. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 1527–1540.
34. Zhou, T.; Chung, F.; Wang, S. Deep TSK fuzzy classifier with stacked generalization and triplely concise interpretability guarantee for large data. IEEE Trans. Fuzzy Syst. 2017, 25, 1207–1221.
35. Li, D.; Chi, Z.; Wang, B.; Wang, Z.; Yang, H.; Du, W. Entropy-based hybrid sampling ensemble learning for imbalanced data. Int. J. Intell. Syst. 2021, 36, 3039–3067.
36. Bai, Z.; Huang, G.B.; Wang, D.; Wang, H.; Westover, M.B. Sparse extreme learning machine for classification. IEEE Trans. Cybern. 2014, 44, 1858–1870.
37. Fan, B.; Lu, X.; Li, H.X. Probabilistic inference-based least squares support vector machine for modeling under noisy environment. IEEE Trans. Syst. Man Cybern. Syst. 2016, 46, 1703–1710.
38. Razzak, I.; Blumenstein, M.; Xu, G. Multiclass support matrix machines by maximizing the inter-class margin for single trial EEG classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 1117–1127.
39. Sun, S.; Dong, Z.; Zhao, J. Conditional random fields for multiview sequential data modeling. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 1242–1253.
40. Bache, K.; Lichman, M. UCI Machine Learning Repository; University of California, School of Information and Computer Science: Irvine, CA, USA, 2013. Available online: http://archive.ics.uci.edu/ml (accessed on 28 July 2024).
41. Derrac, J.; Garcia, S.; Sanchez, L.; Herrera, F. KEEL data-mining software tool: Data set repository, integration of algorithms and experimental analysis framework. J. Mult.-Valued Log. Soft Comput. 2011, 17, 255–287.
42. Li, Y.; Zhu, Q.; Liu, Z. Deep learning for image reconstruction in electrical tomography: A review. IEEE Sens. J. 2025, 25, 14522–14538.
Dataset | Samples | Features | Classes |
---|---|---|---|
Winequality (WIN) | 4898 | 11 | 7 |
Waveform3 (WAV) | 5000 | 21 | 3 |
Pageblock (PAG) | 5472 | 10 | 5 |
Optdigits (OPT) | 5620 | 64 | 10 |
Satimage (SAT) | 6435 | 36 | 6 |
Pendigits (PEND) | 7494 | 16 | 10 |
Mushroom (MUS) | 8124 | 21 | 2 |
Penbased (PENB) | 10,992 | 16 | 10 |
Nursery (NUR) | 12,960 | 8 | 5 |
Magic (MAG) | 19,020 | 10 | 2 |
Letter (LET) | 20,000 | 16 | 26 |
Adult (ADU) | 48,841 | 14 | 2 |
Shuttle (SHU) | 57,999 | 10 | 7 |
Connect-4 (CON) | 67,557 | 42 | 3 |
ARWS (ARWS) | 75,128 | 8 | 4 |
HARS (HARS) | 10,299 | 561 | 6 |
Dataset | Hidden Units | Epochs | Batchsize |
---|---|---|---|
WIN | 20 | 50 | 10 |
WAV | 30 | 30 | 10 |
PAG | 30 | 50 | 10 |
OPT | 30 | 30 | 10 |
SAT | 30 | 50 | 10 |
PEND | 30 | 50 | 10 |
MUS | 30 | 50 | 20 |
PENB | 30 | 50 | 10 |
NUR | 20 | 100 | 10 |
MAG | 30 | 60 | 20 |
LET | 20 | 50 | 10 |
ADU | 40 | 40 | 20 |
SHU | 10 | 50 | 100 |
CON | 50 | 50 | 60 |
ARWS | 20 | 50 | 100 |
HARS | 50 | 50 | 60 |
Dataset | Training Accuracy | Testing Accuracy | Training Time | Testing Time |
---|---|---|---|---|
WIN | 0.5415 (0.0070) | 0.5358 (0.0152) | 2.5350 (0.1092) | 0.0012 (0.0005) |
WAV | 0.8681 (0.0068) | 0.8605 (0.0103) | 1.9949 (0.0561) | 0.0012 (0.0005) |
PAG | 0.9519 (0.0045) | 0.9475 (0.0069) | 3.3017 (0.0481) | 0.0105 (0.0416) |
OPT | 0.9905 (0.0012) | 0.9782 (0.0059) | 2.8216 (0.0186) | 0.0018 (0.0004) |
SAT | 0.8937 (0.0046) | 0.8832 (0.0109) | 4.8640 (0.1638) | 0.0020 (0.0009) |
PEND | 0.9686 (0.0169) | 0.9626 (0.0188) | 4.5772 (0.0290) | 0.0015 (0.0005) |
MUS | 0.9986 (0.0026) | 0.9983 (0.0033) | 5.3348 (0.2391) | 0.0017 (0.0007) |
PENB | 0.9573 (0.0130) | 0.9546 (0.0135) | 6.6304 (0.0811) | 0.0021 (0.0005) |
NUR | 0.9708 (0.0036) | 0.9681 (0.0051) | 12.2063 (0.0828) | 0.0012 (0.0002) |
MAG | 0.8561 (0.0052) | 0.8536 (0.0089) | 7.2087 (0.0790) | 0.0022 (0.0005) |
LET | 0.7933 (0.0080) | 0.7886 (0.0100) | 11.4005 (0.0992) | 0.0030 (0.0004) |
ADU | 0.8537 (0.0027) | 0.8515 (0.0029) | 14.5649 (0.4329) | 0.0079 (0.0010) |
SHU | 0.9720 (0.0154) | 0.9718 (0.0150) | 5.9225 (0.1595) | 0.0036 (0.0007) |
CON | 0.7763 (0.0077) | 0.7691 (0.0063) | 26.0889 (0.9251) | 0.0138 (0.0011) |
ARWS | 0.9458 (0.0190) | 0.9451 (0.0185) | 6.2372 (0.0307) | 0.0032 (0.0005) |
HARS | 0.9843 (0.0057) | 0.9761 (0.0059) | 29.1670 (0.2035) | 0.0091 (0.0007) |
Dataset | Knowledge Augmentation in Level 1 | Knowledge Augmentation in Level 2 | Knowledge Augmentation in Level 3 |
---|---|---|---|
WIN | 1.31 (0.25) | 1.53 (0.23) | 1.62 (0.25) |
WAV | 1.02 (0.20) | 1.07 (0.13) | 0.96 (0.09) |
PAG | 1.82 (0.40) | 1.99 (0.41) | 2.46 (0.39) |
OPT | 1.68 (0.25) | 1.61 (0.12) | 1.75 (0.12) |
SAT | 2.30 (0.18) | 2.33 (0.04) | 2.42 (0.13) |
PEND | 2.07 (0.37) | 2.21 (0.37) | 2.60 (0.18) |
MUS | 2.54 (0.18) | 2.43 (0.06) | 2.96 (0.12) |
PENB | 3.10 (0.50) | 3.91 (0.36) | 4.12 (0.12) |
NUR | 6.84 (0.22) | 4.80 (0.14) | 4.84 (0.15) |
MAG | 4.42 (0.45) | 4.70 (1.10) | 5.16 (1.10) |
LET | 5.41 (0.50) | 5.25 (0.36) | 5.63 (0.32) |
ADU | 8.71 (0.22) | 9.16 (0.61) | 9.42 (0.76) |
SHU | 4.91 (0.45) | 6.00 (0.52) | 6.39 (0.48) |
CON | 16.36 (0.50) | 17.12 (1.56) | 16.78 (0.53) |
ARWS | 5.03 (0.70) | 4.50 (0.52) | 4.58 (0.59) |
HARS | 25.04 (1.71) | 24.37 (1.24) | 24.73 (2.08) |
Dataset | Knowledge Augmentation in Level 1 | Knowledge Augmentation in Level 2 | Knowledge Augmentation in Level 3 |
---|---|---|---|
WIN | 1.05 (0.36) | 1.07 (0.14) | 1.13 (0.13) |
WAV | 0.95 (0.28) | 0.81 (0.10) | 0.88 (0.11) |
PAG | 1.80 (0.34) | 1.73 (0.13) | 1.71 (0.13) |
OPT | 1.64 (0.25) | 1.61 (0.23) | 1.65 (0.25) |
SAT | 1.84 (0.40) | 1.73 (0.21) | 1.68 (0.22) |
PEND | 1.70 (0.25) | 1.69 (0.21) | 1.64 (0.17) |
MUS | 1.64 (0.30) | 1.76 (0.12) | 1.80 (0.20) |
PENB | 2.52 (0.28) | 2.40 (0.18) | 2.42 (0.17) |
NUR | 4.20 (0.32) | 5.07 (0.19) | 5.21 (0.16) |
MAG | 3.35 (0.55) | 3.33 (0.45) | 3.48 (0.50) |
LET | 4.05 (0.36) | 4.35 (0.40) | 4.03 (0.47) |
ADU | 9.28 (1.41) | 7.60 (0.92) | 7.76 (1.17) |
SHU | 4.46 (0.63) | 5.15 (1.08) | 5.99 (0.77) |
CON | 17.36 (1.01) | 18.90 (1.97) | 19.31 (1.84) |
ARWS | 3.85 (0.65) | 4.37 (0.84) | 4.38 (0.50) |
HARS | 23.85 (2.47) | 23.65 (2.12) | 24.59 (3.98) |
Dataset | Training (L = 3) | Testing (L = 3) | Training (L = 5) | Testing (L = 5) |
---|---|---|---|---|
WIN | 0.5545 (0.0059) | 0.5439 (0.0142) | 0.5550 (0.0047) | 0.5493 (0.0140) |
WAV | 0.8749 (0.0037) | 0.8674 (0.0082) | 0.8730 (0.0031) | 0.8690 (0.0107) |
PAG | 0.9516 (0.0031) | 0.9504 (0.0063) | 0.9513 (0.0013) | 0.9508 (0.0076) |
OPT | 0.9931 (0.0011) | 0.9883 (0.0033) | 0.9928 (0.0010) | 0.9861 (0.0023) |
SAT | 0.8954 (0.0043) | 0.8894 (0.0084) | 0.8975 (0.0035) | 0.8887 (0.0092) |
PEND | 0.9812 (0.0162) | 0.9797 (0.0160) | 0.9778 (0.0137) | 0.9763 (0.0148) |
MUS | 0.9999 (0.0002) | 0.9998 (0.0003) | 1 (0) | 1 (0.0001) |
PENB | 0.9807 (0.0179) | 0.9785 (0.0191) | 0.9837 (0.0128) | 0.9825 (0.0135) |
NUR | 0.9893 (0.0035) | 0.9883 (0.0039) | 0.9863 (0.0032) | 0.9854 (0.0032) |
MAG | 0.8602 (0.0026) | 0.8570 (0.0039) | 0.8612 (0.0018) | 0.8587 (0.0051) |
LET | 0.8282 (0.0043) | 0.8242 (0.0073) | 0.8356 (0.0047) | 0.8329 (0.0056) |
ADU | 0.8557 (0.0015) | 0.8528 (0.0027) | 0.8564 (0.0011) | 0.8534 (0.0034) |
SHU | 0.9807 (0.0045) | 0.9803 (0.0049) | 0.9814 (0.0047) | 0.9812 (0.0044) |
CON | 0.7836 (0.0025) | 0.7782 (0.0016) | 0.7857 (0.0031) | 0.7820 (0.0061) |
ARWS | 0.9573 (0.0056) | 0.9572 (0.0063) | 0.9539 (0.0036) | 0.9534 (0.0037) |
HARS | 0.9891 (0.0008) | 0.9879 (0.0016) | 0.9881 (0.0022) | 0.9840 (0.0025) |
Dataset | Adaboost | Bagging | SVM | SAE | SAE2 | PH-DNN | E-DNN | DBN | PH-E-DBN | PH-E-DNN |
---|---|---|---|---|---|---|---|---|---|---|
WIN | 0.4609 (0.0140) | 0.5212 (0.0104) | 0.5343 (0.0127) | 0.5358 (0.0152) | 0.5288 (0.0196) | 0.5411 (0.0217) | 0.5316 (0.0218) | 0.5330 (0.0189) | 0.5387 (0.0139) | 0.5493 (0.0140) |
WAV | 0.8153 (0.0122) | 0.8580 (0.0092) | 0.8643 (0.0100) | 0.8605 (0.0103) | 0.8591 (0.0125) | 0.8634 (0.0111) | 0.8646 (0.0116) | 0.8623 (0.0095) | 0.8648 (0.0070) | 0.8690 (0.0107) |
PAG | 0.9350 (0.0031) | 0.9459 (0.0067) | 0.9537 (0.0063) | 0.9475 (0.0069) | 0.9420 (0.0169) | 0.9490 (0.0078) | 0.9497 (0.0070) | 0.9516 (0.0078) | 0.9506 (0.0045) | 0.9508 (0.0076) |
OPT | 0.7222 (0.0201) | 0.9519 (0.0061) | 0.9762 (0.0046) | 0.9782 (0.0059) | 0.9831 (0.0036) | 0.9793 (0.0056) | 0.9818 (0.0039) | 0.9815 (0.0042) | 0.9900 (0.0032) | 0.9883 (0.0033) |
SAT | 0.7929 (0.0094) | 0.8414 (0.0085) | 0.8673 (0.0063) | 0.8832 (0.0109) | 0.8760 (0.0122) | 0.8800 (0.0097) | 0.8867 (0.0081) | 0.8742 (0.0281) | 0.8866 (0.0124) | 0.8894 (0.0084) |
PEND | 0.6827 (0.0125) | 0.8874 (0.0073) | 0.9872 (0.0027) | 0.9626 (0.0188) | 0.9645 (0.0169) | 0.9885 (0.0032) | 0.9662 (0.0145) | 0.9406 (0.0110) | 0.9660 (0.0175) | 0.9797 (0.0160) |
MUS | 0.9986 (0.0010) | 0.9319 (0.0070) | 1 (0) | 0.9983 (0.0033) | 0.9988 (0.0029) | 0.9989 (0.0014) | 0.9996 (0.0017) | 0.9994 (0.0014) | 0.9997 (0.0005) | 1 (0.0001) |
PENB | 0.6888 (0.0108) | 0.8794 (0.0043) | 0.9759 (0.0025) | 0.9546 (0.0135) | 0.9600 (0.0156) | 0.9862 (0.0047) | 0.9641 (0.0155) | 0.9470 (0.0106) | 0.9519 (0.0108) | 0.9825 (0.0135) |
NUR | 0.8278 (0.0050) | 0.7956 (0.0053) | 0.9853 (0.0022) | 0.9681 (0.0051) | 0.9571 (0.0147) | 0.9865 (0.0049) | 0.9702 (0.0043) | 0.9651 (0.0054) | 0.9689 (0.0053) | 0.9883 (0.0039) |
MAG | 0.7547 (0.0051) | 0.7834 (0.0060) | 0.8322 (0.0053) | 0.8536 (0.0089) | 0.8438 (0.0115) | 0.8551 (0.0055) | 0.8547 (0.0139) | 0.8527 (0.0053) | 0.8533 (0.0065) | 0.8587 (0.0051) |
LET | 0.4590 (0.0061) | 0.7009 (0.0076) | 0.8203 (0.0080) | 0.7886 (0.0100) | 0.8024 (0.0138) | 0.8234 (0.0082) | 0.7981 (0.0068) | 0.8193 (0.0060) | 0.8556 (0.0083) | 0.8329 (0.0056) |
ADU | 0.8319 (0.0017) | 0.8318 (0.0033) | 0.8352 (0.0028) | 0.8515 (0.0029) | 0.8453 (0.0042) | 0.8512 (0.0068) | 0.8518 (0.0033) | 0.8488 (0.0046) | 0.8505 (0.0035) | 0.8534 (0.0034) |
SHU | 0.9035 (0.0027) | 0.9436 (0.0026) | 0.9747 (0.0012) | 0.9718 (0.0150) | 0.9709 (0.0117) | 0.9766 (0.0016) | 0.9737 (0.0037) | 0.9882 (0.0089) | 0.9855 (0.0043) | 0.9812 (0.0044) |
CON | 0.6596 (0.0043) | 0.6607 (0.0049) | 0.7477 (0.0040) | 0.7691 (0.0063) | 0.7652 (0.0069) | 0.7661 (0.0045) | 0.7797 (0.0054) | 0.7044 (0.0147) | 0.7262 (0.0065) | 0.7820 (0.0061) |
ARWS | 0.8973 (0.0028) | 0.9091 (0.0018) | 0.9704 (0.0015) | 0.9451 (0.0185) | 0.9659 (0.0064) | 0.9600 (0.0044) | 0.9534 (0.0072) | 0.9709 (0.0013) | 0.9693 (0.0024) | 0.9572 (0.0063) |
HARS | 0.5326 (0.0129) | 0.9810 (0.0023) | 0.9758 (0.0025) | 0.9761 (0.0059) | 0.9845 (0.0062) | 0.9797 (0.0046) | 0.9804 (0.0036) | 0.9633 (0.0471) | 0.9824 (0.0111) | 0.9879 (0.0016) |
Dataset | Adaboost | Bagging | SVM | SAE | SAE2 | PH-DNN | E-DNN | DBN | PH-E-DBN | PH-E-DNN |
---|---|---|---|---|---|---|---|---|---|---|
WIN | 0.3518 (0.0700) | 0.5200 (0.0202) | 0.4296 (0.0124) | 0.4977 (0.0104) | 0.5122 (0.0075) | 0.5136 (0.0199) | 0.5108 (0.0101) | 0.4936 (0.0155) | 0.5166 (0.0200) | 0.5250 (0.0249) |
WAV | 0.8055 (0.0163) | 0.8170 (0.0136) | 0.8663 (0.0213) | 0.8591 (0.0167) | 0.8600 (0.0149) | 0.8621 (0.0019) | 0.8695 (0.0142) | 0.8640 (0.0085) | 0.8681 (0.0053) | 0.8731 (0.0114) |
PAG | 0.9163 (0.0055) | 0.9574 (0.0056) | 0.9003 (0.0122) | 0.9355 (0.0082) | 0.9555 (0.0020) | 0.9338 (0.0111) | 0.8695 (0.0142) | 0.9343 (0.0059) | 0.9511 (0.0007) | 0.9443 (0.0076) |
OPT | 0.7076 (0.0109) | 0.9604 (0.0036) | 0.9849 (0.0039) | 0.9803 (0.0023) | 0.9861 (0.0021) | 0.9774 (0.0054) | 0.9755 (0.0037) | 0.9833 (0.0031) | 0.9836 (0.0045) | 0.9890 (0.0026) |
SAT | 0.7859 (0.0106) | 0.8743 (0.0107) | 0.8697 (0.0052) | 0.8720 (0.0147) | 0.8850 (0.0172) | 0.8814 (0.0132) | 0.8736 (0.0179) | 0.8670 (0.0225) | 0.8713 (0.0080) | 0.8861 (0.0089) |
PEND | 0.6618 (0.0111) | 0.9635 (0.0051) | 0.9692 (0.0032) | 0.9599 (0.0166) | 0.9677 (0.0216) | 0.9877 (0.0024) | 0.9531 (0.0030) | 0.9339 (0.0065) | 0.9483 (0.0082) | 0.9778 (0.0200) |
MUS | 0.9989 (0.0008) | 0.9986 (0.0015) | 0.9647 (0.0034) | 0.9999 (0.0003) | 1 (0) | 0.9991 (0.0008) | 1 (0) | 0.9996 (0.0008) | 1 (0) | 1 (0) |
PENB | 0.6618 (0.0083) | 0.9664 (0.0024) | 0.9743 (0.0057) | 0.9648 (0.0336) | 0.9933 (0.0028) | 0.9801 (0.0066) | 0.9636 (0.0164) | 0.9457 (0.0050) | 0.9405 (0.0025) | 0.9879 (0.0082) |
NUR | 0.8168 (0.0072) | 0.9547 (0.0006) | 0.8328 (0.0096) | 0.9604 (0.0022) | 0.9622 (0.0024) | 0.9872 (0.0017) | 0.9608 (0.0038) | 0.9541 (0.0058) | 0.9468 (0.0124) | 0.9896 (0.0040) |
MAG | 0.8398 (0.0058) | 0.8499 (0.0030) | 0.8100 (0.0042) | 0.8462 (0.0149) | 0.8490 (0.0092) | 0.8521 (0.0040) | 0.8576 (0.0014) | 0.8483 (0.0074) | 0.8442 (0.0038) | 0.8589 (0.0057) |
LET | 0.1255 (0.0192) | 0.8138 (0.0038) | 0.8253 (0.0077) | 0.7877 (0.0105) | 0.8237 (0.0101) | 0.8246 (0.0071) | 0.8096 (0.0092) | 0.8158 (0.0040) | 0.8584 (0.0065) | 0.8272 (0.0073) |
ADU | 0.8497 (0.0042) | 0.8285 (0.0027) | 0.8337 (0.0042) | 0.8439 (0.0054) | 0.8479 (0.0082) | 0.8436 (0.0036) | 0.8447 (0.0041) | 0.8466 (0.0037) | 0.8434 (0.0005) | 0.8487 (0.0043) |
SHU | 0.9930 (0.0008) | 0.9982 (0.0002) | 0.9662 (0.0012) | 0.9710 (0.0050) | 0.9709 (0.0070) | 0.9749 (0.0074) | 0.9749 (0.0062) | 0.9843 (0.0072) | 0.9916 (0.0034) | 0.9763 (0.0086) |
CON | 0.5203 (0.0029) | 0.6993 (0.0032) | 0.7298 (0.0026) | 0.7283 (0.0125) | 0.7299 (0.0026) | 0.7121 (0.0044) | 0.7367 (0.0059) | 0.6921 (0.0174) | 0.6798 (0.0030) | 0.7403 (0.0048) |
ARWS | 0.8564 (0.0047) | 0.9828 (0.0017) | 0.8812 (0.0029) | 0.9657 (0.0022) | 0.9660 (0.0014) | 0.9620 (0.0033) | 0.9688 (0.0019) | 0.9683 (0.0017) | 0.9685 (0.0019) | 0.9636 (0.0016) |
HARS | 0.4052 (0.0053) | 0.8739 (0.0060) | 0.9786 (0.0034) | 0.9811 (0.0032) | 0.9812 (0.0029) | 0.9767 (0.0046) | 0.9845 (0.0029) | 0.9720 (0.0195) | 0.9783 (0.0028) | 0.9854 (0.0036) |
Dataset | Metric | CNN | DBN | SAE | SAE2 | PH-E-DNN |
---|---|---|---|---|---|---|
MNIST | Accuracy | 0.9671 | 0.9748 | 0.9744 | 0.9794 | 0.9849 |
MNIST | Time | 2282.22 | 521.90 | 688.99 | 1178.06 | 1260.03 |
Fashion-MNIST | Accuracy | 0.8704 | 0.8775 | 0.8938 | 0.8963 | 0.9378 |
Fashion-MNIST | Time | 5849.53 | 726.53 | 713.51 | 1266.53 | 1293.38 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Jiang, Z.; Dong, S.; Liu, K.; Zhou, J.; Zhang, X. Divide-and-Merge Parallel Hierarchical Ensemble DNNs with Local Knowledge Augmentation. Symmetry 2025, 17, 1362. https://doi.org/10.3390/sym17081362