Article

An Improved Bees Algorithm for Training Deep Recurrent Networks for Sentiment Classification

1 Department of Computer Engineering, Fatih Sultan Mehmet Vakif University, Istanbul 34445, Turkey
2 Department of Mechanical Engineering, University of Birmingham, Birmingham B15 2TT, UK
3 Department of Biomedical Engineering, Fatih Sultan Mehmet Vakif University, Istanbul 34445, Turkey
4 Department of Mathematical Engineering, Yildiz Technical University, Istanbul 34220, Turkey
* Author to whom correspondence should be addressed.
Academic Editors: Kóczy T. László and István A. Harmati
Symmetry 2021, 13(8), 1347; https://doi.org/10.3390/sym13081347
Received: 8 July 2021 / Revised: 21 July 2021 / Accepted: 22 July 2021 / Published: 26 July 2021
(This article belongs to the Special Issue Computational Intelligence and Soft Computing: Recent Applications)
Recurrent neural networks (RNNs) are powerful tools for learning information from temporal sequences. Designing an optimum deep RNN is difficult due to configuration and training issues, such as vanishing and exploding gradients. In this paper, a novel metaheuristic optimisation approach is proposed for training deep RNNs for the sentiment classification task. The approach employs an enhanced Ternary Bees Algorithm (BA-3+), which handles large classification datasets by considering only three individual solutions in each iteration. BA-3+ combines the collaborative search of three bees to find the optimal set of trainable parameters of the proposed deep recurrent learning architecture. Local learning with exploitative search uses a greedy selection strategy. Stochastic gradient descent (SGD) learning with singular value decomposition (SVD) handles vanishing and exploding gradients of the decision parameters through the stabilisation strategy of SVD. Global learning with explorative search achieves faster convergence without becoming trapped at local optima. BA-3+ has been tested on the sentiment classification task to classify symmetric and asymmetric distributions of datasets from different domains, including Twitter, product reviews, and movie reviews. Comparative results have been obtained against advanced deep language models and the Differential Evolution (DE) and Particle Swarm Optimization (PSO) algorithms. BA-3+ converged to the global minimum faster than the DE and PSO algorithms, and it outperformed the SGD, DE, and PSO algorithms on the Turkish and English datasets. The accuracy and F1 measure improved by at least 30–40% over the standard SGD algorithm for all classification datasets.
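The ternary scheme described above keeps only three candidate solutions per iteration: an exploitative bee performing greedy local search, a gradient bee applying an SGD-style update, and an explorative bee sampling the search space globally. The following Python sketch illustrates that idea on a toy loss; the function names, parameter values, and the numerical-gradient shortcut are illustrative assumptions, not the authors' implementation.

```python
import random

def ba3_plus_step(params, loss, neighbourhood=0.1, step=0.01):
    """One iteration of a BA-3+-style ternary search (illustrative sketch).

    Three candidate solutions are generated each iteration:
      - a local bee: greedy exploitative perturbation of the incumbent,
      - a gradient bee: SGD-style update (numerical gradient here),
      - a global bee: explorative random sample of the search space.
    Greedy selection keeps the best of the incumbent and the three bees.
    """
    def perturb(p, scale):
        return [x + random.uniform(-scale, scale) for x in p]

    def num_grad(p, eps=1e-5):
        base = loss(p)
        g = []
        for i in range(len(p)):
            q = list(p)
            q[i] += eps
            g.append((loss(q) - base) / eps)
        return g

    local_bee = perturb(params, neighbourhood)             # exploitative search
    g = num_grad(params)
    sgd_bee = [x - step * gi for x, gi in zip(params, g)]  # gradient-based search
    global_bee = [random.uniform(-1, 1) for _ in params]   # explorative search

    candidates = [params, local_bee, sgd_bee, global_bee]
    return min(candidates, key=loss)                       # greedy selection

# usage: minimise a simple quadratic loss
random.seed(0)
p = [0.8, -0.6]
for _ in range(200):
    p = ba3_plus_step(p, lambda v: sum(x * x for x in v))
print(sum(x * x for x in p))  # loss after 200 greedy iterations
```

Because the incumbent is always among the candidates, the loss is monotonically non-increasing, which mirrors the greedy selection strategy the abstract attributes to the exploitative phase.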
Accuracy rates of the RNN model trained with BA-3+ ranged from 80% to 90%, while the RNN trained with SGD achieved between 50% and 60% for most datasets. The performance of the RNN model trained with BA-3+ was as good as that of the Tree-LSTM and Recursive Neural Tensor Network (RNTN) language models, which achieved accuracy results of up to 90% for some datasets. The improved accuracy and convergence results show that BA-3+ is an efficient, stable algorithm for this complex classification task and that it can handle the vanishing and exploding gradients problem of deep RNNs.
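One way to read the SVD stabilisation strategy mentioned in the abstract is as clipping the singular values of the recurrent weight matrix so its spectral norm stays near 1, which curbs both exploding (singular values well above 1) and vanishing (well below 1) gradients through time. The NumPy sketch below illustrates that idea; the function name and clipping bounds are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def svd_stabilise(W, smin=0.9, smax=1.1):
    """Clip the singular values of a weight matrix into [smin, smax].

    Decompose W = U @ diag(s) @ Vt, clip s, and recompose, so the
    spectral norm of the returned matrix lies within the given bounds.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.clip(s, smin, smax)) @ Vt

# usage: an ill-conditioned matrix is pulled back toward unit spectral norm
W = np.array([[3.0, 0.0], [0.0, 0.01]])        # singular values 3.0 and 0.01
W_stable = svd_stabilise(W)
print(np.linalg.svd(W_stable, compute_uv=False))  # clipped into [0.9, 1.1]
```

Applying such a projection after each parameter update keeps the recurrent dynamics close to an isometry, which is the usual motivation for spectral control in deep RNN training.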
Keywords: bees algorithm; training deep neural networks; metaheuristics; opinion mining; recurrent neural networks; sentiment classification; natural language processing
MDPI and ACS Style

Zeybek, S.; Pham, D.T.; Koç, E.; Seçer, A. An Improved Bees Algorithm for Training Deep Recurrent Networks for Sentiment Classification. Symmetry 2021, 13, 1347. https://doi.org/10.3390/sym13081347
