Open Access Article

Self-Improving Generative Artificial Neural Network for Pseudorehearsal Incremental Class Learning

1 Escuela de Ingeniería C. Biomédica, Universidad de Valparaíso, Valparaíso 2362905, Chile
2 Centro de Investigación y Desarrollo en Ingeniería en Salud, CINGS-UV, Universidad de Valparaíso, Valparaíso 2362905, Chile
3 Engineering Faculty, Universidad Andres Bello, Viña del Mar 2531015, Chile
* Authors to whom correspondence should be addressed.
Algorithms 2019, 12(10), 206; https://doi.org/10.3390/a12100206
Received: 4 July 2019 / Revised: 5 September 2019 / Accepted: 9 September 2019 / Published: 1 October 2019
Deep learning models are part of the family of artificial neural networks and, as such, they suffer from catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that eases the catastrophic forgetting problem when learning new classes. In this method, we introduce a novelty-detection model that automatically detects samples of new classes, while an adversarial autoencoder is used to produce samples of previous classes. The system consists of three main modules: a classifier module implemented using a Deep Convolutional Neural Network, a generator module based on an adversarial autoencoder, and a novelty-detection module implemented using an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN retains previous knowledge while exhibiting gradual forgetting of each learning sequence at a rate of about 7% per training step. Moreover, SIGANN detects new classes hidden in the data with a median accuracy of 43% and can therefore proceed with incremental class learning.
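To make the novelty-detection step concrete, the following sketch shows a simplified OpenMax-style recalibration of a classifier's output activations. It is an illustration of the general technique, not the authors' implementation: the feature representation, the tail size of 20, and the use of a single distance-based Weibull model per class are assumptions, and the full OpenMax procedure additionally ranks the top classes before reweighting.

import numpy as np
from scipy.stats import weibull_min

def fit_openmax(features, labels, tail_size=20):
    # For each known class, store its mean activation vector (MAV) and a
    # Weibull model fitted to the largest distances between that class's
    # training features and the MAV (the tail of hard examples).
    models = {}
    for c in np.unique(labels):
        class_feats = features[labels == c]
        mav = class_feats.mean(axis=0)
        dists = np.sort(np.linalg.norm(class_feats - mav, axis=1))
        shape, _, scale = weibull_min.fit(dists[-tail_size:], floc=0.0)
        models[c] = (mav, shape, scale)
    return models

def openmax_probabilities(activations, feature, models):
    # 'activations' are the pre-softmax class scores, ordered by class label.
    # Each score is reduced by the Weibull tail probability of the sample's
    # distance to that class's MAV; the removed mass goes to a synthetic
    # "unknown" class appended at the end, and a softmax is applied.
    classes = sorted(models)
    adjusted = np.empty(len(classes) + 1)
    unknown = 0.0
    for i, c in enumerate(classes):
        mav, shape, scale = models[c]
        d = np.linalg.norm(feature - mav)
        w = weibull_min.cdf(d, shape, loc=0.0, scale=scale)
        adjusted[i] = activations[i] * (1.0 - w)
        unknown += activations[i] * w
    adjusted[-1] = unknown
    exps = np.exp(adjusted - adjusted.max())
    return exps / exps.sum()  # last entry: probability of a novel class

In an incremental loop of this kind, a sample whose "unknown" probability dominates would be set aside as a candidate for a new class; once enough such samples accumulate, the generator replays pseudo-samples of the previous classes so the classifier can be retrained on old and new classes together, which is the pseudorehearsal step described above.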
Keywords: artificial neural networks; deep learning; generative neural networks; incremental learning; novelty detection; catastrophic interference

MDPI and ACS Style

Mellado, D.; Saavedra, C.; Chabert, S.; Torres, R.; Salas, R. Self-Improving Generative Artificial Neural Network for Pseudorehearsal Incremental Class Learning. Algorithms 2019, 12, 206.
