Peer-Review Record

Self-Improving Generative Artificial Neural Network for Pseudorehearsal Incremental Class Learning

Algorithms 2019, 12(10), 206;
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Received: 4 July 2019 / Revised: 5 September 2019 / Accepted: 9 September 2019 / Published: 1 October 2019

Round 1

Reviewer 1 Report

1. Could you add more datasets for validation such as ImageNet datasets?

2. Suggest comparing the training approach with other state-of-the-art methods to further justify the effectiveness of the proposed approach.


Author Response

We appreciate the reviewer's suggestions.

The article has been reviewed by a native English expert, and the entire document has been improved.

We have included experiments with CIFAR-10, which consists of 10 classes of low-resolution natural images. As future work extending this study, the model will be scaled to more complex datasets with higher-resolution images and more classes. We are testing the feasibility of a deep learning model that incrementally learns new data and classes without storing the original data; however, adjusting the generative model is very difficult.

A direct comparison with state-of-the-art methods is not meaningful because, in general, they use memory modules or reuse a percentage of the original data, which corresponds to the rehearsal approach. Therefore, the results reported in the literature are generally better than those of our approach, which incorporates pseudo-rehearsal and novel-class detection.



Reviewer 2 Report

I enjoyed reading your paper! The problem of new learning in multi-class classification is one that is both interesting and a bit wicked.

I enjoyed your discussion of SIGANN, but I see no advantage (and no performance advantage) compared to iCaRL.  Please justify why you would want to (nearly) ignore what was previously learned.  If you have a speed advantage over other algorithms and only minimal differences in performance, then please report that based on your computer configuration.

Also, this is a new model.  Please discuss hyperparameter tuning (grid search, heuristics, etc.).  Be careful to specify what Beta1 (decay) and Beta2 mean in Adam.

Would you please compare actual results of your model against results produced by Rebuffi and Li at a minimum?  It is important to have multiple comparison tables.  If you want users to consider SIGANN, then we need to see why we should do so: multi-class classification comparison metrics, speed, adaptability, etc.

You used the balanced-class version of EMNIST.  What are the effects of this decision?  Please justify.  It makes sense intuitively that you would want to do so, but what happens when you intake imbalanced classes?

You state that your approach could be a step towards achieving a self-training neural network. But why would such a network decide not to retain what it previously learned?  Explain this in the discussion.

1.  English.  Please consider use of an editor.  I uploaded some pages for your review.

2.  Evidence.  There are some places (noted on your paper) where evidence is required.  Please provide the appropriate citations.


Overall, I think this will fly just fine if you can provide some (additional) justification of its true performance and importance.


Comments for author File: Comments.pdf

Author Response

We really appreciate all the observations and suggestions given by the reviewer.

We have sent the paper to be reviewed in its entirety by an English expert, so the written English has been considerably improved.

We have incorporated all the reviewer's observations, marked in red text.

We are searching for a more biologically plausible model in which the data are not stored. Several methods in the literature already use a rehearsal approach. Nevertheless, we propose a combination of pseudo-rehearsal learning with a generative model and novel-class detection.

We tuned all the hyperparameters manually. Our computing capacity is limited, so we were not able to perform a grid search over the parameters.
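Regarding the reviewer's point on Beta1 and Beta2: as a minimal illustration (a standard-formulation sketch, not the implementation used in the paper), the two values are the exponential decay rates of Adam's first- and second-moment estimates of the gradient:

```python
# Minimal sketch of one Adam update for a single scalar parameter.
# This illustrates what beta1 and beta2 control; it is not the paper's code.
import math

def adam_step(m, v, grad, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Return updated moments and the step size for iteration t (t >= 1).

    beta1: decay rate of the first-moment estimate (running mean of gradients).
    beta2: decay rate of the second-moment estimate (running mean of squared
           gradients, i.e., the uncentered variance).
    """
    m = beta1 * m + (1 - beta1) * grad       # first-moment update
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment update
    m_hat = m / (1 - beta1 ** t)             # bias correction (moments start at 0)
    v_hat = v / (1 - beta2 ** t)
    update = lr * m_hat / (math.sqrt(v_hat) + eps)
    return m, v, update

m, v, update = adam_step(0.0, 0.0, grad=0.5, t=1)
```

Larger beta1/beta2 values give smoother, longer-memory estimates of the gradient mean and variance; the defaults shown (0.9 and 0.999) are the values commonly used in practice.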

We have tested our model with the CIFAR-10 dataset. The results show that our proposed framework gradually forgot the information. However, further work is required to extend this framework to more complex datasets, where the generative model can be changed.

Several models in the literature use previous samples in a rehearsal learning setting, or they use a memory module. Our goal in this article is to build a framework that does not require storing previous samples and is able to detect and adapt to novel samples. Our approach is more biologically plausible.

In order to control the experimental variables, we decided not to consider unbalanced data. Further work is required to extend the framework to unbalanced data.

We have introduced the observations given by the reviewer in the manuscript.

Round 2

Reviewer 2 Report

The authors addressed my concerns.
