Article
Peer-Review Record

Shrink and Eliminate: A Study of Post-Training Quantization and Repeated Operations Elimination in RNN Models

Information 2022, 13(4), 176; https://doi.org/10.3390/info13040176
by Nesma M. Rezk 1,*, Tomas Nordström 2 and Zain Ul-Abdin 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 28 February 2022 / Revised: 25 March 2022 / Accepted: 28 March 2022 / Published: 31 March 2022
(This article belongs to the Special Issue Artificial Intelligence on the Edge)

Round 1

Reviewer 1 Report

In this manuscript, Rezk et al. introduce their study of post-training quantization and repeated operations elimination in RNN models. The analyses are solid and the manuscript is well written. I would recommend accepting this manuscript for publication; however, I have one remaining question for the authors.

  1. The TIMIT dataset was used as the benchmark to compare the different RNN models. If a different dataset were used, would the results be the same? This needs to be justified.

Author Response

Dear reviewer,

Thanks a lot for reviewing my paper.

My answer to your question about the effect of changing the dataset on the study is:

Models that are more sensitive to quantization or to the delta method will remain more sensitive, and models that have a higher percentage of Eliminated Operations (EO) will keep this property. However, the increase in error resulting from applying different quantization/approximation configurations to the models will change. In addition, the percentage of Eliminated Operations may change, as it also depends on the similarity between consecutive elements in the dataset sequences.
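For illustration (this is not the paper's measurement code), the sketch below shows why the EO percentage depends on how similar consecutive sequence elements are. It assumes the standard delta-network thresholding, in which each feature keeps a reference value that is only updated when its change exceeds a threshold; the function name, threshold value, and toy data are hypothetical:

import numpy as np

def eliminated_operations(sequence, theta):
    # sequence: (timesteps, features) array of RNN inputs or hidden states.
    # theta: delta threshold; changes smaller than theta are treated as zero,
    # so the matching multiply-accumulates can be skipped.
    last_seen = sequence[0].copy()                 # per-feature reference values
    skipped, total = 0, 0
    for x in sequence[1:]:
        below = np.abs(x - last_seen) < theta      # negligible changes
        skipped += int(below.sum())                # operations eliminated this step
        total += x.size
        last_seen = np.where(below, last_seen, x)  # update only the changed features
    return skipped / total

# Smoother (more self-similar) sequences eliminate more operations:
rng = np.random.default_rng(0)
smooth = np.cumsum(rng.normal(0.0, 0.01, (100, 64)), axis=0)  # slowly varying
noisy = rng.normal(0.0, 1.0, (100, 64))                       # rapidly varying
print(eliminated_operations(smooth, 0.1))  # high EO fraction
print(eliminated_operations(noisy, 0.1))   # low EO fraction

So, for the same model and threshold, a dataset whose sequences change more slowly yields a higher EO percentage, which is why this figure may change with the dataset while the relative ordering of the models is expected to hold.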


Thanks again,

Nesma


Reviewer 2 Report

An interesting paper testing RNN model optimizations in order to reduce the complexity of the RNN architecture, its memory usage, and its training time without a loss in accuracy of the implemented RNN. Four RNN models based on LSTM, GRU, LiGRU, and SRU units are investigated for speech recognition on the TIMIT dataset from a compression point of view, using quantization combined with the delta networks method. A detailed study with useful conclusions for the optimal implementation of RNN architectures on edge devices.

Author Response

Dear reviewer, 

Thanks a lot for reviewing my paper.


Regards,

Nesma
