HW-ADAM: FPGA-Based Accelerator for Adaptive Moment Estimation
Round 1
Reviewer 1 Report
Strong aspects:
The paper proposes two hardware-based improvements to the ADAM platform, with very good results in processing speed and resource consumption.
Comments to the authors:
The percentage improvement in hardware resources should be given in the abstract for F-ADAM, as it is for E-ADAM.
Author Response
Thank you for the review! We have added the percentage of improvement to the abstract, as you can find in the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
The authors present the ADAM algorithm used in the training process of neural networks. I consider the paper well written, and I also appreciate the presented methodology for the deployment of ADAM: design, hardware implementation, accuracy validation, functionality validation, and efficiency validation. I think the paper has merit for publication, and I recommend its acceptance.
Unfortunately, I cannot contribute useful observations on the paper, as the time available for review was quite short; it was enough only to assess the originality and the quality of the presentation.
Author Response
Thank you for the reviews!
Reviewer 3 Report
1. Are Efficient-ADAM and Fast-ADAM complementary or independent algorithms? If they are complementary, then the comparisons in Table 1 and Table 2 are not fair: the sum of the E-ADAM and F-ADAM hardware resources should be compared with the hardware resources of the original ADAM algorithm.
2. It would be beneficial to present results of ASIC implementations of these two algorithms (it is difficult to achieve full optimization on an FPGA).
Author Response
Thank you so much for the review. Please see the attachment.
Author Response File: Author Response.pdf