Article
Peer-Review Record

CENNA: Cost-Effective Neural Network Accelerator

Electronics 2020, 9(1), 134; https://doi.org/10.3390/electronics9010134
by Sang-Soo Park and Ki-Seok Chung *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 23 December 2019 / Revised: 6 January 2020 / Accepted: 8 January 2020 / Published: 10 January 2020
(This article belongs to the Special Issue Hardware and Architecture II)

Round 1

Reviewer 1 Report

The paper describes convolutional neural networks (CNNs), which have very wide application. CNNs require a large amount of computation and use many processing layers. Convolution layers are the most important layers: they account for the major portion of the computation and require many memory accesses. To increase the efficiency of CNNs, the amount of computation has to be reduced. The authors propose a new neural network accelerator, which they call the Cost-Effective Neural Network Accelerator (CENNA).

The paper includes all the parts of a scientific paper: introduction, background and related works, description of the new method (algorithm), implementation, and conclusions. The related works are sufficient. The results of the tests are clearly presented in the form of tables, graphs, and figures. These forms are legible but not always intelligible, e.g., Figures 3, 4, and 7. Figure 4 includes some mistakes: the reviewer could not find an element A21+A22, and in the reviewer's opinion M2 should equal (A21+A22)*B11. The third level should include K elements and the fourth level should include C elements. In addition, capital and small letters are used to describe the same matrix elements (Figures 3 and 4).
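For reference, the product the reviewer expects matches the standard Strassen 2x2 scheme. The sketch below (plain Python; the scalar block names a11..b22 follow standard Strassen notation, not the labels in the paper's Figure 4) lists all seven products and confirms that M2 = (A21+A22)*B11:

def strassen_2x2(a11, a12, a21, a22, b11, b12, b21, b22):
    # Seven Strassen products replace the eight products of the naive scheme.
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11          # the product in question: M2 = (A21+A22)*B11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombination into the four output blocks.
    c11 = m1 + m4 - m5 + m7
    c12 = m3 + m5
    c21 = m2 + m4
    c22 = m1 - m2 + m3 + m6
    return c11, c12, c21, c22

If Figure 4 is meant to depict this scheme, the missing A21+A22 term and the M2 product should be corrected accordingly.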

The references are not cited in sequence: items 12 and 13 appear before items 2 to 11.

Some editorial mistakes have been marked in the text of the paper (see the attachment).

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The work "CENNA: Cost-Effective Neural Network Accelerator" presents a specialized IC architecture that can be of important use to accelerate the convolution operations in CNN algorithms. The manuscript is almost well-written, including the literature survey and detailed break-downs of their proposed techniques. I recommend only a few minor issues to be considered.

Major points:

A1, the function of the 2x2 unit is well explained. However, I recommend that the authors comment a little more on the scalability of the design; that is, how can the proposed design adapt to arbitrarily large matrix multiplications? A generic sketch of the standard blocking approach is given after these major points.

A2, if the power consumption of CENNA is truly lower than that of all existing hardware designs, the authors should point out its particular applications in "edge computing".
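To make the scalability question in A1 concrete: the standard way to reuse a fixed 2x2 multiplier for arbitrarily large operands is block decomposition, in which an NxN product is tiled into 2x2 sub-products whose partial results are accumulated. The sketch below is a generic illustration in plain Python, not the paper's design; the names mul_2x2 and blocked_matmul are hypothetical, and n is assumed even (odd dimensions would be zero-padded).

def mul_2x2(A, B):
    # Stand-in for the hardware 2x2 multiply unit.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def blocked_matmul(A, B, n):
    # Tile an n x n multiplication into 2x2 block products and accumulate.
    C = [[0] * n for _ in range(n)]
    for i in range(0, n, 2):
        for j in range(0, n, 2):
            for k in range(0, n, 2):
                a = [row[k:k + 2] for row in A[i:i + 2]]
                b = [row[j:j + 2] for row in B[k:k + 2]]
                p = mul_2x2(a, b)
                for di in range(2):
                    for dj in range(2):
                        C[i + di][j + dj] += p[di][dj]
    return C

In hardware, the same decomposition maps each 2x2 sub-product onto the fixed unit over successive cycles, so the unit size stays constant while the operand size grows.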

Minor points:

B1, Line 67, extra wording "concludes this study".

B2, Fig. 2 and Fig. 6 are repetitive; I suggest keeping only one of them.

B3, There is no Fig. 5.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
