Open Access Article

CENNA: Cost-Effective Neural Network Accelerator

Department of Electronics and Computer Engineering, Hanyang University, Seoul 04736, Korea
* Author to whom correspondence should be addressed.
Electronics 2020, 9(1), 134; https://doi.org/10.3390/electronics9010134
Received: 23 December 2019 / Revised: 6 January 2020 / Accepted: 8 January 2020 / Published: 10 January 2020
(This article belongs to the Special Issue Hardware and Architecture II)
Convolutional neural networks (CNNs) are widely adopted in various applications. State-of-the-art CNN models deliver excellent classification performance, but they require a large amount of computation and data exchange because they typically employ many processing layers. Among these layers, convolution layers, which carry out many multiplications and additions, account for a major portion of the computation and memory access. Therefore, reducing the amount of computation and memory access is key to high-performance CNNs. In this study, we propose a cost-effective neural network accelerator, named CENNA, whose hardware cost is reduced by employing a cost-centric matrix multiplication that combines Strassen's multiplication with naïve multiplication. Furthermore, the convolution method based on the proposed matrix multiplication can minimize data movement by reusing both the feature map and the convolution kernel without any additional control logic. In terms of throughput, power consumption, and silicon area, the efficiency of CENNA is up to 88 times higher than that of conventional designs for CNN inference.
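To illustrate the general idea the abstract describes, the following is a minimal software sketch (not the authors' hardware design) of a hybrid matrix multiplication: Strassen's algorithm trades one sub-multiplication for extra additions, so it is applied recursively on large blocks and falls back to naïve multiplication below a cutoff where Strassen's overhead is not worthwhile. The function names, the NAIVE_THRESHOLD value, and the power-of-two size assumption are all illustrative choices, not taken from the paper.

```python
import numpy as np

NAIVE_THRESHOLD = 4  # illustrative cutoff; below this size, naive wins

def naive_mul(A, B):
    """Schoolbook matrix multiplication: n^3 scalar multiplies."""
    n = A.shape[0]
    C = np.zeros((n, n), dtype=A.dtype)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

def strassen_mul(A, B):
    """Strassen's recursion: 7 sub-multiplications instead of 8.
    Assumes square matrices whose size is a power of two."""
    n = A.shape[0]
    if n <= NAIVE_THRESHOLD:
        return naive_mul(A, B)
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven Strassen products.
    M1 = strassen_mul(A11 + A22, B11 + B22)
    M2 = strassen_mul(A21 + A22, B11)
    M3 = strassen_mul(A11, B12 - B22)
    M4 = strassen_mul(A22, B21 - B11)
    M5 = strassen_mul(A11 + A12, B22)
    M6 = strassen_mul(A21 - A11, B11 + B12)
    M7 = strassen_mul(A12 - A22, B21 + B22)
    # Recombine into the four quadrants of the result.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

# Quick correctness check: integer inputs make Strassen's result exact.
A = np.random.randint(0, 10, (8, 8))
B = np.random.randint(0, 10, (8, 8))
assert np.array_equal(strassen_mul(A, B), naive_mul(A, B))
```

In a hardware setting, the appeal of such a hybrid is that the recursion depth (and hence the naïve-vs-Strassen split) can be fixed at design time, so no runtime control logic is needed to choose between the two methods.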
Keywords: convolutional neural network (CNN); neural network accelerator; neural processing unit (NPU); CNN inference
Park, S.-S.; Chung, K.-S. CENNA: Cost-Effective Neural Network Accelerator. Electronics 2020, 9, 134.

