Article

Implementing a Timing Error-Resilient and Energy-Efficient Near-Threshold Hardware Accelerator for Deep Neural Network Inference

Bridge Lab, Electrical and Computer Engineering, Utah State University, Logan, UT 84321, USA
* Author to whom correspondence should be addressed.
Academic Editors: Aatmesh Shrivastava and Andrea Acquaviva
J. Low Power Electron. Appl. 2022, 12(2), 32; https://doi.org/10.3390/jlpea12020032
Received: 16 November 2021 / Revised: 19 April 2022 / Accepted: 23 May 2022 / Published: 6 June 2022
(This article belongs to the Special Issue Hardware for Machine Learning)
Increasing processing requirements in the Artificial Intelligence (AI) realm have led to the emergence of domain-specific architectures for Deep Neural Network (DNN) applications. The Tensor Processing Unit (TPU), a DNN accelerator by Google, has emerged as a front runner, outclassing its contemporaries, CPUs and GPUs, in performance by 15×–30×. TPUs have been deployed in Google data centers to meet these performance demands. However, a TPU's performance enhancement is accompanied by substantial power consumption. To lower energy utilization, this paper proposes PREDITOR—a low-power TPU operating in the Near-Threshold Computing (NTC) realm. PREDITOR uses mathematical analysis to mitigate undetectable timing errors by boosting the voltage of selective multiplier-and-accumulator units at specific intervals, thereby enhancing the performance of the NTC TPU and ensuring high inference accuracy at low voltage. PREDITOR offers up to 3×–5× improved performance in comparison to leading-edge error mitigation schemes, with a minor loss in accuracy.
Keywords: near-threshold computing; NTC; deep neural network; DNN; accelerators; timing error; AI; tensor processing unit; TPU; multiply and accumulate; MAC; energy efficiency
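The abstract's core mechanism—boosting the supply voltage of only selected MAC units, and only at specific intervals—can be illustrated with a toy scheduler. This is a minimal sketch, not the paper's implementation: the voltages, the boost interval, and the `error_prone` predictor (which assumes large-magnitude weights exercise longer timing paths) are all illustrative assumptions.

```python
# Hedged sketch of selective, interval-based voltage boosting for MAC
# units at near-threshold voltage. All constants are illustrative, not
# values from the paper.

NTC_VDD = 0.55       # near-threshold supply (V), assumed for illustration
BOOST_VDD = 0.80     # boosted supply (V), assumed for illustration
BOOST_INTERVAL = 4   # boost the selected MACs every N cycles (assumed)

def error_prone(weight_magnitude, threshold=0.5):
    """Toy predictor (hypothetical): treat large-magnitude weights as
    more likely to excite long carry chains and cause timing errors."""
    return weight_magnitude > threshold

def schedule_voltages(weights, cycle):
    """Return a per-MAC supply voltage for this cycle: boost only the
    error-prone MACs, and only on boost-interval cycles."""
    boost_now = (cycle % BOOST_INTERVAL == 0)
    return [BOOST_VDD if (boost_now and error_prone(abs(w))) else NTC_VDD
            for w in weights]

weights = [0.1, 0.9, -0.7, 0.3]
print(schedule_voltages(weights, cycle=4))  # boost cycle: prone MACs raised
print(schedule_voltages(weights, cycle=5))  # normal cycle: all at NTC_VDD
```

The intended takeaway is only the control structure: energy is spent on the boosted voltage for a subset of units for a fraction of cycles, rather than raising the whole array's supply.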
MDPI and ACS Style

Gundi, N.D.; Pandey, P.; Roy, S.; Chakraborty, K. Implementing a Timing Error-Resilient and Energy-Efficient Near-Threshold Hardware Accelerator for Deep Neural Network Inference. J. Low Power Electron. Appl. 2022, 12, 32. https://doi.org/10.3390/jlpea12020032

AMA Style

Gundi ND, Pandey P, Roy S, Chakraborty K. Implementing a Timing Error-Resilient and Energy-Efficient Near-Threshold Hardware Accelerator for Deep Neural Network Inference. Journal of Low Power Electronics and Applications. 2022; 12(2):32. https://doi.org/10.3390/jlpea12020032

Chicago/Turabian Style

Gundi, Noel Daniel, Pramesh Pandey, Sanghamitra Roy, and Koushik Chakraborty. 2022. "Implementing a Timing Error-Resilient and Energy-Efficient Near-Threshold Hardware Accelerator for Deep Neural Network Inference" Journal of Low Power Electronics and Applications 12, no. 2: 32. https://doi.org/10.3390/jlpea12020032

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
