Article

Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic

1 Department of Information Engineering, Università di Pisa, Via Girolamo Caruso, 16, 56122 Pisa PI, Italy
2 Medical Microinstruments (MMI) S.p.A., Via Sterpulino, 3, 56121 Pisa PI, Italy
* Author to whom correspondence should be addressed.
Sensors 2020, 20(5), 1515; https://doi.org/10.3390/s20051515
Received: 25 January 2020 / Revised: 29 February 2020 / Accepted: 2 March 2020 / Published: 10 March 2020
Abstract: With the increasingly stringent real-time constraints placed on the use of Deep Neural Networks (DNNs), there is a need to rethink how information is represented. A very challenging path is to employ an encoding that allows fast processing and a hardware-friendly representation of the information. Among the alternatives proposed to the IEEE 754 standard for the floating-point representation of real numbers, the recently introduced Posit format has been shown, theoretically, to be very promising in satisfying these requirements. However, in the absence of proper hardware support for this novel type, its evaluation can only be carried out through software emulation. While waiting for the widespread availability of Posit Processing Units (PPUs, the Posit equivalent of the Floating Point Unit (FPU)), we can already exploit the Posit representation and the currently available Arithmetic-Logic Unit (ALU) to speed up DNNs by manipulating the low-level bit-string representation of Posits. As a first step, in this paper we present new arithmetic properties of the Posit number system, with a focus on the configuration with 0 exponent bits. In particular, we propose a new class of Posit operators, called L1 operators, consisting of fast, approximated versions of existing arithmetic operations or functions (e.g., the hyperbolic tangent (TANH) and the exponential linear unit (ELU)) that use only integer arithmetic. These operators have very interesting properties: (i) they are faster to evaluate than their exact counterparts, with negligible accuracy degradation; (ii) they allow an efficient ALU emulation of a number of Posit operations; and (iii) they can be vectorized using existing ALU vector instructions (such as the Scalable Vector Extension of ARM CPUs or the Advanced Vector Extensions of Intel CPUs). As a second step, we test the proposed activation functions on Posit-based DNNs, showing that Posits from 16 bits down to 10 bits are an exact replacement for 32-bit floats, while 8-bit Posits are an interesting alternative to 32-bit floats: their accuracy is slightly lower, but their high speed and low storage requirements are very appealing (leading to a lower bandwidth demand and more cache-friendly code). Finally, we point out that small Posits (i.e., up to 14 bits long) are very interesting until PPUs become widespread, since Posit operations can be tabulated in a very efficient way (see details in the text).
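
To give a concrete flavour of the L1 operators mentioned above, the sketch below shows the kind of integer-only manipulation involved, using the well-known fast sigmoid approximation for Posits with 0 exponent bits: flipping the sign bit of the bit pattern and shifting it right by two positions yields (the bit pattern of) a good approximation of the sigmoid of the original value, entirely on the ALU. This is an illustrative C++ sketch, not the code of the paper: the names fast_sigmoid and posit8_to_double are ours, and the small decoder is included only to verify the approximation against the exact sigmoid over all 8-bit Posit values. The paper's fast TANH and ELU operators are built from identities such as tanh(x) = 2*sigmoid(2x) - 1 together with further bit-level tricks not reproduced here.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <limits>

// L1-style fast sigmoid for an 8-bit Posit with 0 exponent bits:
// flip the sign bit of the raw bit pattern, then shift right by two.
// The whole operation runs on the integer ALU, with no decoding at all.
uint8_t fast_sigmoid(uint8_t p) {
    return static_cast<uint8_t>((p ^ 0x80u) >> 2);
}

// Reference decoder (posit<8,0> bit pattern -> double), used here only to
// check the approximation; the fast operator above never needs it.
double posit8_to_double(uint8_t p) {
    if (p == 0x00) return 0.0;
    if (p == 0x80) return std::numeric_limits<double>::quiet_NaN();  // NaR
    const bool neg = (p & 0x80) != 0;
    const uint8_t bits = neg ? static_cast<uint8_t>(~p + 1) : p;  // two's complement
    // Regime: run of identical bits after the sign, terminated by the opposite bit.
    const int r0 = (bits >> 6) & 1;
    int i = 6, run = 0;
    while (i >= 0 && ((bits >> i) & 1) == r0) { ++run; --i; }
    const int k = r0 ? (run - 1) : -run;  // regime value
    --i;  // skip the regime terminator (if present)
    // With 0 exponent bits, the remaining bits are all fraction bits.
    double frac = 0.0, w = 0.5;
    for (; i >= 0; --i, w *= 0.5)
        if ((bits >> i) & 1) frac += w;
    const double v = std::ldexp(1.0 + frac, k);  // (1 + f) * 2^k (useed = 2)
    return neg ? -v : v;
}

int main() {
    double max_abs_err = 0.0;
    for (int i = 0; i < 256; ++i) {
        const uint8_t p = static_cast<uint8_t>(i);
        if (p == 0x80) continue;  // skip Not-a-Real
        const double x      = posit8_to_double(p);
        const double approx = posit8_to_double(fast_sigmoid(p));
        const double exact  = 1.0 / (1.0 + std::exp(-x));
        max_abs_err = std::max(max_abs_err, std::fabs(approx - exact));
    }
    std::printf("max |fast sigmoid - exact sigmoid| over all posit<8,0> values: %g\n",
                max_abs_err);
    return 0;
}

In our quick check the maximum absolute error reported by this program stays in the order of a few hundredths, which gives an idea of the negligible accuracy degradation the abstract refers to.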
Keywords: alternative representations to float numbers; posit arithmetic; Deep Neural Networks (DNNs); neural network activation functions
MDPI and ACS Style

Cococcioni, M.; Rossi, F.; Ruffaldi, E.; Saponara, S. Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic. Sensors 2020, 20, 1515. https://doi.org/10.3390/s20051515

AMA Style

Cococcioni M, Rossi F, Ruffaldi E, Saponara S. Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic. Sensors. 2020; 20(5):1515. https://doi.org/10.3390/s20051515

Chicago/Turabian Style

Cococcioni, Marco, Federico Rossi, Emanuele Ruffaldi, and Sergio Saponara. 2020. "Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic" Sensors 20, no. 5: 1515. https://doi.org/10.3390/s20051515

