Article

Benchmarking In-Sensor Machine Learning Computing: An Extension to the MLCommons-Tiny Suite

by Fabrizio Maria Aymone and Danilo Pietro Pau *
System Research and Applications, STMicroelectronics, Business Center Colleoni, Building Andromeda 3, 7th Floor, Via Cardano 20, 20864 Agrate Brianza, Italy
* Author to whom correspondence should be addressed.
Information 2024, 15(11), 674; https://doi.org/10.3390/info15110674
Submission received: 28 August 2024 / Revised: 3 October 2024 / Accepted: 22 October 2024 / Published: 28 October 2024

Abstract

This paper proposes a new benchmark specifically designed for in-sensor digital machine learning computing, targeting ultra-low embedded memory footprints. With the exponential growth of edge devices, efficient local processing is essential to mitigate the economic costs, latency, and privacy concerns associated with centralized cloud processing. Emerging intelligent sensors, which integrate computing assets for neural network inference in the same package that hosts the sensing elements, present new challenges due to their limited memory resources and computational capabilities. This benchmark evaluates models trained with Quantization-Aware Training (QAT) and compares their performance with Post-Training Quantization (PTQ) across three use cases: Human Activity Recognition (HAR) on the SHL dataset, Physical Activity Monitoring (PAM) on the PAMAP2 dataset, and surface electromyography (sEMG) regression on the NINAPRO DB8 dataset. The results demonstrate the effectiveness of QAT over PTQ in most scenarios, highlighting the potential for deploying advanced AI models on highly resource-constrained sensors. The INT8 versions of the models consistently outperformed their FP32 counterparts in terms of memory and latency reductions, except for the activations of the CNN. The CNN model exhibited lower memory usage and latency than its Dense counterpart, allowing it to meet the stringent 8 KiB data RAM and 32 KiB program RAM limits of the ISPU. The TCN model proved too large to fit within the memory constraints of the ISPU, primarily because of its larger parameter count, designed for processing more complex signals such as EMG. This benchmark aims to guide the development of efficient AI solutions for in-sensor machine learning computing, fostering innovation in Edge AI benchmarking efforts such as those conducted by the MLCommons-Tiny working group.
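To make the QAT-versus-PTQ comparison above concrete, the following is a minimal sketch of the two quantization routes using TensorFlow and the TensorFlow Model Optimization Toolkit. It is not the authors' benchmark code: the network, data shapes, and training settings are placeholder assumptions chosen only to illustrate how each route produces a full-integer INT8 model suitable for memory-constrained in-sensor targets.

```python
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot


def build_model(n_features=384, n_classes=8):
    # Hypothetical small fully connected network, a stand-in for the
    # "Dense" HAR/PAM models evaluated in the benchmark.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])


def to_int8_tflite(model, rep_data):
    # Full-integer conversion shared by both routes: weights and
    # activations become INT8, matching integer-only in-sensor compute.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = rep_data
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()


# Placeholder data standing in for windowed sensor features.
x_train = np.random.rand(256, 384).astype(np.float32)
y_train = np.random.randint(0, 8, size=(256,))


def rep_data():
    # Calibration samples used to estimate activation ranges.
    for sample in x_train[:100]:
        yield [sample[None, :]]


# Route 1: PTQ -- train in FP32, then quantize with calibration data only.
fp32 = build_model()
fp32.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
fp32.fit(x_train, y_train, epochs=1, verbose=0)
ptq_bytes = to_int8_tflite(fp32, rep_data)

# Route 2: QAT -- insert fake-quantization ops and train so the weights
# adapt to INT8 precision before conversion.
qat = tfmot.quantization.keras.quantize_model(build_model())
qat.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
qat.fit(x_train, y_train, epochs=1, verbose=0)
qat_bytes = to_int8_tflite(qat, rep_data)

print(f"PTQ model: {len(ptq_bytes)} B, QAT model: {len(qat_bytes)} B")
```

In the paper's setting, the resulting INT8 models would then be measured against the ISPU's 8 KiB data RAM and 32 KiB program RAM budgets, which is where QAT's accuracy advantage at equal footprint becomes relevant.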
Keywords: edge artificial intelligence; in-sensor machine learning computing; digital signal processing; intelligent signal processing unit; tiny sensors; MLCommons-Tiny working group

Share and Cite

MDPI and ACS Style

Aymone, F.M.; Pau, D.P. Benchmarking In-Sensor Machine Learning Computing: An Extension to the MLCommons-Tiny Suite. Information 2024, 15, 674. https://doi.org/10.3390/info15110674

AMA Style

Aymone FM, Pau DP. Benchmarking In-Sensor Machine Learning Computing: An Extension to the MLCommons-Tiny Suite. Information. 2024; 15(11):674. https://doi.org/10.3390/info15110674

Chicago/Turabian Style

Aymone, Fabrizio Maria, and Danilo Pietro Pau. 2024. "Benchmarking In-Sensor Machine Learning Computing: An Extension to the MLCommons-Tiny Suite" Information 15, no. 11: 674. https://doi.org/10.3390/info15110674

APA Style

Aymone, F. M., & Pau, D. P. (2024). Benchmarking In-Sensor Machine Learning Computing: An Extension to the MLCommons-Tiny Suite. Information, 15(11), 674. https://doi.org/10.3390/info15110674

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
