Advances in Tiny Machine Learning (TinyML): Applications, Models, and Implementation

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "AI Systems: Theory and Applications".

Deadline for manuscript submissions: closed (15 November 2025) | Viewed by 9102

Special Issue Editors


Guest Editor
Department of Computer Science and Engineering, University of Bologna, 40126 Bologna, Italy
Interests: machine learning; human–computer interaction; tinyML; digital sustainability

Special Issue Information

Dear Colleagues,

Tiny machine learning (TinyML) represents a paradigm shift in the field of machine learning, where models are deployed directly onto ultra-low-power, resource-constrained devices such as microcontrollers and edge sensors. Unlike traditional machine learning approaches that rely on centralized processing power, TinyML leverages the capabilities of edge devices to perform inference tasks locally, enabling real-time decision-making and autonomous functionality without constant reliance on cloud connectivity. This Special Issue, “Advances in Tiny Machine Learning (TinyML): Applications, Models, and Implementation”, explores this burgeoning field with the aim of elucidating its latest advancements, challenges, and applications.

The topics covered in this Special Issue include, but are not limited to:

  • Innovative applications and use cases across various domains;
  • Hardware architecture optimization;
  • Interpreters and code generator frameworks;
  • Model compression;
  • Efficient training methods;
  • Federated learning approaches;
  • Energy-efficient inference methods;
  • Practical deployment strategies for TinyML devices;
  • Efficient communication protocols for TinyML;
  • Edge computing in TinyML applications;
  • Real-time processing in TinyML;
  • Integration of 5G/6G in TinyML applications;
  • Case studies of TinyML applications.

This Special Issue aims to serve as a platform for researchers, engineers, and practitioners to disseminate their cutting-edge research findings, exchange ideas, and foster collaborations in the field of TinyML. By showcasing state-of-the-art methodologies, practical implementations, and emerging trends, it endeavours to advance the understanding and adoption of TinyML, thereby catalysing its integration into diverse application domains.

Dr. Giovanni Delnevo
Prof. Dr. Pietro Manzoni
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • TinyML
  • resource-constrained machine learning
  • federated learning
  • model distillation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

34 pages, 5349 KB  
Article
Online On-Device Adaptation of Linguistic Fuzzy Models for TinyML Systems
by Javier Martín-Moreno, Francisco A. Márquez, Ana M. Roldán and Antonio Peregrín
AI 2025, 6(12), 325; https://doi.org/10.3390/ai6120325 - 12 Dec 2025
Viewed by 1203
Abstract
Background: Many everyday electronic devices incorporate embedded computers, allowing them to offer advanced functions such as Internet connectivity or the execution of artificial intelligence algorithms, giving rise to Tiny Machine Learning (TinyML) and Edge AI applications. In these contexts, models must be both efficient and explainable, especially when they are intended for systems that must be understood, interpreted, validated, or certified by humans, in contrast to less interpretable approaches. Among these algorithms, linguistic fuzzy systems have traditionally been valued for their interpretability and their ability to represent uncertainty at low computational cost, making them a relevant choice for embedded intelligence. However, in dynamic and changing environments, it is essential that these models can continuously adapt. While there are fuzzy approaches capable of adapting to changing conditions, few studies explicitly address their adaptation and optimization on resource-constrained devices. Methods: This paper addresses this challenge and presents a lightweight evolutionary strategy, based on a micro genetic algorithm and adapted to constrained hardware, for the online, on-device tuning of linguistic (Mamdani-type) fuzzy models while preserving their interpretability. Results: A prototype implementation on an embedded platform demonstrates the feasibility of the approach and highlights its potential to bring explainable self-adaptation to TinyML and Edge AI scenarios. Conclusions: The main contribution lies in showing how an appropriate integration of carefully chosen tuning mechanisms and model structure enables efficient on-device adaptation under severe resource constraints, making continuous linguistic adjustment feasible within TinyML systems.
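As a rough illustration of the micro-genetic-algorithm idea summarized in this abstract (not the authors' implementation), the sketch below tunes the centers of three triangular membership functions of a toy Mamdani-style model using a five-individual elitist micro-GA with the characteristic restart-on-convergence step. All names, rule structure, and parameter values are invented for the example.

```python
import random

def tri(x, a, b, c):
    """Triangular membership degree of x for the triangle (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_out(x, centers):
    """Three Mamdani-style rules with fixed consequents and
    weighted-average defuzzification."""
    consequents = (0.0, 0.5, 1.0)
    degrees = [tri(x, c - 0.6, c, c + 0.6) for c in centers]
    total = sum(degrees)
    return sum(d * q for d, q in zip(degrees, consequents)) / total if total else 0.5

def mse(centers, data):
    return sum((fuzzy_out(x, centers) - y) ** 2 for x, y in data) / len(data)

def micro_ga(data, pop_size=5, generations=60, seed=1):
    """Micro-GA: tiny population, strong elitism, restart when converged."""
    rng = random.Random(seed)
    random_ind = lambda: sorted(rng.random() for _ in range(3))
    population = [random_ind() for _ in range(pop_size)]
    best = min(population, key=lambda c: mse(c, data))
    for _ in range(generations):
        children = []
        for _ in range(pop_size - 1):
            mate = rng.choice(population)
            # blend crossover with the elite plus small Gaussian mutation
            child = sorted((b + m) / 2 + rng.gauss(0, 0.05)
                           for b, m in zip(best, mate))
            children.append([min(1.0, max(0.0, g)) for g in child])
        population = [best] + children
        candidate = min(population, key=lambda c: mse(c, data))
        if mse(candidate, data) < mse(best, data):
            best = candidate
        # restart: if the population has collapsed onto the elite, reseed it
        if max(abs(g - e) for ind in population for g, e in zip(ind, best)) < 1e-3:
            population = [best] + [random_ind() for _ in range(pop_size - 1)]
    return best

data = [(i / 10, i / 10) for i in range(11)]  # toy target: identity on [0, 1]
tuned = micro_ga(data)
```

Because only the membership-function centers move while the rule base stays fixed, the tuned model keeps its linguistic structure, which is the interpretability-preserving property the abstract emphasizes.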

14 pages, 7768 KB  
Article
On the Deployment of Edge AI Models for Surface Electromyography-Based Hand Gesture Recognition
by Andres Gomez-Bautista, Diego Mendez, Catalina Alvarado-Rojas, Ivan F. Mondragon and Julian D. Colorado
AI 2025, 6(6), 107; https://doi.org/10.3390/ai6060107 - 22 May 2025
Cited by 3 | Viewed by 2487
Abstract
Background: Robotic-based therapy has emerged as a prominent treatment modality for the rehabilitation of hand function impairment resulting from strokes. Aim: In this context, feature engineering becomes particularly important for estimating the intention of upper limb movements with machine learning models, especially when an embedded on-board hardware implementation is expected, due to strong computational, energy, and latency constraints. Methods: The present study details the implementation of four cutting-edge feature engineering techniques (random forest, minimum redundancy maximum relevance (MRMR), Davies–Bouldin index, and t-tests) in the context of machine learning algorithms (neural networks and bagged forests) deployed within a resource-constrained autonomous embedded system. Results: The findings of this study demonstrate that by assigning relative importance to features and removing redundant or superfluous information, it is possible to enhance the system’s execution by up to 31% while preserving the model’s performance at a comparable level. Conclusions: This work proves the usefulness of TinyML as an approach to properly integrate AI into constrained edge embedded systems to support complex strategies such as the proposed hand gesture recognition for the smart rehabilitation of post-stroke patients.
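One of the four ranking criteria named above, the t-test, can be sketched as follows; this is an illustrative stand-in for the paper's pipeline, with synthetic two-class "sEMG feature" data and an invented feature count. Features are scored by a Welch-style t-statistic and only the most discriminative ones are kept, which is what shrinks the model's input and speeds up on-device inference.

```python
import math
import random

def t_stat(a, b):
    """Welch-style t-statistic between two samples of one feature."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return abs(ma - mb) / math.sqrt(va / len(a) + vb / len(b))

rng = random.Random(0)
n = 50
# Four synthetic features per window: the first two separate the two
# gesture classes; the last two are pure noise.
class0 = [[rng.gauss(0, 1), rng.gauss(0, 1),
           rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(n)]
class1 = [[rng.gauss(3, 1), rng.gauss(2, 1),
           rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(n)]

# Score each feature column, then keep the top two.
scores = [t_stat([row[j] for row in class0],
                 [row[j] for row in class1]) for j in range(4)]
keep = sorted(range(4), key=lambda j: scores[j], reverse=True)[:2]
```

The discriminative features receive t-statistics an order of magnitude above the noise features, so `keep` recovers them; a classifier trained on `keep` alone then runs on a fraction of the original input.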

16 pages, 526 KB  
Article
Resource-Efficient Clustered Federated Learning Framework for Industry 4.0 Edge Devices
by Atallo Kassaw Takele and Balázs Villányi
AI 2025, 6(2), 30; https://doi.org/10.3390/ai6020030 - 6 Feb 2025
Cited by 7 | Viewed by 2937
Abstract
Industry 4.0 aggregates recent technologies, including artificial intelligence, big data, edge computing, and the Internet of Things (IoT), to enhance efficiency and real-time decision-making. Industry 4.0 data analytics demands a privacy-focused approach, and federated learning offers a viable solution for such scenarios: each edge device trains the model locally on its own collected data and shares only the model updates with the server, without ever sharing the raw data. However, the communication and computational costs of sharing model updates are major bottlenecks for resource-constrained edge devices. This study introduces a representative-based parameter-sharing framework that aims to enhance the efficiency of federated learning in the Industry 4.0 environment. The framework begins with the server distributing an initial model to the edge devices, which train it locally and send the updated parameters back to the server for aggregation. To reduce communication and computational costs, the framework identifies groups of devices with similar parameter distributions and only sends updates from the most resourceful and best-performing device in each group, termed the cluster head, to the server. A backup cluster head is also elected to ensure reliability. Clustering is performed based on the devices’ parameter distributions and data characteristics. Moreover, the server incorporates randomly selected past aggregated parameters into the current aggregation through weighted averaging, where more recent parameters are given greater weight to enhance model performance. A comparative experimental evaluation against the state of the art on a testbed dataset demonstrates promising results, minimizing computational cost while preserving prediction performance and ultimately enhancing data analytics on edge devices in industrial environments.
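A minimal sketch of the aggregation scheme summarized above, assuming Euclidean distance as the parameter-similarity measure for clustering and a fixed recency weight for blending in a past aggregate; all function names, thresholds, and constants are hypothetical, and device resources/performance (which the paper uses to pick cluster heads) are not modeled here.

```python
import random

def dist(p, q):
    """Euclidean distance between two parameter vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def cluster(params, threshold=1.0):
    """Greedy clustering: a device joins the first head within `threshold`
    of its parameters, otherwise it becomes a new cluster head."""
    heads, groups = [], {}
    for i, p in enumerate(params):
        for h in heads:
            if dist(p, params[h]) < threshold:
                groups[h].append(i)
                break
        else:
            heads.append(i)
            groups[i] = [i]
    return heads, groups

def aggregate(head_updates, history, rng, recency_weight=0.8):
    """Average the cluster-head updates only, then blend in one randomly
    chosen past aggregate, weighting the current round more heavily."""
    current = [sum(col) / len(col) for col in zip(*head_updates)]
    if history:
        past = rng.choice(history)
        current = [recency_weight * c + (1 - recency_weight) * p
                   for c, p in zip(current, past)]
    history.append(current)
    return current

# Four devices whose parameter vectors form two natural groups.
params = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
heads, groups = cluster(params)
history = []
rng = random.Random(0)
round1 = aggregate([params[h] for h in heads], history, rng)
round2 = aggregate([params[h] for h in heads], history, rng)
```

With four devices collapsed into two clusters, only two updates travel to the server per round instead of four, which is the communication saving the framework targets.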
