Advances in Tiny Machine Learning (TinyML): Applications, Models, and Implementation

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "AI Systems: Theory and Applications".

Deadline for manuscript submissions: 15 November 2025

Special Issue Editors


Guest Editor
Department of Computer Science and Engineering, University of Bologna, 40126 Bologna, Italy
Interests: machine learning; human–computer interaction; tinyML; digital sustainability

Special Issue Information

Dear Colleagues,

Tiny machine learning (TinyML) represents a paradigm shift in the field of machine learning, in which models are deployed directly onto ultra-low-power, resource-constrained devices such as microcontrollers and edge sensors. Unlike traditional machine learning approaches that rely on centralized processing power, TinyML leverages the capabilities of edge devices to perform inference tasks locally, enabling real-time decision-making and autonomous functionality without constant reliance on cloud connectivity. This Special Issue, “Advances in Tiny Machine Learning (TinyML): Applications, Models, and Implementation”, explores this burgeoning field, elucidating the latest advancements, challenges, and applications within this domain.

The topics covered in this Special Issue include, but are not limited to:

  • Innovative applications and use cases across various domains;
  • Hardware architecture optimization;
  • Interpreters and code generator frameworks;
  • Model compression;
  • Efficient training methods;
  • Federated learning approaches;
  • Energy-efficient inference methods;
  • Practical deployment strategies for TinyML devices;
  • Efficient communication protocols for TinyML;
  • Edge computing in TinyML applications;
  • Real-time processing in TinyML;
  • Integration of 5G/6G in TinyML applications;
  • Case studies of TinyML applications.

This Special Issue aims to serve as a platform for researchers, engineers, and practitioners to disseminate their cutting-edge research findings, exchange ideas, and foster collaborations in the field of TinyML. By showcasing state-of-the-art methodologies, practical implementations, and emerging trends, it endeavours to advance the understanding and adoption of TinyML, thereby catalysing its integration into diverse application domains.

Dr. Giovanni Delnevo
Prof. Dr. Pietro Manzoni
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • TinyML
  • resource-constrained machine learning
  • federated learning
  • model distillation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (1 paper)


Research

16 pages, 526 KiB  
Article
Resource-Efficient Clustered Federated Learning Framework for Industry 4.0 Edge Devices
by Atallo Kassaw Takele and Balázs Villányi
AI 2025, 6(2), 30; https://doi.org/10.3390/ai6020030 - 6 Feb 2025
Abstract
Industry 4.0 integrates recent technologies, including artificial intelligence, big data, edge computing, and the Internet of Things (IoT), to enhance efficiency and real-time decision-making. Industry 4.0 data analytics demands a privacy-focused approach, and federated learning offers a viable solution for such scenarios: each edge device trains the model locally on its own collected data and shares only the model updates with the server, without the need to share the raw data itself. However, the communication and computational costs of sharing model updates remain major bottlenecks for resource-constrained edge devices. This study introduces a representative-based parameter-sharing framework that aims to enhance the efficiency of federated learning in the Industry 4.0 environment. The framework begins with the server distributing an initial model to the edge devices, which then train it locally and send updated parameters back to the server for aggregation. To reduce communication and computational costs, the framework identifies groups of devices with similar parameter distributions and sends to the server only the updates from the most resourceful and best-performing device in each group, termed the cluster head. A backup cluster head is also elected to ensure reliability. Clustering is performed based on each device's parameter distribution and data characteristics. Moreover, the server incorporates randomly selected past aggregated parameters into the current aggregation through weighted averaging, where more recent parameters are given greater weight to enhance model performance. A comparative experimental evaluation against the state of the art on a testbed dataset demonstrates promising results, minimizing computational cost while preserving prediction performance and ultimately enhancing data analytics on edge devices in industrial environments.
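As a rough sketch of the aggregation scheme the abstract describes — clustering devices by parameter similarity, electing a cluster head and backup, and blending in a past aggregate with recency weighting — the following Python illustration may help. The greedy distance-based clustering, the `score` field, and the `w_current` decay weight are our own illustrative assumptions, not the authors' implementation:

```python
import random
from statistics import fmean

def cluster_by_params(devices, threshold=0.5):
    """Greedily group devices whose parameter vectors lie within
    `threshold` mean absolute difference of a cluster's first member."""
    clusters = []
    for dev in devices:
        for cluster in clusters:
            ref = cluster[0]["params"]
            dist = fmean(abs(a - b) for a, b in zip(dev["params"], ref))
            if dist <= threshold:
                cluster.append(dev)
                break
        else:
            clusters.append([dev])
    return clusters

def elect_heads(cluster):
    """Highest-scoring device becomes cluster head; runner-up is the backup."""
    ranked = sorted(cluster, key=lambda d: d["score"], reverse=True)
    return ranked[0], ranked[1] if len(ranked) > 1 else None

def aggregate(head_updates, history, w_current=0.8, rng=random):
    """Average the cluster-head updates, then blend in one randomly chosen
    past aggregate, giving the more recent parameters the greater weight."""
    n = len(head_updates[0])
    current = [fmean(u[i] for u in head_updates) for i in range(n)]
    if history:
        past = rng.choice(history)
        current = [w_current * c + (1 - w_current) * p
                   for c, p in zip(current, past)]
    history.append(current)
    return current
```

In this sketch, only cluster heads upload parameters, so communication scales with the number of clusters rather than the number of devices, which is the cost reduction the framework targets.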
