AI
  • Article
  • Open Access

12 December 2025

Online On-Device Adaptation of Linguistic Fuzzy Models for TinyML Systems

1 Centre for Advanced Studies in Physics, Mathematics and Computing (CEAFMC), University of Huelva, Huelva 21007, Spain
2 Department of Information Technologies, University of Huelva, Huelva 21007, Spain
3 Andalusian Institute of Data Science and Computational Intelligence (DaSCI), University of Huelva, Huelva 21007, Spain
* Authors to whom correspondence should be addressed.
This article belongs to the Special Issue Advances in Tiny Machine Learning (TinyML): Applications, Models, and Implementation

Abstract

Background: Many everyday electronic devices incorporate embedded computers, allowing them to offer advanced functions such as Internet connectivity or the execution of artificial intelligence algorithms, giving rise to Tiny Machine Learning (TinyML) and Edge AI applications. In these contexts, models must be both efficient and explainable, especially when they are intended for systems that must be understood, interpreted, validated, or certified by humans, in contrast to less interpretable approaches. Among such algorithms, linguistic fuzzy systems have traditionally been valued for their interpretability and their ability to represent uncertainty at low computational cost, making them a relevant choice for embedded intelligence. However, in dynamic and changing environments, it is essential that these models can adapt continuously. While fuzzy approaches capable of adapting to changing conditions exist, few studies explicitly address their adaptation and optimization on resource-constrained devices. Methods: This paper focuses on this challenge and presents a lightweight evolutionary strategy, based on a micro genetic algorithm and adapted to constrained hardware, for the online on-device tuning of linguistic (Mamdani-type) fuzzy models while preserving their interpretability. Results: A prototype implementation on an embedded platform demonstrates the feasibility of the approach and highlights its potential to bring explainable self-adaptation to TinyML and Edge AI scenarios. Conclusions: The main contribution lies in showing how an appropriate integration of carefully chosen tuning mechanisms and model structure enables efficient on-device adaptation under severe resource constraints, making continuous linguistic adjustment feasible within TinyML systems.
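To make the idea in the Methods statement concrete, the sketch below shows one possible shape of a micro-genetic-algorithm adaptation step on a microcontroller-class device: a very small population of candidate membership-function centers for a toy single-input Mamdani-style model is evolved against a short buffer of recent samples. This is not the authors' implementation; the model size, population size, fitness measure, and all names (tri, infer, mse, adapt) are illustrative assumptions in plain C.

```c
/* Minimal sketch (assumed, not the paper's code): a micro-GA tunes the
 * centers of triangular membership functions of a toy Mamdani-style model
 * against a small buffer of recent (x, y) samples kept on-device. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N_SETS   3   /* linguistic terms: low, medium, high        */
#define POP_SIZE 5   /* micro-GA: deliberately tiny population     */
#define N_GENS   20  /* generations per adaptation step            */
#define BUF_LEN  16  /* recent samples buffered for fitness        */

/* Triangular membership function with peak c and half-width w. */
static double tri(double x, double c, double w) {
    double d = fabs(x - c);
    return d >= w ? 0.0 : 1.0 - d / w;
}

/* Toy single-input model: rule "if x is term i then y is out[i]",
 * defuzzified as a membership-weighted average of output singletons. */
static double infer(double x, const double centers[N_SETS]) {
    static const double out[N_SETS] = {0.0, 0.5, 1.0}; /* fixed consequents */
    double num = 0.0, den = 1e-9;
    for (int i = 0; i < N_SETS; i++) {
        double mu = tri(x, centers[i], 0.5);
        num += mu * out[i];
        den += mu;
    }
    return num / den;
}

/* Fitness: mean squared error over the buffered samples. */
static double mse(const double centers[N_SETS],
                  const double xs[BUF_LEN], const double ys[BUF_LEN]) {
    double e = 0.0;
    for (int k = 0; k < BUF_LEN; k++) {
        double d = infer(xs[k], centers) - ys[k];
        e += d * d;
    }
    return e / BUF_LEN;
}

/* One adaptation step: evolve the membership-function centers in place,
 * seeding the population around the current model (elitist micro-GA). */
static void adapt(double centers[N_SETS],
                  const double xs[BUF_LEN], const double ys[BUF_LEN]) {
    double pop[POP_SIZE][N_SETS];
    for (int p = 0; p < POP_SIZE; p++)
        for (int i = 0; i < N_SETS; i++)
            pop[p][i] = centers[i]
                      + (p ? 0.1 * ((rand() / (double)RAND_MAX) - 0.5) : 0.0);

    int best = 0;
    for (int g = 0; g < N_GENS; g++) {
        best = 0;                              /* keep the elite individual */
        for (int p = 1; p < POP_SIZE; p++)
            if (mse(pop[p], xs, ys) < mse(pop[best], xs, ys)) best = p;
        for (int p = 0; p < POP_SIZE; p++) {   /* recombine with the elite */
            if (p == best) continue;
            int mate = rand() % POP_SIZE;
            for (int i = 0; i < N_SETS; i++)
                pop[p][i] = 0.5 * (pop[best][i] + pop[mate][i])
                          + 0.05 * ((rand() / (double)RAND_MAX) - 0.5);
        }
    }
    for (int i = 0; i < N_SETS; i++) centers[i] = pop[best][i];
}

int main(void) {
    double centers[N_SETS] = {0.0, 0.5, 1.0};  /* initial linguistic partition */
    double xs[BUF_LEN], ys[BUF_LEN];
    for (int k = 0; k < BUF_LEN; k++) {        /* synthetic drifted data       */
        xs[k] = k / (double)(BUF_LEN - 1);
        ys[k] = xs[k] * xs[k];
    }
    printf("MSE before: %f\n", mse(centers, xs, ys));
    adapt(centers, xs, ys);
    printf("MSE after:  %f\n", mse(centers, xs, ys));
    return 0;
}
```

Because only a handful of real-valued parameters are evolved and the fitness buffer is short, a loop like this fits in a few kilobytes of RAM, which is the kind of severe resource constraint the abstract refers to; the paper's actual tuning mechanism and encoding may differ.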
