Article

TinySLFL: A Flash-Endurance-Aware Federated Edge Learning Framework with Layer-Wise Delayed Aggregation for Resource-Constrained Microcontrollers

School of Computer Science and Technology, Soochow University, Suzhou 215006, China
* Author to whom correspondence should be addressed.
Electronics 2026, 15(10), 2084; https://doi.org/10.3390/electronics15102084
Submission received: 15 April 2026 / Revised: 7 May 2026 / Accepted: 11 May 2026 / Published: 13 May 2026

Abstract

Federated edge learning on microcontrollers (MCUs) enables privacy-preserving adaptation, but on-device training faces a hardware tradeoff: fitting backpropagation into limited static random-access memory (SRAM) often relies on on-chip flash as auxiliary storage, while repeated parameter persistence rapidly consumes finite program/erase (P/E) endurance. This paper proposes TinySLFL, a flash-endurance-aware federated learning framework for resource-constrained MCUs. On the client, layer-wise training bounds peak SRAM usage to a single layer, and delayed aggregation keeps intermediate updates in SRAM so that each communication round incurs only one flash persistence. On the server, dynamic aggregation combines loss-aware freezing with proxy-accuracy-guided filtering to improve robustness under non-independently and identically distributed (Non-IID) data while suppressing unnecessary rounds. Experiments on CIFAR-10 and SVHN under severe Dirichlet label skew, and on the naturally heterogeneous FEMNIST benchmark, show in server-side simulation that TinySLFL reduces the cumulative protocol-level erase-block operations (EOs) required to reach a common target accuracy by 97.8–98.6% relative to sequential layer training (SLT), and improves mean Top-1 accuracy by up to 5.24 percentage points on the same ResNet-8 backbone in a five-seed evaluation. Power, latency, SRAM usage, and deployment feasibility are reported from measurements on actual ESP32-S3 hardware. These results demonstrate that durable federated learning is feasible on extreme-edge MCUs.
Keywords: federated learning; on-device training; microcontroller; flash endurance
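The client-side mechanism described in the abstract (layer-wise training with delayed aggregation, one flash persistence per communication round) can be sketched as follows. This is a minimal simulation under stated assumptions: the layer shapes, the `FlashSim` erase-operation counter, and the placeholder gradients are all illustrative stand-ins, not the paper's implementation.

```python
import random

class FlashSim:
    """Illustrative counter for protocol-level erase-block operations (EOs)."""
    def __init__(self):
        self.erase_ops = 0
        self.stored = {}

    def persist_all(self, layers):
        # One erase-block operation per persistence event.
        self.erase_ops += 1
        self.stored = {name: list(params) for name, params in layers.items()}

def train_round(layers, local_steps, flash, lr=0.01):
    """Layer-wise training with delayed aggregation.

    Each layer's update accumulator lives only while that layer is being
    trained, bounding peak RAM ("SRAM") to one layer's buffers. Updates stay
    in RAM across all local steps; flash is written once per round.
    """
    for name, params in layers.items():
        update = [0.0] * len(params)  # RAM-resident accumulator for this layer
        for _ in range(local_steps):
            # Placeholder gradient (assumption: stands in for backprop).
            grad = [random.uniform(-1.0, 1.0) for _ in params]
            update = [u - lr * g for u, g in zip(update, grad)]
        layers[name] = [p + u for p, u in zip(params, update)]
    flash.persist_all(layers)  # single flash persistence per round
    return layers

random.seed(0)
layers = {"conv1": [0.0] * 8, "fc": [0.0] * 4}
flash = FlashSim()
for _ in range(5):  # five communication rounds
    train_round(layers, local_steps=10, flash=flash)
print(flash.erase_ops)  # 5: one persistence per round, not per local step
```

Persisting after every local step would instead cost rounds × local_steps erase operations (50 here), which illustrates the order-of-magnitude EO reduction the abstract reports.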

Share and Cite

MDPI and ACS Style

Tao, Y.; Jia, J.; Deng, T. TinySLFL: A Flash-Endurance-Aware Federated Edge Learning Framework with Layer-Wise Delayed Aggregation for Resource-Constrained Microcontrollers. Electronics 2026, 15, 2084. https://doi.org/10.3390/electronics15102084


