Machine Learning in Electronic and Biomedical Engineering, 4th Edition

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 June 2026 | Viewed by 3623

Special Issue Editors


Guest Editor
Department of Information Engineering-DII, Università Politecnica delle Marche, Via Brecce Bianche 12, I-60131 Ancona, Italy
Interests: embedded systems; machine learning; neural networks; pattern recognition; tensor learning; system identification; signal processing; image processing; speech recognition/synthesis; speaker identification; bio-signal analysis and classification

Guest Editor
Department of Information Engineering-DII, Università Politecnica delle Marche, Via Brecce Bianche 12, I-60131 Ancona, Italy
Interests: microelectronics; analog and mixed-signal integrated circuits; electronic device modeling; statistical IC design; machine learning signal processing; pattern recognition; bio-signal analysis and classification; system identification; neural networks; stochastic processes

Special Issue Information

Dear Colleagues,

In recent years, machine learning techniques have proven extremely useful in a wide variety of applications and are rapidly gaining interest in both electronics and biomedical engineering. Rapid progress in artificial intelligence (AI) is opening new perspectives, not only for data analysis but also for the design, optimization, and deployment of intelligent systems across different domains.

This Special Issue seeks to collect contributions from researchers involved in developing and applying machine learning techniques in the following fields:

  • Embedded systems for artificial intelligence (AI) applications, with a focus on implementing algorithms directly in devices to reduce latency, communication costs, and privacy concerns;
  • Edge computing and TinyML, where the aim is to process AI algorithms locally on the device by focusing on compression techniques, dimensionality reduction, and parallel computation;
  • Wearable sensors for collecting biological data and physiological data;
  • Human activity recognition, diagnosis, and prognosis supported by advanced data-driven analysis of sensor and biomedical signals;
  • Intelligent decision-making systems and computer-aided diagnosis (CAD) tools for early detection, classification, and prediction of diseases;
  • Biomedical imaging and neuroimaging techniques (e.g., MRI, ultrasound imaging, CT) for disease diagnosis, progression monitoring, and treatment planning.

In addition to these consolidated areas, this Special Issue welcomes contributions in emerging AI paradigms that are expected to strongly impact both electronic and biomedical engineering:

  • Generative AI for electronic design automation, circuit and system optimization, signal synthesis, and device modeling, as well as for synthetic biomedical data generation, data augmentation, drug discovery, and medical image reconstruction;
  • Explainable AI and interpretable models to enhance reliability, accountability, and trust in safety-critical electronic systems such as autonomous devices, IoT infrastructures, and smart sensors, as well as in clinical decision-making and biomedical diagnostics;
  • Multimodal AI for integrating heterogeneous data from sensor networks, electronic devices, and industrial monitoring systems, as well as from biosignals, medical imaging, clinical records, and genomics for holistic patient assessment;
  • Federated and privacy-preserving learning for distributed learning across IoT devices, embedded systems, and smart electronics without sharing raw data, as well as for secure collaborative AI on sensitive biomedical information;
  • AI-driven optimization and personalization of electronic systems, such as adaptive circuits, reconfigurable architectures, and intelligent hardware design, as well as AI-driven personalized and precision medicine where predictive models are tailored to individual patients;
  • Digital twins of electronic circuits, devices, and industrial processes for design support, fault prediction, and lifecycle management, as well as digital twins of biological systems and healthcare processes for real-time monitoring, prediction, and optimization.

The aim of this Special Issue is to publish original research articles that cover recent advances in the theory and application of machine learning for electronic and biomedical engineering.

The topics of interest include, but are not limited to, the following:

  • Machine learning applications for embedded and edge systems;
  • Edge artificial intelligence (EdgeAI), tiny machine learning (TinyML), and low-power AI;
  • Machine learning for edge computation;
  • Deep learning model compression, acceleration, and hardware-aware optimization;
  • Image classification, object detection, and semantic segmentation;
  • Machine learning for autonomous driving;
  • Machine learning for smart agriculture;
  • Machine learning for smart industry;
  • Deep learning and generative models for biomedical image and signal processing;
  • AI methods for computer-aided diagnosis and prognosis;
  • Machine-learning-based healthcare applications, such as sensor-based behavior analysis, human activity recognition, disease prediction, biomedical signal processing, and data monitoring;
  • Multimodal and federated learning;
  • Explainable AI and trustworthy machine learning;
  • Digital twins.

Dr. Laura Falaschetti
Prof. Claudio Turchetti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • neural networks
  • edge computing
  • edge artificial intelligence
  • tiny machine learning
  • generative AI
  • explainable AI
  • sensors for IoT
  • wearable devices
  • vision sensors
  • autonomous driving
  • smart agriculture
  • smart industry
  • medical image analysis
  • computer-aided diagnosis
  • human activity recognition
  • biosignals
  • digital twins
  • multimodal learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (3 papers)


Research


22 pages, 1612 KB  
Article
Lightweight 1D-CNN-Based Battery State-of-Charge Estimation and Hardware Development
by Seungbum Kang, Yoonjae Lee, Gahyeon Jang and Seongsoo Lee
Electronics 2026, 15(3), 704; https://doi.org/10.3390/electronics15030704 - 6 Feb 2026
Viewed by 442
Abstract
This paper presents the FPGA implementation and verification of a lightweight one-dimensional convolutional neural network (1D-CNN) pipeline for real-time battery state-of-charge (SoC) estimation in automotive battery management systems. The proposed model employs separable 1D convolution and global average pooling, and applies aggressive structured pruning to reduce the number of parameters from 3121 to 358, representing an 88.5% reduction, without significant accuracy loss. Using quantization-aware training (QAT), the network is trained and executed in INT8, which reduces weight storage to one-quarter of the 32-bit baseline while maintaining high estimation accuracy with a Mean Absolute Error (MAE) of 0.0172. The hardware adopts a time-multiplexed single MAC architecture with FSM control, occupying 98,410 gates under a 28 nm process. Evaluations on an FPGA testbed with representative drive-cycle inputs show that the proposed INT8 pipeline achieves performance comparable to the floating-point reference with negligible precision drop, demonstrating its suitability for in-vehicle BMS deployment.
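For readers unfamiliar with the building blocks named in the abstract: a separable 1D convolution factors a standard convolution into a per-channel (depthwise) filter followed by a 1×1 (pointwise) channel mix, and global average pooling collapses the time axis before the regressor head. The numpy sketch below is purely illustrative; the shapes, kernel width, and ReLU placement are assumptions, not details taken from the paper.

```python
import numpy as np

def depthwise_conv1d(x, w):
    # x: (channels, length), w: (channels, k) -- one filter per channel
    c, n = x.shape
    k = w.shape[1]
    out = np.zeros((c, n - k + 1))
    for i in range(n - k + 1):
        out[:, i] = np.sum(x[:, i:i + k] * w, axis=1)
    return out

def pointwise_conv1d(x, w):
    # w: (out_channels, in_channels) -- a 1x1 convolution that mixes channels
    return w @ x

def separable_block(x, w_dw, w_pw):
    # depthwise filter, then pointwise mix, then ReLU
    return np.maximum(pointwise_conv1d(depthwise_conv1d(x, w_dw), w_pw), 0.0)

def global_avg_pool(x):
    # collapse the time axis into one feature per channel
    return x.mean(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 32))      # 4 input channels, 32 time steps
w_dw = rng.standard_normal((4, 3))    # depthwise kernels of width 3
w_pw = rng.standard_normal((8, 4))    # pointwise mixing to 8 channels
features = global_avg_pool(separable_block(x, w_dw, w_pw))  # shape (8,)
```

The separable factorization is what makes the parameter budget small: a full convolution here would need 4 × 8 × 3 = 96 weights per layer, while the depthwise (4 × 3) plus pointwise (8 × 4) pair needs only 44.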

12 pages, 633 KB  
Article
Optimized FreeMark Post-Training White-Box Watermarking of Tiny Neural Networks
by Riccardo Adorante, Tullio Facchinetti and Danilo Pietro Pau
Electronics 2025, 14(21), 4237; https://doi.org/10.3390/electronics14214237 - 29 Oct 2025
Cited by 1 | Viewed by 540
Abstract
Neural networks are powerful, high-accuracy systems whose trained parameters represent valuable intellectual property. Building models that reach top-level performance is a complex task and requires substantial investments of time and money, so protecting these assets is increasingly important. Extensive research has been carried out on neural network watermarking, exploring the possibility of inserting a recognizable marker into a host model, either in the form of a concealed bit-string or as a characteristic output, making it possible to confirm network ownership even in the presence of malicious attempts to erase the embedded marker from the model. This study examines the applicability of Opt-FreeMark, a non-invasive post-training white-box watermarking technique obtained by modifying and optimizing an existing state-of-the-art technique for tiny neural networks. Here, "tiny" refers to models intended for ultra-low-power deployments, such as those running on edge devices like sensors and microcontrollers. Watermark robustness is also demonstrated by simulating common model-modification attacks that try to eliminate the watermark while preserving performance; the results presented in the paper indicate that the watermarking scheme effectively protects the networks against these manipulations.
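Opt-FreeMark itself is specified in the paper; as a rough illustration of the general non-invasive white-box idea it builds on — deriving a mark from the frozen weights with a secret key instead of retraining them — a toy sketch might look like the following. The projection scheme, key handling, and match threshold here are invented for illustration and are not the authors' method.

```python
import numpy as np

def derive_watermark(weights, key, n_bits=32):
    # Non-invasive white-box idea: project the flattened weights through a
    # secret key-seeded random matrix and keep the sign pattern as the mark.
    # The weights themselves are never modified.
    rng = np.random.default_rng(key)
    proj = rng.standard_normal((n_bits, weights.size))
    return (proj @ weights.ravel() > 0).astype(np.uint8)

def verify(weights, key, mark, threshold=0.9):
    # Ownership check: recompute the bits with the secret key and require
    # a high match rate, tolerating small weight perturbations.
    bits = derive_watermark(weights, key, n_bits=mark.size)
    return (bits == mark).mean() >= threshold

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 16))      # stand-in for a trained layer
mark = derive_watermark(w, key=42)     # registered by the model owner

# A mild fine-tuning / pruning attack perturbs the weights slightly; the
# sign pattern of the projections is largely preserved, so the mark survives.
attacked = w + 0.01 * rng.standard_normal(w.shape)
```

Because projections of a perturbed weight vector rarely change sign unless the perturbation is large, small model-modification attacks leave most bits intact, which is the robustness property the abstract describes.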

Review


36 pages, 2306 KB  
Review
The Global Importance of Machine Learning-Based Wearables and Digital Twins for Rehabilitation: A Review of Data Collection, Security, Edge Intelligence, Federated Learning, and Generative AI
by Maciej Piechowiak, Aleksander Goch, Ewelina Panas, Jolanta Masiak, Dariusz Mikołajewski, Izabela Rojek and Emilia Mikołajewska
Electronics 2025, 14(23), 4699; https://doi.org/10.3390/electronics14234699 - 28 Nov 2025
Cited by 2 | Viewed by 2280
Abstract
The convergence of wearable technologies and digital twin (DT) systems is transforming rehabilitation engineering, enabling continuous monitoring, personalized therapeutic interventions, and predictive modeling of patient recovery pathways. This review examines the growing role of machine learning (ML) in the development and integration of DT frameworks in rehabilitation, with a focus on wearable sensor data, security and privacy, edge computing architectures, federated learning paradigms, and generative artificial intelligence (GenAI) applications. We first analyze data collection processes, emphasizing multimodal sensing, signal processing, and real-time synchronization between physical and virtual patient models. We then discuss key challenges related to data security, encryption, and privacy protection, especially in distributed clinical environments. The review then assesses the role of edge computing in reducing latency, improving energy efficiency, and enabling real-time, on-device intelligent feedback in wearables. Federated learning approaches are discussed as promising strategies for jointly training ML models without compromising sensitive medical data. Finally, we present new GenAI techniques for generating synthetic data, personalizing digital twins, and simulating rehabilitation scenarios. By mapping current progress and identifying research gaps, this article provides a unified view that connects electronic and biomedical engineering with intelligent, secure, and adaptive DT ecosystems for next-generation rehabilitation solutions. Wearable devices with ML and DTs for rehabilitation are developing rapidly, but their current effectiveness still depends on consistent, high-quality data streams and robust clinical validation. The most promising convergence combines edge intelligence with federated learning to enable real-time personalization while preserving patient privacy. GenAI further enhances these systems by simulating patient-specific scenarios, accelerating model adaptation, and supporting treatment planning. Key challenges remain in standardizing data formats, ensuring comprehensive security, and seamlessly integrating these technologies into clinical workflows.
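The federated workflow the review highlights — each wearable trains locally and only model parameters travel to an aggregator — can be sketched in a few lines. This is an illustrative FedAvg-style toy on a linear model, not code from any surveyed system; the client counts, sizes, and learning rate are invented.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # One client's gradient steps on a shared linear model.
    # The raw data (X, y) never leave the device; only w is returned.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(w, clients, rounds=10):
    # The server averages client weight vectors, weighted by dataset size.
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        w = np.average(updates, axis=0, weights=sizes)
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
clients = []
for _ in range(3):                     # e.g., three wearables, three patients
    X = rng.standard_normal((40, 2))
    y = X @ true_w + 0.05 * rng.standard_normal(40)
    clients.append((X, y))
w = fedavg(np.zeros(2), clients)       # converges near true_w
```

The privacy property comes from the communication pattern, not the math: the aggregator only ever sees weight vectors, which is why the review pairs this with edge intelligence for on-device personalization.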
