Machine Learning in Electronic and Biomedical Engineering, 3rd Edition

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 31 August 2025

Special Issue Editors


Dr. Laura Falaschetti
Guest Editor
Department of Information Engineering—DII, Università Politecnica delle Marche, Via Brecce Bianche 12, I-60131 Ancona, Italy
Interests: embedded systems; machine learning; neural networks; pattern recognition; tensor learning; system identification; signal processing; image processing; speech recognition/synthesis; speaker identification; bio-signal analysis and classification

Prof. Dr. Claudio Turchetti
Guest Editor
Department of Information Engineering—DII, Università Politecnica delle Marche, Via Brecce Bianche 12, I-60131 Ancona, Italy
Interests: microelectronics; analog and mixed-signal integrated circuits; electronic device modeling; statistical IC design; machine learning signal processing; pattern recognition; bio-signal analysis and classification; system identification; neural networks; stochastic processes

Special Issue Information

Dear Colleagues,

In recent years, machine learning techniques have proven extremely useful in a wide variety of applications, and they are now garnering rapidly increasing interest in both electronic and biomedical engineering.

The Special Issue seeks to collect contributions from researchers involved in developing and using machine learning techniques applied to the following:

  • Embedded systems for artificial intelligence (AI) applications, in which the interest is focused on implementing these algorithms directly in the devices, thus reducing latency, communication costs, and privacy concerns;
  • Edge computing, where the aim is to process AI algorithms locally on the device, i.e., where the data are generated, by focusing on compression techniques, dimensionality reduction, and parallel computation;
  • Wearable sensors for collecting biological data;
  • Human activity detection, as well as the diagnosis and prognosis of patients, based on the investigation of data collected from sensors;
  • Intelligent decision systems and automatic computer-aided diagnosis systems for the early detection and classification of diseases;
  • Neuroimaging techniques, such as magnetic resonance, ultrasound imaging, and computed tomography, to aid in the diagnosis and prediction of diseases.

The aim of this Special Issue is to publish original research articles that cover recent advances in the theory and application of machine learning for electronic and biomedical engineering.

The topics of interest include, but are not limited to, the following:

  • Machine learning applications for embedded systems;
  • Machine learning for edge computing;
  • Edge artificial intelligence and tiny machine learning;
  • Deep learning model compression and acceleration;
  • Image classification, detection, and semantic segmentation;
  • Machine learning for autonomous driving;
  • Machine learning for agriculture;
  • Machine learning for industry;
  • Deep neural networks for biomedical image processing;
  • Machine learning methods for computer-aided diagnosis;
  • Machine learning-based healthcare applications, such as sensor-based behavior analysis, human activity recognition, disease prediction, biomedical signal processing, and data monitoring.

Dr. Laura Falaschetti
Prof. Dr. Claudio Turchetti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • neural networks
  • edge computing
  • edge artificial intelligence
  • tiny machine learning
  • sensors for IoT
  • vision sensors
  • autonomous driving
  • medical image classification
  • computer-aided diagnosis
  • human activity recognition
  • biosignals

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)

Research

13 pages, 558 KiB  
Article
ADNA: Automating Application-Specific Integrated Circuit Development of Neural Network Accelerators
by David M. Lane and Ali Sahafi
Electronics 2025, 14(7), 1432; https://doi.org/10.3390/electronics14071432 - 2 Apr 2025
Abstract
Recently, multiple new technologies have emerged for automating the development of neural network (NN) accelerators for both field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). This paper explores methodologies for translating NN algorithms into chip layouts, with a focus on end-to-end automation, cost-effectiveness, and open-source software. We present a robust framework for developing NN-to-silicon solutions and demonstrate a seamless plug-and-play automation flow using TensorFlow, Vivado HLS, HLS4ML, and Openlane2. SkyWater Technologies’ 130 nm PDK (Sky130) is employed to successfully generate layouts for two small NN examples under 1000 parameters, incorporating dense, activation, and 2D convolution layers. The results affirm that current open-source tools effectively automate low-complexity neural network architectures and deliver faster performance through FPGA structures. However, this improved performance comes at the cost of increased die area compared to bare-metal designs. While this showcases significant progress in accessible NN automation, achieving manufacturing-ready layouts for more complex NN architectures remains a challenge due to current tool limitations and heightened computational demands, which points to exciting opportunities for future advancements.
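
For readers unfamiliar with this tool chain, the first hop of such a flow—exporting a small Keras model to an HLS project with hls4ml—can be sketched as follows. This is a minimal illustration, not the authors' configuration: the layer sizes, output directory, and backend choice are assumptions, and the subsequent Vivado HLS and Openlane2/Sky130 steps are not shown.

```python
# Minimal sketch: Keras model -> HLS project via hls4ml (first stage of an
# NN-to-silicon flow). Layer sizes, output_dir, and backend are illustrative
# assumptions; the paper's flow continues into Vivado HLS and Openlane2/Sky130.
import tensorflow as tf
import hls4ml

# A sub-1000-parameter dense network, in the spirit of the paper's examples.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Derive a baseline hls4ml configuration from the model, then convert.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="adna_hls_prj",  # hypothetical output directory
    backend="Vivado",
)
hls_model.compile()  # builds a C-simulation model for quick functional checks
```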

18 pages, 3376 KiB  
Article
Heterogeneous Edge Computing for Molecular Property Prediction with Graph Convolutional Networks
by Mahdieh Grailoo and Jose Nunez-Yanez
Electronics 2025, 14(1), 101; https://doi.org/10.3390/electronics14010101 - 30 Dec 2024
Cited by 1
Abstract
Graph-based neural networks have proven to be useful in molecular property prediction, a critical component of computer-aided drug discovery. In this application, in response to the growing demand for improved computational efficiency and localized edge processing, this paper introduces a novel approach that leverages specialized accelerators on a heterogeneous edge computing platform. Our focus is on graph convolutional networks, a leading graph-based neural network variant that integrates graph convolution layers with multi-layer perceptrons. Molecular graphs are typically characterized by a low number of nodes, leading to low-dimensional dense matrix multiplications within multi-layer perceptrons—conditions that are particularly well-suited for Edge TPUs. These TPUs feature a systolic array of multiply–accumulate units optimized for dense matrix operations. Furthermore, the inherent sparsity in molecular graph adjacency matrices offers additional opportunities for computational optimization. To capitalize on this, we developed an FPGA GFADES accelerator, using high-level synthesis, specifically tailored to efficiently manage the sparsity in both the graph structure and node features. Our hardware/software co-designed GCN+MLP architecture delivers performance improvements, achieving up to 58× increased speed compared to conventional software implementations. This architecture is implemented using the Pynq framework and TensorFlow Lite Runtime, running on a multi-core ARM CPU within an AMD/Xilinx Zynq Ultrascale+ device, in combination with the Edge TPU and programmable logic.
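
The kernel being accelerated here is the standard graph convolution, ReLU(D^(-1/2)(A+I)D^(-1/2) X W); a minimal NumPy sketch under assumed, small-molecule-sized dimensions shows why the workload reduces to low-dimensional dense matrix products:

```python
# Sketch of one graph convolution layer, the dense low-dimensional matrix
# product the paper maps onto the Edge TPU / GFADES accelerator. All sizes
# are illustrative assumptions for a small molecular graph.
import numpy as np

n_nodes, in_feats, out_feats = 12, 32, 64
A = (np.random.rand(n_nodes, n_nodes) < 0.2).astype(np.float32)  # sparse adjacency
X = np.random.rand(n_nodes, in_feats).astype(np.float32)         # node features
W = np.random.rand(in_feats, out_feats).astype(np.float32)       # layer weights

# Symmetric normalization with self-loops (Kipf–Welling-style GCN).
A_hat = A + np.eye(n_nodes, dtype=np.float32)
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)     # ReLU(GCN layer)
```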

20 pages, 2076 KiB  
Article
The Use of Machine Learning Models with Optuna in Disease Prediction
by Li-Hsing Lai, Ying-Lei Lin, Yu-Hui Liu, Jung-Pin Lai, Wen-Chieh Yang, Hung-Pin Hou and Ping-Feng Pai
Electronics 2024, 13(23), 4775; https://doi.org/10.3390/electronics13234775 - 3 Dec 2024
Abstract
Effectively and equitably allocating medical resources, particularly for minority groups, is a critical issue that warrants further investigation in rural hospitals. Machine learning techniques have gained significant traction and demonstrated strong performance across various fields in recent years. The determination of hyperparameters significantly influences the performance of machine learning models. Thus, this study employs Optuna, a framework specifically designed for optimizing the hyperparameters of machine learning models. Building on prior research, machine learning models with Optuna (MLOPTA) are introduced to forecast diseases of indigenous patients. The numerical results reveal that the designed MLOPTA system can accurately capture the occurrences of specified diseases. Therefore, the MLOPTA system offers a promising approach for disease forecasting. The disease forecasting results can serve as crucial references for allocating hospital resources.
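
To make the MLOPTA idea concrete, the following is a minimal Optuna tuning loop around a scikit-learn classifier. The model, search space, dataset, and scoring metric are placeholders, not the study's actual configuration:

```python
# Minimal sketch of Optuna-driven hyperparameter search, in the spirit of
# the MLOPTA system. Model, search space, and data are illustrative assumptions.
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # stand-in for the hospital data

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 400),
        "max_depth": trial.suggest_int("max_depth", 2, 16),
        "min_samples_leaf": trial.suggest_int("min_samples_leaf", 1, 10),
    }
    clf = RandomForestClassifier(**params, random_state=0)
    return cross_val_score(clf, X, y, cv=5, scoring="f1").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```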

15 pages, 2366 KiB  
Article
Gas Leakage Detection Using Tiny Machine Learning
by Majda El Barkani, Nabil Benamar, Hanae Talei and Miloud Bagaa
Electronics 2024, 13(23), 4768; https://doi.org/10.3390/electronics13234768 - 2 Dec 2024
Cited by 2
Abstract
Gas leakage detection is a critical concern in both industrial and residential settings, where real-time systems are essential for quickly identifying potential hazards and preventing dangerous incidents. Traditional detection systems often rely on centralized data processing, which can lead to delays and scalability [...] Read more.
Gas leakage detection is a critical concern in both industrial and residential settings, where real-time systems are essential for quickly identifying potential hazards and preventing dangerous incidents. Traditional detection systems often rely on centralized data processing, which can lead to delays and scalability issues. To overcome these limitations, in this study, we present a solution based on tiny machine learning (TinyML) to process data directly on devices. TinyML has the potential to execute machine learning algorithms locally, in real time, and using tiny devices, such as microcontrollers, ensuring faster and more efficient responses to potential dangers. Our approach combines an MLX90640 thermal camera with two optimized convolutional neural networks (CNNs), MobileNetV1 and EfficientNet-B0, deployed on the Arduino Nano 33 BLE Sense. The results show that our system not only provides real-time analytics but does so with high accuracy—88.92% for MobileNetV1 and 91.73% for EfficientNet-B0—while achieving inference times of 1414 milliseconds and using just 124.8 KB of memory. Compared to existing solutions, our edge-based system overcomes common challenges related to latency and scalability, making it a reliable, fast, and efficient option. This work demonstrates the potential for low-cost, scalable gas detection systems that can be deployed widely to enhance safety in various environments. By integrating cutting-edge machine learning models with affordable IoT devices, we aim to make safety more accessible, regardless of financial limitations, and pave the way for further innovation in environmental monitoring solutions. Full article
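
Deploying a CNN of this kind on an Arduino-class microcontroller typically involves post-training full-integer quantization with the TFLite converter; a hedged sketch follows, where the model and calibration data are placeholders rather than the paper's trained networks:

```python
# Sketch: post-training int8 quantization for a microcontroller target,
# the usual TinyML deployment step. The model and calibration data are
# placeholders; the paper deploys MobileNetV1/EfficientNet-B0 variants.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNet(
    input_shape=(96, 96, 3), alpha=0.25, weights=None, classes=2)

def representative_data():
    for _ in range(100):                      # calibration samples (dummy here)
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

open("gas_cnn_int8.tflite", "wb").write(converter.convert())
```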

13 pages, 3285 KiB  
Article
Utilization of Machine Learning and Explainable Artificial Intelligence (XAI) for Fault Prediction and Diagnosis in Wafer Transfer Robot
by Jeong Eun Jeon, Sang Jeen Hong and Seung-Soo Han
Electronics 2024, 13(22), 4471; https://doi.org/10.3390/electronics13224471 - 14 Nov 2024
Abstract
Faults in the wafer transfer robots (WTRs) used in semiconductor manufacturing processes can significantly affect productivity. This study defines high-risk components such as bearing motors, ball screws, timing belts, robot hands, and end effectors, and generates fault data for each component based on Fluke’s law. A stacking classifier was applied for fault prediction and severity classification, and logistic regression was used to identify fault components. Additionally, to analyze the frequency bands affecting each failed component and assess the severity of faults involving two mixed components, a hybrid explainable artificial intelligence (XAI) model combining Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) was employed to inform the user about the component causing the fault. This approach demonstrated a high prediction accuracy of 95%, and its integration into real-time monitoring systems is expected to reduce maintenance costs, decrease equipment downtime, and ultimately improve productivity.
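
A minimal sketch of the stacking-plus-XAI pattern described here, using scikit-learn, SHAP, and LIME on synthetic stand-in data (the feature set, labels, and model choices are assumptions, not the study's WTR dataset):

```python
# Sketch: stacking classifier plus SHAP/LIME explanations, mirroring the
# hybrid-XAI idea. Features, labels, and models are illustrative assumptions.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((500, 8))                       # e.g., per-band vibration energies
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)      # dummy fault label

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
).fit(X, y)

# SHAP: model-agnostic KernelExplainer on the stacked predictor.
explainer = shap.KernelExplainer(stack.predict_proba, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:5])

# LIME: local explanation of one prediction.
lime_exp = LimeTabularExplainer(X, mode="classification")
print(lime_exp.explain_instance(X[0], stack.predict_proba).as_list())
```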

22 pages, 2149 KiB  
Article
Robust Biometric Verification Using Phonocardiogram Fingerprinting and a Multilayer-Perceptron-Based Classifier
by Roberta Avanzato, Francesco Beritelli and Salvatore Serrano
Electronics 2024, 13(22), 4377; https://doi.org/10.3390/electronics13224377 - 8 Nov 2024
Abstract
Recently, a new set of biometric traits, called medical biometrics, has been explored for human identity verification. This study introduces a novel framework for recognizing human identity through heart sound signals, commonly referred to as phonocardiograms (PCGs). The framework is built on extracting and suitably processing Mel-Frequency Cepstral Coefficients (MFCCs) from PCGs and on a classifier based on a Multilayer Perceptron (MLP) network. A large dataset containing heart sounds acquired from 206 people has been used to perform the experiments. The classifier was tuned to obtain the same false positive and false negative misclassification rates (equal error rate: EER = FPR = FNR) on chunks of audio lasting 2 s. This target was reached by splitting the dataset into non-overlapping training (70%) and testing (30%) subsets. A recurrence filter has been applied to further improve the performance of the system in the presence of noisy recordings. After the application of the filter on chunks of audio signal lasting from 2 to 22 s, the performance of the system has been evaluated in terms of recall, specificity, precision, negative predictive value, accuracy, and F1-score. All the performance metrics are higher than 97.86% with the recurrence filter applied on a window lasting 22 s and in different noise conditions.
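
The feature/classifier pipeline—MFCCs from a 2 s PCG chunk feeding an MLP—can be sketched as below; the file name, sampling rate, and network size are assumptions, and the paper's recurrence filter is not reproduced:

```python
# Sketch of the MFCC -> MLP pipeline for PCG-based identity verification.
# File name, sampling rate, and network size are hypothetical.
import librosa
import numpy as np
from sklearn.neural_network import MLPClassifier

y, sr = librosa.load("pcg_chunk.wav", sr=2000, duration=2.0)   # hypothetical file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)             # shape: (13, frames)
feature = mfcc.mean(axis=1)                                    # one vector per chunk

# Given a training matrix X_train (one MFCC vector per chunk) and subject
# labels ids, verification could then look like:
#   mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_train, ids)
#   accept = mlp.predict_proba([feature])[0, claimed_id] > threshold
```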

20 pages, 3901 KiB  
Article
Multi-Modal Fusion Network with Multi-Head Self-Attention for Injection Training Evaluation in Medical Education
by Zhe Li, Aya Kanazuka, Atsushi Hojo, Yukihiro Nomura and Toshiya Nakaguchi
Electronics 2024, 13(19), 3882; https://doi.org/10.3390/electronics13193882 - 30 Sep 2024
Cited by 1
Abstract
The COVID-19 pandemic has significantly disrupted traditional medical training, particularly in critical areas such as the injection process, which require expert supervision. To address the challenges posed by reduced face-to-face interactions, this study introduces a multi-modal fusion network designed to evaluate the timing and motion aspects of the injection training process in medical education. The proposed framework integrates 3D reconstructed data and 2D images of hand movements during the injection process. The 3D data are preprocessed and encoded by a Long Short-Term Memory (LSTM) network to extract temporal features, while a Convolutional Neural Network (CNN) processes the 2D images to capture detailed image features. These encoded features are then fused and refined through a proposed multi-head self-attention module, which enhances the model’s ability to capture and weigh important temporal and image dynamics in the injection process. The final classification of the injection process is conducted by a classifier module. The model’s performance was rigorously evaluated using video data from 255 subjects, with assessments made by professional physicians according to the Objective Structured Assessment of Technical Skill—Global Rating Score (OSATS-GRS) [B] criteria for time and motion evaluation. The experimental results demonstrate that the proposed data fusion model achieves an accuracy of 0.7238, an F1-score of 0.7060, a precision of 0.7339, a recall of 0.7238, and an AUC of 0.8343. These findings highlight the model’s potential as an effective tool for providing objective feedback in medical injection training, offering a scalable solution for the post-pandemic evolution of medical education.
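
A minimal Keras sketch of the described topology—an LSTM branch for 3D sequences, a CNN branch for 2D images, and multi-head self-attention over the fused features—with all shapes and layer sizes assumed rather than taken from the paper:

```python
# Sketch of an LSTM + CNN fusion network refined by multi-head self-attention.
# Input shapes and layer widths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

seq_in = tf.keras.Input(shape=(100, 63))           # assumed 3D hand-pose sequence
img_in = tf.keras.Input(shape=(64, 64, 3))         # assumed 2D image input

t = layers.LSTM(64, return_sequences=True)(seq_in)  # temporal features
v = layers.Conv2D(32, 3, activation="relu")(img_in)
v = layers.GlobalAveragePooling2D()(v)
v = layers.RepeatVector(100)(v)                     # broadcast image features over time

fused = layers.Concatenate()([t, v])
fused = layers.MultiHeadAttention(num_heads=4, key_dim=24)(fused, fused)
out = layers.Dense(1, activation="sigmoid")(layers.GlobalAveragePooling1D()(fused))

model = tf.keras.Model([seq_in, img_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```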

39 pages, 2364 KiB  
Article
The Iceberg Model for Integrated Aircraft Health Monitoring Based on AI, Blockchain, and Data Analytics
by Igor Kabashkin
Electronics 2024, 13(19), 3822; https://doi.org/10.3390/electronics13193822 - 27 Sep 2024
Cited by 3
Abstract
The increasing complexity of modern aircraft systems necessitates advanced monitoring solutions to ensure operational safety and efficiency. Traditional aircraft health monitoring systems (AHMS) often rely on reactive maintenance strategies, detecting only visible faults while leaving underlying issues unaddressed. This gap can lead to critical failures and unplanned downtime, resulting in significant operational costs. To address this issue, this paper proposes the integration of artificial intelligence (AI) and blockchain technologies within an enhanced AHMS, utilizing the iceberg model as a conceptual framework to illustrate both visible and hidden defects. The model highlights the importance of detecting and addressing issues at the earliest possible stages, ensuring that hidden defects are identified and mitigated before they evolve into significant failures. The rationale behind this approach lies in the need for a predictive maintenance system capable of identifying and mitigating hidden risks before they escalate. Key tasks completed in this study include: a comparative analysis of the proposed system with existing monitoring solutions, the selection of AI algorithms for fault prediction, and the development of a blockchain-based infrastructure for secure, transparent data sharing. The evolution of AHMS is discussed, emphasizing the shift from traditional monitoring to advanced, predictive, and prescriptive maintenance approaches. This integrated approach demonstrates the potential to significantly improve fault detection, optimize maintenance schedules, and enhance data security across the aviation industry.

11 pages, 2565 KiB  
Article
Improved Plasma Etch Endpoint Detection Using Attention-Based Long Short-Term Memory Machine Learning
by Ye Jin Kim, Jung Ho Song, Ki Hwan Cho, Jong Hyeon Shin, Jong Sik Kim, Jung Sik Yoon and Sang Jeen Hong
Electronics 2024, 13(17), 3577; https://doi.org/10.3390/electronics13173577 - 9 Sep 2024
Abstract
Existing etch endpoint detection (EPD) methods, primarily based on single wavelengths, have limitations, such as low signal-to-noise ratios and the inability to consider the long-term dependencies of time series data. To address these issues, this study proposes an EPD method that exploits the context of time series data using long short-term memory (LSTM), a kind of recurrent neural network (RNN). The proposed method is based on time series data collected through optical emission spectroscopy (OES) during the SiO2 etching process. After training the LSTM model, the proposed method demonstrated the ability to detect the etch endpoint more accurately than existing methods by considering the entire time series. The LSTM model achieved an accuracy of 97.1% in a given condition, which shows that considering the flow and context of time series data can significantly reduce the false detection rate. To improve the performance of the proposed LSTM model, we created an attention-based LSTM model and confirmed an accuracy of 98.2%, an improvement over the baseline LSTM model.
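
An attention-augmented LSTM classifier over OES time series can be sketched as follows; the input shape (time steps × wavelengths), layer sizes, and binary endpoint/no-endpoint framing are assumptions, not the study's exact model:

```python
# Sketch of an attention-based LSTM endpoint detector over OES spectra.
# Input shape and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

inp = tf.keras.Input(shape=(200, 128))             # assumed OES time series
h = layers.LSTM(64, return_sequences=True)(inp)

# Simple additive attention over time: score each step, then pool.
scores = layers.Dense(1, activation="tanh")(h)     # (batch, 200, 1)
weights = layers.Softmax(axis=1)(scores)           # attention weights over time
context = layers.Dot(axes=1)([weights, h])         # (batch, 1, 64)
context = layers.Flatten()(context)

out = layers.Dense(1, activation="sigmoid")(context)   # endpoint vs. not yet
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```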

Review

40 pages, 1084 KiB  
Review
Applications of Machine Learning in Cancer Imaging: A Review of Diagnostic Methods for Six Major Cancer Types
by Andreea Ionela Dumachi and Cătălin Buiu
Electronics 2024, 13(23), 4697; https://doi.org/10.3390/electronics13234697 - 27 Nov 2024
Cited by 1
Abstract
Machine learning (ML) methods have revolutionized cancer analysis by enhancing the accuracy of diagnosis, prognosis, and treatment strategies. This paper presents an extensive study on the applications of machine learning in cancer analysis, with a focus on three primary areas: a comparative analysis of medical imaging techniques (including X-rays, mammography, ultrasound, CT, MRI, and PET), various AI and ML techniques (such as deep learning, transfer learning, and ensemble learning), and the challenges and limitations associated with utilizing ML in cancer analysis. The study highlights the potential of ML to improve early detection and patient outcomes while also addressing the technical and practical challenges that must be overcome for its effective clinical integration. Finally, the paper discusses future directions and opportunities for advancing ML applications in cancer research.
