Artificial Intelligence in Biomedical Engineering: Challenges and Developments

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "Medical & Healthcare AI".

Deadline for manuscript submissions: 31 December 2025 | Viewed by 14113

Special Issue Editor


Dr. Ioannis Kakkos
Guest Editor
Department of Electrical and Computing Engineering, National Technical University of Athens, 15780 Athens, Greece
Interests: transmission of nerve stimuli; study of cognitive systems and processes; medical image and signal processing; AI for diagnosis and therapy

Special Issue Information

Dear Colleagues,

This Special Issue titled “Artificial Intelligence in Biomedical Engineering: Challenges and Developments” explores the integration of AI technologies into the field of biomedical engineering. With a focus on applications such as medical image analysis, disease diagnosis, and personalized medicine, this issue provides a platform for researchers to showcase recent advancements and address current challenges. By fostering interdisciplinary dialogue and collaboration, it aims to accelerate innovation in healthcare and contribute to the ongoing evolution of AI-driven biomedical engineering.

Focus: The focus of the Special Issue “Artificial Intelligence in Biomedical Engineering: Challenges and Developments” is the intersection of artificial intelligence (AI) and biomedical engineering: understanding how AI technologies can be applied in this field to address current challenges and foster further development.

Scope: The scope encompasses a wide range of topics within the realm of AI in biomedical engineering. This may include, but is not limited to, the following:

  • Application of machine learning and deep learning algorithms in medical image analysis.
  • AI-driven approaches for disease diagnosis and prognosis.
  • Utilization of AI techniques in healthcare data analytics and personalized medicine.
  • Development of AI-based medical devices and systems.
  • Ethical considerations and societal impacts of AI adoption in healthcare.
  • Explainability in medical deep learning approaches.

Purpose: The purpose of this Special Issue is to provide a platform for researchers, practitioners, and experts in both AI and biomedical engineering to share their insights, experiences, and latest research findings. By doing so, this Special Issue aims to:

  • Highlight the current challenges and opportunities in applying AI to biomedical engineering problems.
  • Showcase recent developments, innovations, and breakthroughs in the field.
  • Foster collaboration and interdisciplinary exchange between researchers in AI and biomedical engineering.
  • Stimulate further research and advancements in this rapidly evolving domain.

This Special Issue will supplement existing literature on AI in biomedical engineering in several ways:

  • Comprehensive Coverage: By addressing a wide range of topics, this Special Issue will provide a comprehensive overview of the latest advancements and challenges in the field, filling potential gaps in existing literature.
  • Cutting-Edge Research: It will feature original research articles, reviews, and case studies that present novel approaches, methodologies, and applications of AI in biomedical engineering, contributing new insights to the existing body of knowledge.
  • Interdisciplinary Perspective: As AI in biomedical engineering requires expertise from both AI and biomedical engineering domains, this Special Issue will facilitate interdisciplinary dialogue and collaboration, bridging the gap between these two fields.
  • Emerging Trends: By focusing on recent developments and emerging trends, this Special Issue will keep readers abreast of the latest advancements and technological innovations in AI-driven healthcare, supplementing the existing literature with up-to-date information.

Dr. Ioannis Kakkos
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • biomedical engineering
  • medical image analysis
  • disease diagnosis
  • personalized medicine
  • machine learning
  • deep learning
  • healthcare data analytics
  • medical devices

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

38 pages, 7211 KiB  
Article
Cross-Context Stress Detection: Evaluating Machine Learning Models on Heterogeneous Stress Scenarios Using EEG Signals
by Omneya Attallah, Mona Mamdouh and Ahmad Al-Kabbany
AI 2025, 6(4), 79; https://doi.org/10.3390/ai6040079 - 14 Apr 2025
Viewed by 401
Abstract
Background/Objectives: This article addresses the challenge of stress detection across diverse contexts. Mental stress is a worldwide concern that substantially affects human health and productivity, rendering it a critical research challenge. Although numerous studies have investigated stress detection through machine learning (ML) techniques, there has been limited research on assessing ML models trained in one context and utilized in another. The objective of ML-based stress detection systems is to create models that generalize across various contexts. Methods: This study examines the generalizability of ML models employing EEG recordings from two stress-inducing contexts: mental arithmetic evaluation (MAE) and virtual reality (VR) gaming. We present a data collection workflow and publicly release a portion of the dataset. Furthermore, we evaluate classical ML models and their generalizability, offering insights into the influence of training data on model performance, data efficiency, and related expenses. EEG data were acquired with MUSE-S™ hardware during stressful MAE and VR gaming scenarios. The methodology entailed preprocessing the EEG signals with wavelet denoising using different mother wavelets, assessing individual and aggregated sensor data, and employing three ML models—linear discriminant analysis (LDA), support vector machine (SVM), and K-nearest neighbors (KNN)—for classification. Results: In Scenario 1, where MAE was employed for training and VR for testing, the TP10 electrode attained an average accuracy of 91.42% across all classifiers and participants, whereas the SVM classifier achieved the highest average accuracy of 95.76% across all participants. In Scenario 2, with VR data used for training and MAE data for testing, the maximum average accuracy achieved was 88.05% with the combination of the TP10, AF8, and TP9 electrodes across all classifiers and participants, whereas the LDA model attained the peak average accuracy of 90.27% among all participants. The optimal performance was achieved with the Symlets-4 and Daubechies-2 mother wavelets for Scenarios 1 and 2, respectively. Conclusions: The results demonstrate that although ML models exhibit generalization capabilities across stressors, their performance is significantly influenced by the alignment between training and testing contexts, as evidenced by systematic cross-context evaluations using an 80/20 train–test split per participant and quantitative metrics (accuracy, precision, recall, and F1-score) averaged across participants. The observed variations in performance across stress scenarios, classifiers, and EEG sensors provide empirical support for this claim. Full article
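
To make the cross-context protocol concrete, the short sketch below (under assumed, synthetic data) denoises single-channel EEG epochs with a mother wavelet, trains classical classifiers on epochs from one stressor (MAE), and tests them on epochs from the other (VR). The signal shapes, thresholding rule, and random labels are illustrative assumptions, not the authors' released pipeline.

    import numpy as np
    import pywt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    def wavelet_denoise(epoch, wavelet="sym4", level=4):
        """Soft-threshold the detail coefficients of one EEG epoch."""
        coeffs = pywt.wavedec(epoch, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(epoch)))
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(epoch)]

    rng = np.random.default_rng(0)

    def make_epochs(n):
        """Placeholder single-channel epochs and stress labels (synthetic)."""
        X = rng.standard_normal((n, 256))
        y = rng.integers(0, 2, n)                           # 0 = relaxed, 1 = stressed
        return np.array([wavelet_denoise(e) for e in X]), y

    X_mae, y_mae = make_epochs(120)                         # training context: MAE
    X_vr, y_vr = make_epochs(60)                            # testing context: VR gaming

    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM", SVC(kernel="rbf")),
                      ("KNN", KNeighborsClassifier(n_neighbors=5))]:
        clf.fit(X_mae, y_mae)
        acc = accuracy_score(y_vr, clf.predict(X_vr))
        print(f"{name} cross-context accuracy: {acc:.3f}")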

16 pages, 3435 KiB  
Article
A Combined Windowing and Deep Learning Model for the Classification of Brain Disorders Based on Electroencephalogram Signals
by Dina Abooelzahab, Nawal Zaher, Abdel Hamid Soliman and Claude Chibelushi
AI 2025, 6(3), 42; https://doi.org/10.3390/ai6030042 - 20 Feb 2025
Viewed by 834
Abstract
Background: The electroencephalogram (EEG) is essential for diagnosing and classifying brain disorders, enabling early medical intervention. Its ability to identify brain abnormalities has increased its clinical use in assessing changes in brain activity. Recent advancements in deep learning have introduced effective methods for interpreting EEG signals, utilizing large datasets for enhanced accuracy. Objective: This study presents a deep learning-based model designed to classify EEG data with better accuracy compared to existing approaches. Methods: The model consists of three key components: data selection, feature extraction, and classification. Data selection employs a windowing technique, while the feature extraction and classification stages use a deep learning framework combining a convolutional neural network (CNN) and a Long Short-Term Memory (LSTM) network. The resulting architecture includes up to 18 layers. The model was evaluated using the Temple University Hospital (TUH) dataset, comprising data from 2785 patients, ensuring its applicability to real-world scenarios. Results: Comparative performance analysis shows that this approach surpasses existing methods in accuracy, sensitivity, and specificity. Conclusions: This study highlights the potential of deep learning in enhancing EEG signal interpretation, offering a pathway to more accurate and efficient diagnoses of brain disorders for clinical applications. Full article
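
As a rough illustration of the windowing-plus-deep-learning idea, the sketch below stacks 1-D convolutions for local feature extraction and an LSTM for temporal modelling over fixed-length EEG windows. The channel count, window length, class count, and layer sizes are assumptions for demonstration, not the published 18-layer architecture.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    n_channels, window_len, n_classes = 19, 250, 3   # assumed EEG montage and window

    model = models.Sequential([
        layers.Input(shape=(window_len, n_channels)),
        layers.Conv1D(32, 7, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.LSTM(64),                             # temporal summary of CNN features
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()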

13 pages, 2472 KiB  
Article
Ischemic Stroke Lesion Segmentation on Multiparametric CT Perfusion Maps Using Deep Neural Network
by Ankit Kandpal, Rakesh Kumar Gupta and Anup Singh
AI 2025, 6(1), 15; https://doi.org/10.3390/ai6010015 - 17 Jan 2025
Viewed by 1164
Abstract
Background: Accurate delineation of lesions in acute ischemic stroke is important for determining the extent of tissue damage and for identifying potentially salvageable brain tissue. Automatic segmentation on CT images is challenging due to the poor contrast-to-noise ratio. Quantitative CT perfusion images improve the estimation of the perfusion deficit regions; however, they are limited by a poor signal-to-noise ratio. This study investigates the potential of deep learning (DL) algorithms to improve the segmentation of ischemic lesions. Methods: This study proposes a novel DL architecture, DenseResU-NetCTPSS, for stroke segmentation using multiparametric CT perfusion images. The proposed network is benchmarked against state-of-the-art DL models. Its performance is assessed using the ISLES-2018 challenge dataset, a widely recognized dataset for stroke segmentation in CT images, and the network was evaluated on both the training and test datasets. Results: The final optimized network takes three image sequences, namely CT, cerebral blood volume (CBV), and time to max (Tmax), as input to perform segmentation. The network achieved Dice scores of 0.65 ± 0.19 and 0.45 ± 0.32 on the training and testing datasets, respectively. The model demonstrated a notable improvement over existing state-of-the-art DL models. Conclusions: The optimized model combines CT, CBV, and Tmax images, enabling automatic lesion identification with reasonable accuracy and aiding radiologists in faster, more objective assessments. Full article
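
A minimal sketch of the multiparametric input idea: the three perfusion-derived maps (CT, CBV, Tmax) are stacked as channels of one input tensor for a segmentation network, and the Dice score is computed against a ground-truth mask. The toy network and random tensors below are placeholders, not DenseResU-NetCTPSS.

    import torch
    import torch.nn as nn

    def dice_score(pred, target, eps=1e-6):
        """Dice overlap between a binary prediction and a ground-truth mask."""
        pred, target = pred.float().flatten(), target.float().flatten()
        inter = (pred * target).sum()
        return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

    class TinySegNet(nn.Module):
        def __init__(self, in_ch=3):                 # 3 channels: CT, CBV, Tmax
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),                 # per-pixel lesion score
            )
        def forward(self, x):
            return torch.sigmoid(self.net(x))

    ct = torch.rand(1, 1, 128, 128)                  # placeholder 2D slices
    cbv = torch.rand(1, 1, 128, 128)
    tmax = torch.rand(1, 1, 128, 128)
    x = torch.cat([ct, cbv, tmax], dim=1)            # multiparametric input

    mask = TinySegNet()(x) > 0.5
    gt = torch.zeros(1, 1, 128, 128)                 # placeholder ground-truth mask
    print("Dice:", dice_score(mask, gt).item())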

17 pages, 863 KiB  
Article
Digital Diagnostics: The Potential of Large Language Models in Recognizing Symptoms of Common Illnesses
by Gaurav Kumar Gupta, Aditi Singh, Sijo Valayakkad Manikandan and Abul Ehtesham
AI 2025, 6(1), 13; https://doi.org/10.3390/ai6010013 - 16 Jan 2025
Cited by 1 | Viewed by 2586
Abstract
This study aimed to evaluate the potential of Large Language Models (LLMs) in healthcare diagnostics, specifically their ability to analyze symptom-based prompts and provide accurate diagnoses. The study focused on models including GPT-4, GPT-4o, Gemini, o1 Preview, and GPT-3.5, assessing their performance in identifying illnesses based solely on provided symptoms. Symptom-based prompts were curated from reputable medical sources to ensure validity and relevance. Each model was tested under controlled conditions to evaluate their diagnostic accuracy, precision, recall, and decision-making capabilities. Specific scenarios were designed to explore their performance in both general and high-stakes diagnostic tasks. Among the models, GPT-4 achieved the highest diagnostic accuracy, demonstrating strong alignment with medical reasoning. Gemini excelled in high-stakes scenarios requiring precise decision-making. GPT-4o and o1 Preview showed balanced performance, effectively handling real-time diagnostic tasks with a focus on both precision and recall. GPT-3.5, though less advanced, proved dependable for general diagnostic tasks. This study highlights the strengths and limitations of LLMs in healthcare diagnostics. While models such as GPT-4 and Gemini exhibit promise, challenges such as privacy compliance, ethical considerations, and the mitigation of inherent biases must be addressed. The findings suggest pathways for responsibly integrating LLMs into diagnostic processes to enhance healthcare outcomes. Full article
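
The evaluation setup described can be sketched as a small harness that sends symptom prompts to a model and scores the returned diagnoses against references. The query_model callable, the two example cases, and the lenient string-match scoring below are hypothetical stand-ins, not the study's curated prompt set or metrics.

    from typing import Callable, List, Tuple

    # Illustrative symptom/diagnosis pairs (not the study's curated cases).
    cases: List[Tuple[str, str]] = [
        ("fever, dry cough, loss of smell", "covid-19"),
        ("sneezing, itchy eyes, runny nose in spring", "allergic rhinitis"),
    ]

    def evaluate(query_model: Callable[[str], str]) -> float:
        """Return exact-match accuracy of a model over the symptom prompts."""
        correct = 0
        for symptoms, reference in cases:
            prompt = (f"A patient reports: {symptoms}. "
                      "What is the most likely diagnosis? Answer briefly.")
            answer = query_model(prompt).lower()
            correct += reference in answer        # lenient string-match scoring
        return correct / len(cases)

    # Trivial mock model standing in for GPT-4, Gemini, and the other LLMs.
    mock = lambda prompt: "This presentation is most consistent with COVID-19."
    print("accuracy:", evaluate(mock))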

20 pages, 3519 KiB  
Article
Attention-Based Hybrid Deep Learning Models for Classifying COVID-19 Genome Sequences
by A. M. Mutawa
AI 2025, 6(1), 4; https://doi.org/10.3390/ai6010004 - 2 Jan 2025
Viewed by 1120
Abstract
Background: Research on COVID-19 genetic sequences remains crucial despite immunization and pandemic-control efforts. SARS-CoV-2, the virus that causes COVID-19, must be understood at the genomic level for several reasons. New viral strains may resist vaccines. Categorizing genetic sequences helps researchers track changes and assess immunization efficacy. Classifying COVID-19 genome sequences alongside those of other viruses helps in understanding the virus’s evolution and its interactions with other illnesses. Methods: This study introduces a deep learning-based COVID-19 genomic sequence categorization approach. Attention-based hybrid deep learning (DL) models categorize 1423 COVID-19 and 11,388 other viral genome sequences. An unknown dataset is also used to assess the models. The five models’ accuracy, F1-score, area under the curve (AUC), precision, Matthews correlation coefficient (MCC), and recall are evaluated. Results: The results indicate that the convolutional neural network (CNN) combined with bidirectional long short-term memory (BLSTM) and an attention layer (CNN-BLSTM-Att) achieved an accuracy of 99.99%, outperforming the other models. On external validation, the model achieved an accuracy of 99.88%. This shows that DL-based approaches with an attention layer can classify COVID-19 genomic sequences with a high degree of accuracy. This method might assist in identifying and classifying COVID-19 virus strains in clinical settings. Immunization has lowered the danger posed by COVID-19, but categorizing its genetic sequences remains crucial for global health efforts to prepare for recurrence or future viral threats. Full article
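
In the spirit of CNN-BLSTM-Att, the sketch below wires a 1-D convolution, a bidirectional LSTM, and a simple self-attention layer over one-hot-encoded genome sequences for binary classification (COVID-19 vs. other virus). Sequence length, filter counts, and the attention mechanism are illustrative assumptions rather than the paper's exact model.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    seq_len = 2000                                   # assumed padded/truncated length
    inputs = layers.Input(shape=(seq_len, 4))        # one-hot A, C, G, T
    x = layers.Conv1D(64, 9, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling1D(4)(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.Attention()([x, x])                   # self-attention over time steps
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)   # COVID-19 vs. other virus

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    model.summary()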

19 pages, 1770 KiB  
Article
Application of Conversational AI Models in Decision Making for Clinical Periodontology: Analysis and Predictive Modeling
by Albert Camlet, Aida Kusiak and Dariusz Świetlik
AI 2025, 6(1), 3; https://doi.org/10.3390/ai6010003 - 2 Jan 2025
Viewed by 1160
Abstract
(1) Background: Language represents a crucial ability of humans, enabling communication and collaboration. ChatGPT is an AI chatbot utilizing the GPT (Generative Pretrained Transformer) language model architecture, enabling the generation of human-like text. The aim of the research was to assess the effectiveness of ChatGPT-3.5 and the latest version, ChatGPT-4, in responding to questions posed within the scope of a periodontology specialization exam. (2) Methods: Two certification examinations in periodontology, available in both English and Polish and comprising 120 multiple-choice questions in a single-best-answer format, were used to evaluate the performance of ChatGPT-3.5 and ChatGPT-4. The questions were additionally assigned to five types according to the subject covered. Logistic regression models were used to estimate the odds of a correct answer with respect to question type, exam session, AI model, and difficulty index. (3) Results: The percentages of correct answers obtained by ChatGPT-3.5 and ChatGPT-4 in the Spring 2023 session in Polish and English were 40.3% vs. 55.5% and 45.4% vs. 68.9%, respectively. The periodontology specialty examination accuracy of ChatGPT-4 was significantly better than that of ChatGPT-3.5 for both sessions (p < 0.05). In the Spring session, ChatGPT-4 was significantly more effective in English (p = 0.0325), whereas no statistically significant difference between languages was observed for ChatGPT-3.5. For both ChatGPT-3.5 and ChatGPT-4, incorrect responses showed notably lower difficulty index values during the Spring 2023 session in English and Polish (p < 0.05). (4) Conclusions: ChatGPT-4 exceeded the 60% threshold and passed the examination in the Spring 2023 session in the English version. In general, ChatGPT-4 performed better than ChatGPT-3.5, achieving significantly better results in the Spring 2023 test in both the Polish and English versions. Full article
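
The logistic regression analysis can be illustrated in a few lines: the probability of a correct answer is modelled from the AI model, question type, and difficulty index, and exponentiated coefficients are read as approximate odds ratios. The toy data frame is fabricated, and scikit-learn's (regularized) estimator is used here for simplicity rather than the exact statistical model reported in the paper.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Toy data: one row per question attempt (fabricated values for illustration).
    df = pd.DataFrame({
        "correct":    [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0],
        "model":      ["gpt35", "gpt35", "gpt4", "gpt4"] * 3,
        "qtype":      ["diagnosis", "therapy", "diagnosis", "therapy"] * 3,
        "difficulty": [0.7, 0.4, 0.8, 0.6, 0.3, 0.9, 0.7, 0.5, 0.2, 0.8, 0.6, 0.7],
    })

    X = pd.get_dummies(df[["model", "qtype"]], drop_first=True)  # indicator coding
    X["difficulty"] = df["difficulty"]
    clf = LogisticRegression(max_iter=1000).fit(X, df["correct"])

    # Exponentiated coefficients approximate the odds ratio for each predictor.
    print(pd.Series(np.exp(clf.coef_[0]), index=X.columns).round(2))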

32 pages, 1448 KiB  
Article
Early Detection and Classification of Diabetic Retinopathy: A Deep Learning Approach
by Mustafa Youldash, Atta Rahman, Manar Alsayed, Abrar Sebiany, Joury Alzayat, Noor Aljishi, Ghaida Alshammari and Mona Alqahtani
AI 2024, 5(4), 2586-2617; https://doi.org/10.3390/ai5040125 - 29 Nov 2024
Cited by 2 | Viewed by 2953
Abstract
Background—Diabetes is a rapidly spreading chronic disease that poses a significant risk to individual health as the population grows. This increase is largely attributed to busy lifestyles, unhealthy eating habits, and a lack of awareness about the disease. Diabetes impacts the human body in various ways, one of the most serious being diabetic retinopathy (DR), which can result in severely reduced vision or even blindness if left untreated. Therefore, an effective early detection and diagnosis system is essential. As part of the Kingdom of Saudi Arabia’s Vision 2030 initiative, which emphasizes the importance of digital transformation in the healthcare sector, it is vital to equip healthcare professionals with effective tools for diagnosing DR. This not only ensures high-quality patient care but also results in cost savings and contributes to the kingdom’s economic growth, as the traditional process of diagnosing diabetic retinopathy can be both time-consuming and expensive. Methods—Artificial intelligence (AI), particularly deep learning, has played an important role in various areas of human life, especially in healthcare. This study leverages AI technology, specifically deep learning, to achieve two primary objectives: binary classification to determine whether a patient has DR, and multi-class classification to identify the stage of DR accurately and in a timely manner. The proposed model utilizes six pre-trained convolutional neural networks (CNNs): EfficientNetB3, EfficientNetV2B1, RegNetX008, RegNetX080, RegNetY006, and RegNetY008. We conducted two experiments. In the first experiment, we trained and evaluated the different models using fundus images from the publicly available APTOS dataset. Results—The RegNetX080 model achieved 98.6% accuracy in binary classification, while the EfficientNetB3 model achieved 85.1% accuracy in multi-class classification. In the second experiment, we trained the models using the APTOS dataset and evaluated them using fundus images from Al-Saif Medical Center in Saudi Arabia; here, EfficientNetB3 achieved 98.2% accuracy in binary classification and EfficientNetV2B1 achieved 84.4% accuracy in multi-class classification. Conclusions—These results indicate the potential of AI technology for the early and accurate detection and classification of DR. The study contributes towards improved healthcare and clinical decision support for the early detection of DR in Saudi Arabia. Full article
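
A typical transfer-learning setup of the kind described is sketched below: a pre-trained CNN backbone (EfficientNetB3, one of the six candidates) is frozen and topped with a new head for binary DR classification from fundus images. Image size, head layers, and the fine-tuning policy are assumptions, not the authors' training configuration.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    base = tf.keras.applications.EfficientNetB3(
        include_top=False, weights="imagenet", input_shape=(300, 300, 3))
    base.trainable = False                              # freeze the backbone initially

    inputs = layers.Input(shape=(300, 300, 3))
    x = base(inputs, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # DR vs. no DR

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # For the multi-class stage, swap the head for Dense(5, activation="softmax")
    # over the DR severity grades and use categorical cross-entropy.
    model.summary()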

19 pages, 2777 KiB  
Article
Generative Models Utilizing Padding Can Efficiently Integrate and Generate Multi-Omics Data
by Hyeon-Su Lee, Seung-Hwan Hong, Gwan-Heon Kim, Hye-Jin You, Eun-Young Lee, Jae-Hwan Jeong, Jin-Woo Ahn and June-Hyuk Kim
AI 2024, 5(3), 1614-1632; https://doi.org/10.3390/ai5030078 - 5 Sep 2024
Viewed by 1745
Abstract
Technological advances in information-processing capacity have enabled integrated analyses (multi-omics) of different omics data types, improving target discovery and clinical diagnosis. This study proposes novel artificial intelligence (AI) learning strategies for incomplete datasets, which are common in omics research. The model comprises (1) a multi-omics generative model based on a variational auto-encoder that learns tumor genetic patterns from different omics data types and (2) an expanded classification model that predicts cancer phenotypes. Padding was applied to replace missing data with virtual data. The embedding data generated by the model accurately classified cancer phenotypes, addressing the class imbalance issue (weighted F1 score: cancer type > 0.95, primary site > 0.92, sample type > 0.97). The classification performance was maintained in the absence of omics data, and the virtual data resembled actual omics data (cosine similarity mRNA gene expression > 0.96, mRNA isoform expression > 0.95, DNA methylation > 0.96). Meanwhile, in the presence of omics data, high-quality data for missing omics types were generated (cosine similarity mRNA gene expression: 0.9702, mRNA isoform expression: 0.9546, DNA methylation: 0.9687). This model can effectively classify cancer phenotypes from incomplete omics data, is robust to data sparsity, and can generate omics data through deep learning, enabling precision medicine. Full article
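
The padding idea can be sketched conceptually: available omics blocks are concatenated into one input vector, a missing block is replaced by zeros (virtual data), and a variational auto-encoder reconstructs all blocks so that absent omics can be generated. The block dimensions and the tiny VAE below are illustrative assumptions only.

    import torch
    import torch.nn as nn

    dims = {"mrna": 200, "isoform": 150, "methylation": 100}  # assumed feature sizes
    total = sum(dims.values())

    class TinyVAE(nn.Module):
        def __init__(self, d_in, d_latent=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU())
            self.mu = nn.Linear(128, d_latent)
            self.logvar = nn.Linear(128, d_latent)
            self.dec = nn.Sequential(nn.Linear(d_latent, 128), nn.ReLU(),
                                     nn.Linear(128, d_in))
        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            return self.dec(z), mu, logvar

    def assemble(sample):
        """Concatenate available omics blocks, padding missing ones with zeros."""
        return torch.cat([sample.get(k, torch.zeros(d)) for k, d in dims.items()])

    sample = {"mrna": torch.rand(200), "isoform": torch.rand(150)}  # methylation missing
    x = assemble(sample).unsqueeze(0)

    recon, mu, logvar = TinyVAE(total)(x)
    generated_methylation = recon[0, -dims["methylation"]:]   # "virtual" missing omics
    print(generated_methylation.shape)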
