Artificial Intelligence in Biomedical Engineering: Challenges and Developments

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "Medical & Healthcare AI".

Deadline for manuscript submissions: 31 December 2025 | Viewed by 20,003

Special Issue Editor


Dr. Ioannis Kakkos
Guest Editor
Biomedical Engineering Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, 9, Iroon Polytechniou Street, Zografos, 15780 Athens, Greece
Interests: transmission of nerve stimuli; study of cognitive systems and processes; medical image and signal processing; AI for diagnosis and therapy

Special Issue Information

Dear Colleagues,

This Special Issue titled “Artificial Intelligence in Biomedical Engineering: Challenges and Developments” explores the integration of AI technologies into the field of biomedical engineering. With a focus on applications such as medical image analysis, disease diagnosis, and personalized medicine, this issue provides a platform for researchers to showcase recent advancements and address current challenges. By fostering interdisciplinary dialogue and collaboration, it aims to accelerate innovation in healthcare and contribute to the ongoing evolution of AI-driven biomedical engineering.

Focus: The focus of the Special Issue “Artificial Intelligence in Biomedical Engineering: Challenges and Developments” is to explore the intersection of artificial intelligence (AI) and biomedical engineering. This involves understanding how AI technologies can be applied in the field of biomedical engineering to address various challenges and foster development.

Scope: The scope encompasses a wide range of topics within the realm of AI in biomedical engineering. This may include, but is not limited to, the following:

  • Application of machine learning and deep learning algorithms in medical image analysis.
  • AI-driven approaches for disease diagnosis and prognosis.
  • Utilization of AI techniques in healthcare data analytics and personalized medicine.
  • Development of AI-based medical devices and systems.
  • Ethical considerations and societal impacts of AI adoption in healthcare.
  • Explainability in medical deep learning approaches.

Purpose: The purpose of this Special Issue is to provide a platform for researchers, practitioners, and experts in both AI and biomedical engineering to share their insights, experiences, and latest research findings. By doing so, this Special Issue aims to:

  • Highlight the current challenges and opportunities in applying AI to biomedical engineering problems.
  • Showcase recent developments, innovations, and breakthroughs in the field.
  • Foster collaboration and interdisciplinary exchange between researchers in AI and biomedical engineering.
  • Stimulate further research and advancements in this rapidly evolving domain.

This Special Issue will supplement the existing literature on AI in biomedical engineering in several ways:

  • Comprehensive Coverage: By addressing a wide range of topics, this Special Issue will provide a comprehensive overview of the latest advancements and challenges in the field, filling potential gaps in existing literature.
  • Cutting-Edge Research: It will feature original research articles, reviews, and case studies that present novel approaches, methodologies, and applications of AI in biomedical engineering, contributing new insights to the existing body of knowledge.
  • Interdisciplinary Perspective: As AI in biomedical engineering requires expertise from both AI and biomedical engineering domains, this Special Issue will facilitate interdisciplinary dialogue and collaboration, bridging the gap between these two fields.
  • Emerging Trends: By focusing on recent developments and emerging trends, this Special Issue will keep readers abreast of the latest advancements and technological innovations in AI-driven healthcare, supplementing the existing literature with up-to-date information.

Dr. Ioannis Kakkos
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • biomedical engineering
  • medical image analysis
  • disease diagnosis
  • personalized medicine
  • machine learning
  • deep learning
  • healthcare data analytics
  • medical devices

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Research

29 pages, 6210 KiB  
Article
GT-STAFG: Graph Transformer with Spatiotemporal Attention Fusion Gate for Epileptic Seizure Detection in Imbalanced EEG Data
by Mohamed Sami Nafea and Zool Hilmi Ismail
AI 2025, 6(6), 120; https://doi.org/10.3390/ai6060120 - 9 Jun 2025
Abstract
Background: Electroencephalography (EEG) assists clinicians in diagnosing epileptic seizures by recording brain electrical activity. Existing models process spatiotemporal features inefficiently, either through cascaded spatiotemporal architectures or static functional connectivity, limiting their ability to capture deeper spatial–temporal correlations. Objectives: To address these limitations, we propose a Graph Transformer with Spatiotemporal Attention Fusion Gate (GT-STAFG). Methods: We analyzed 18-channel EEG data sampled at 200 Hz, transformed into the frequency domain, and segmented into 30-second windows. The graph transformer exploits dynamic graph data, while STAFG leverages self-attention and gating mechanisms to capture complex interactions by augmenting graph features with both spatial and temporal information. The clinical significance of the extracted features was validated using the Integrated Gradients attribution method, emphasizing the clinical relevance of the proposed model. Results: GT-STAFG achieves the highest area under the precision–recall curve (AUPRC) scores of 0.605 on the TUSZ dataset and 0.498 on the CHB-MIT dataset, surpassing baseline models and demonstrating strong cross-patient generalization on imbalanced datasets. We applied transfer learning to leverage knowledge from the TUSZ dataset when analyzing the CHB-MIT dataset, yielding an average improvement of 8.3 percentage points in AUPRC. Conclusions: Our approach has the potential to enhance patient outcomes and optimize healthcare utilization. Full article

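The abstract above fixes the signal setup (18 channels at 200 Hz, 30-second windows) and the headline metric (AUPRC on imbalanced data). The sketch below is illustrative only and not the authors' code: it reproduces that evaluation setting on synthetic EEG with a plain random-forest stand-in for the graph transformer. The window length, sampling rate, and channel count come from the abstract; everything else is assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs, win_sec, n_ch = 200, 30, 18                  # sampling rate, window length, channels (per the abstract)
n_windows = 120
win_len = fs * win_sec

# Synthetic stand-in for segmented EEG: (windows, channels, samples per window).
X = rng.standard_normal((n_windows, n_ch, win_len), dtype=np.float32)
y = (rng.random(n_windows) < 0.1).astype(int)    # ~10% seizure windows -> imbalanced labels

# Simple frequency-domain features: mean log power spectrum per channel.
power = np.abs(np.fft.rfft(X, axis=-1)) ** 2
feats = np.log(power + 1e-12).mean(axis=-1)      # shape (windows, channels)

X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Average precision equals the area under the precision-recall curve (AUPRC),
# the metric reported for GT-STAFG on TUSZ and CHB-MIT.
scores = clf.predict_proba(X_te)[:, 1]
print("AUPRC:", round(average_precision_score(y_te, scores), 3))
```
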
33 pages, 1023 KiB  
Article
Artificial Intelligence in Healthcare: How to Develop and Implement Safe, Ethical and Trustworthy AI Systems
by Sasa Jenko, Elsa Papadopoulou, Vikas Kumar, Steven S. Overman, Katarina Krepelkova, Joseph Wilson, Elizabeth L. Dunbar, Carolin Spice and Themis Exarchos
AI 2025, 6(6), 116; https://doi.org/10.3390/ai6060116 - 6 Jun 2025
Viewed by 335
Abstract
Background/Objectives: Artificial intelligence (AI) is increasingly integrated into everyday life, including the complex and highly regulated healthcare sector. Given healthcare’s essential role in safeguarding human life and well-being, AI deployment requires careful oversight to ensure safety, effectiveness, and ethical compliance. This paper aims to examine the current regulatory landscapes governing AI in healthcare, particularly in the European Union (EU) and the United States (USA), and to propose practical tools to support the responsible development and implementation of AI systems. Methods: The study reviews key regulatory frameworks, ethical guidelines, and expert recommendations from international bodies, professional associations, and governmental institutions in the EU and USA. Based on this analysis, the paper develops structured questionnaires tailored for AI developers and implementers to help operationalize regulatory and ethical expectations. Results: The proposed questionnaires address critical gaps in existing frameworks by providing actionable, lifecycle-oriented tools that span AI development, deployment, and clinical use. These instruments support compliance and ethical integrity while promoting transparency and accountability. Conclusions: The structured questionnaires can serve as practical tools for health technology assessments, public procurement, accreditation processes, and training initiatives. By aligning AI system design with regulatory and ethical standards, they contribute to building trustworthy, safe, and innovative AI applications in healthcare. Full article

15 pages, 3856 KiB  
Article
EEG-Based Assessment of Cognitive Resilience via Interpretable Machine Learning Models
by Ioannis Kakkos, Elias Tzavellas, Eleni Feleskoura, Stamatis Mourtakos, Eleftherios Kontopodis, Ioannis Vezakis, Theodosis Kalamatianos, Emmanouil Synadinakis, George K. Matsopoulos, Ioannis Kalatzis, Errikos M. Ventouras and Aikaterini Skouroliakou
AI 2025, 6(6), 112; https://doi.org/10.3390/ai6060112 - 29 May 2025
Viewed by 437
Abstract
Background: Cognitive resilience is a critical factor in high-performance environments such as military operations, where sustained stress can impair attention and decision-making. In the present study, we utilized EEG and machine learning to assess cognitive resilience in elite military personnel. Methods: For this purpose, EEG signals were recorded from elite military personnel during stress-inducing attention-related and emotional tasks. The EEG signals were segmented into two temporal windows corresponding to the initial stress response (baseline) and the adaptive/recovery phase, extracting power spectral density features across delta, theta, alpha, beta, and gamma bands. Different machine learning models (Decision Tree, Random Forest, AdaBoost, XGBoost) were trained to classify temporal phases. Results: XGBoost achieved the highest accuracy (0.95), while Shapley Additive Explanations (SHAP) analysis identified delta and alpha bands (particularly in frontal and parietal regions) as key features associated with adaptive mental states. Conclusions: Our findings indicate that resilience-related neural responses can be successfully distinguished and that interpretable AI frameworks can be used for monitoring cognitive adaptation in high-stress environments. Full article

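For readers unfamiliar with the pipeline described above (band-limited PSD features, XGBoost classification, SHAP attributions), here is a minimal sketch on synthetic EEG epochs. The sampling rate, channel count, and epoch length are assumptions; only the band definitions, the XGBoost classifier, and the SHAP analysis follow the abstract.

```python
import numpy as np
from scipy.signal import welch
from xgboost import XGBClassifier
import shap

rng = np.random.default_rng(1)
fs, n_epochs, n_ch, epoch_len = 256, 300, 8, 256 * 4   # assumed recording parameters
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

# Synthetic EEG epochs labelled 0 = baseline stress response, 1 = adaptive/recovery phase.
eeg = rng.standard_normal((n_epochs, n_ch, epoch_len))
y = rng.integers(0, 2, n_epochs)

# Power spectral density features per channel and canonical frequency band.
freqs, psd = welch(eeg, fs=fs, nperseg=fs, axis=-1)
feats = np.stack([psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
                  for lo, hi in bands.values()], axis=-1).reshape(n_epochs, -1)

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss").fit(feats, y)

# SHAP values indicate which band/channel features drive the phase prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(feats)
print("mean |SHAP| for the first five features:", np.abs(shap_values).mean(axis=0)[:5])
```
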
35 pages, 5913 KiB  
Article
Embedding Fear in Medical AI: A Risk-Averse Framework for Safety and Ethics
by Andrej Thurzo and Vladimír Thurzo
AI 2025, 6(5), 101; https://doi.org/10.3390/ai6050101 - 14 May 2025
Viewed by 769
Abstract
In today’s high-stakes arenas—from healthcare to defense—algorithms are advancing at an unprecedented pace, yet they still lack a crucial element of human decision-making: an instinctive caution that helps prevent harm. Inspired by both the protective reflexes seen in military robotics and the human amygdala’s role in threat detection, we introduce a novel idea: an integrated module that acts as an internal “caution system”. This module does not experience emotion in the human sense; rather, it serves as an embedded safeguard that continuously assesses uncertainty and triggers protective measures whenever potential dangers arise. Our proposed framework combines several established techniques. It uses Bayesian methods to continuously estimate the likelihood of adverse outcomes, applies reinforcement learning strategies with penalties for choices that might lead to harmful results, and incorporates layers of human oversight to review decisions when needed. The result is a system that mirrors the prudence and measured judgment of experienced clinicians—hesitating and recalibrating its actions when the data are ambiguous, much like a doctor would rely on both intuition and expertise to prevent errors. We call on computer scientists, healthcare professionals, and policymakers to collaborate in refining and testing this approach. Through joint research, pilot projects, and robust regulatory guidelines, we aim to ensure that advanced computational systems can combine speed and precision with an inherent predisposition toward protecting human life. Ultimately, by embedding this cautionary module, the framework is expected to significantly reduce AI-induced risks and enhance patient safety and trust in medical AI systems. It seems inevitable for future superintelligent AI systems in medicine to possess emotion-like processes. Full article

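As a toy illustration of the "caution system" idea above (continuous Bayesian estimation of adverse-outcome risk with escalation to human oversight), the sketch below maintains a Beta posterior over the harm rate and defers whenever the upper credible bound exceeds a threshold. The class name, prior, and thresholds are all assumptions, not the authors' framework.

```python
import numpy as np
from scipy.stats import beta

class CautionModule:
    """Toy risk-averse gate: defers to a human when the estimated harm risk is too high or too uncertain."""

    def __init__(self, risk_threshold=0.05, credibility=0.95):
        self.a, self.b = 1.0, 1.0          # Beta(1, 1) prior over the adverse-outcome rate
        self.risk_threshold = risk_threshold
        self.credibility = credibility

    def update(self, harmful: bool):
        # Bayesian update after observing whether an action led to harm.
        self.a += harmful
        self.b += not harmful

    def allow(self) -> bool:
        # Act autonomously only if the upper credible bound on harm risk stays below the threshold.
        upper = beta.ppf(self.credibility, self.a, self.b)
        return upper < self.risk_threshold

gate = CautionModule()
for outcome in [False] * 200 + [True]:     # 200 safe outcomes, then one adverse event
    gate.update(outcome)
print("proceed autonomously:", gate.allow())
```
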
20 pages, 2343 KiB  
Article
Robust Single-Cell RNA-Seq Analysis Using Hyperdimensional Computing: Enhanced Clustering and Classification Methods
by Hossein Mohammadi, Maziyar Baranpouyan, Krishnaprasad Thirunarayan and Lingwei Chen
AI 2025, 6(5), 94; https://doi.org/10.3390/ai6050094 - 1 May 2025
Viewed by 593
Abstract
Background. Single-cell RNA sequencing (scRNA-seq) has transformed genomics by enabling the study of cellular heterogeneity. However, its high dimensionality, noise, and sparsity pose significant challenges for data analysis. Methods. We investigate the use of Hyperdimensional Computing (HDC), a brain-inspired computational framework recognized for its noise robustness and hardware efficiency, to tackle the challenges in scRNA-seq data analysis. We apply HDC to both supervised classification and unsupervised clustering tasks. Results. Our experiments demonstrate that HDC consistently outperforms established methods such as XGBoost, Seurat reference mapping, and scANVI in terms of noise tolerance and scalability. HDC achieves superior accuracy in classification tasks and maintains robust clustering performance across varying noise levels. Conclusions. These results highlight HDC as a promising framework for accurate and efficient single-cell data analysis. Its potential extends to other high-dimensional biological datasets including proteomics, epigenomics, and transcriptomics, with implications for advancing bioinformatics and personalized medicine. Full article

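A compact sketch of the hyperdimensional computing recipe referenced above (random bipolar projection, bundled class prototypes, cosine-similarity classification), run on synthetic expression data. The dimensionalities and the toy data generator are assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_genes, dim = 600, 1000, 5000          # hypervector dimensionality is the key HDC parameter

# Synthetic expression matrix with two noisy "cell types".
labels = rng.integers(0, 2, n_cells)
X = rng.poisson(2.0, (n_cells, n_genes)).astype(float) + labels[:, None] * rng.random(n_genes)

# Encode each cell: random bipolar projection followed by sign binarisation.
projection = rng.choice([-1.0, 1.0], (n_genes, dim))
H = np.sign(X @ projection)

# Class prototypes are bundled (summed) hypervectors of the training cells.
train = np.arange(n_cells) % 5 != 0              # simple 80/20 split
prototypes = np.stack([H[train & (labels == c)].sum(axis=0) for c in (0, 1)])

# Classify held-out cells by cosine similarity to each prototype.
test = ~train
sims = (H[test] @ prototypes.T) / (
    np.linalg.norm(H[test], axis=1, keepdims=True) * np.linalg.norm(prototypes, axis=1))
pred = sims.argmax(axis=1)
print("held-out accuracy:", (pred == labels[test]).mean())
```
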
38 pages, 7211 KiB  
Article
Cross-Context Stress Detection: Evaluating Machine Learning Models on Heterogeneous Stress Scenarios Using EEG Signals
by Omneya Attallah, Mona Mamdouh and Ahmad Al-Kabbany
AI 2025, 6(4), 79; https://doi.org/10.3390/ai6040079 - 14 Apr 2025
Viewed by 819
Abstract
Background/Objectives: This article addresses the challenge of stress detection across diverse contexts. Mental stress is a worldwide concern that substantially affects human health and productivity, rendering it a critical research challenge. Although numerous studies have investigated stress detection through machine learning (ML) techniques, there has been limited research on assessing ML models trained in one context and utilized in another. The objective of ML-based stress detection systems is to create models that generalize across various contexts. Methods: This study examines the generalizability of ML models employing EEG recordings from two stress-inducing contexts: mental arithmetic evaluation (MAE) and virtual reality (VR) gaming. We present a data collection workflow and publicly release a portion of the dataset. Furthermore, we evaluate classical ML models and their generalizability, offering insights into the influence of training data on model performance, data efficiency, and related expenses. EEG data were acquired using MUSE-STM hardware during stressful MAE and VR gaming scenarios. The methodology entailed preprocessing the EEG signals with wavelet denoising using different mother wavelets, assessing individual and aggregated sensor data, and employing three ML models—linear discriminant analysis (LDA), support vector machine (SVM), and K-nearest neighbors (KNN)—for classification purposes. Results: In Scenario 1, where MAE was employed for training and VR for testing, the TP10 electrode attained an average accuracy of 91.42% across all classifiers and participants, whereas the SVM classifier achieved the highest average accuracy of 95.76% across all participants. In Scenario 2, adopting VR data as the training data and MAE data as the testing data, the maximum average accuracy achieved was 88.05% with the combination of TP10, AF8, and TP9 electrodes across all classifiers and participants, whereas the LDA model attained the peak average accuracy of 90.27% among all participants. The optimal performance was achieved with Symlets 4 and Daubechies-2 for Scenarios 1 and 2, respectively. Conclusions: The results demonstrate that although ML models exhibit generalization capabilities across stressors, their performance is significantly influenced by the alignment between training and testing contexts, as evidenced by systematic cross-context evaluations using an 80/20 train–test split per participant and quantitative metrics (accuracy, precision, recall, and F1-score) averaged across participants. The observed variations in performance across stress scenarios, classifiers, and EEG sensors provide empirical support for this claim. Full article

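To make the cross-context protocol concrete, the sketch below denoises synthetic single-electrode trials with a Daubechies-2 wavelet (one of the mother wavelets named above), trains an SVM on one context, and tests on the other. The feature choice and the data generator are assumptions; only the wavelet family, the classifier, and the train-on-one-context/test-on-the-other design follow the abstract.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n_trials, trial_len = 200, 1024                  # assumed trial count and length

def denoise(signal, wavelet="db2"):
    """Soft-threshold wavelet denoising with Daubechies-2 coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def make_context(offset):
    # Synthetic single-electrode trials for one stress-induction context (e.g., MAE or VR):
    # stressed trials (label 1) carry a slightly larger amplitude.
    y = rng.integers(0, 2, n_trials)
    X = rng.standard_normal((n_trials, trial_len)) * (1 + offset * y[:, None])
    X = np.array([denoise(x) for x in X])
    # One crude feature per trial: log variance of the denoised signal.
    return np.log(X.var(axis=1, keepdims=True) + 1e-9), y

X_mae, y_mae = make_context(0.3)   # training context: mental arithmetic evaluation (MAE)
X_vr, y_vr = make_context(0.3)     # testing context: VR gaming

clf = SVC(kernel="rbf").fit(X_mae, y_mae)
print("cross-context accuracy:", accuracy_score(y_vr, clf.predict(X_vr)))
```
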
16 pages, 3435 KiB  
Article
A Combined Windowing and Deep Learning Model for the Classification of Brain Disorders Based on Electroencephalogram Signals
by Dina Abooelzahab, Nawal Zaher, Abdel Hamid Soliman and Claude Chibelushi
AI 2025, 6(3), 42; https://doi.org/10.3390/ai6030042 - 20 Feb 2025
Viewed by 1078
Abstract
Background: The electroencephalogram (EEG) is essential for diagnosing and classifying brain disorders, enabling early medical intervention. Its ability to identify brain abnormalities has increased its clinical use in assessing changes in brain activity. Recent advancements in deep learning have introduced effective methods for interpreting EEG signals, utilizing large datasets for enhanced accuracy. Objective: This study presents a deep learning-based model designed to classify EEG data with better accuracy compared to existing approaches. Methods: The model consists of three key components: data selection, feature extraction, and classification. Data selection employs a windowing technique, while the feature extraction and classification stages use a deep learning framework combining a convolutional neural network (CNN) and a Long Short-Term Memory (LSTM) network. The resulting architecture includes up to 18 layers. The model was evaluated using the Temple University Hospital (TUH) dataset, comprising data from 2785 patients, ensuring its applicability to real-world scenarios. Results: Comparative performance analysis shows that this approach surpasses existing methods in accuracy, sensitivity, and specificity. Conclusions: This study highlights the potential of deep learning in enhancing EEG signal interpretation, offering a pathway to more accurate and efficient diagnoses of brain disorders for clinical applications. Full article

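A minimal Keras sketch of the windowing-plus-CNN/LSTM idea described above, using synthetic windowed EEG; the window length, channel count, number of classes, and layer sizes are assumptions rather than the up-to-18-layer architecture evaluated in the paper.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
n_windows, window_len, n_channels, n_classes = 512, 250, 19, 3   # assumed windowing parameters

# Synthetic windowed EEG: (windows, time steps, channels), with one disorder label per window.
X = rng.standard_normal((n_windows, window_len, n_channels)).astype("float32")
y = rng.integers(0, n_classes, n_windows)

# The CNN front end learns local waveform patterns; the LSTM models their temporal ordering.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_len, n_channels)),
    tf.keras.layers.Conv1D(32, 7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```
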
13 pages, 2472 KiB  
Article
Ischemic Stroke Lesion Segmentation on Multiparametric CT Perfusion Maps Using Deep Neural Network
by Ankit Kandpal, Rakesh Kumar Gupta and Anup Singh
AI 2025, 6(1), 15; https://doi.org/10.3390/ai6010015 - 17 Jan 2025
Viewed by 1447
Abstract
Background: Accurate delineation of lesions in acute ischemic stroke is important for determining the extent of tissue damage and the identification of potentially salvageable brain tissues. Automatic segmentation on CT images is challenging due to the poor contrast-to-noise ratio. Quantitative CT perfusion images improve the estimation of the perfusion deficit regions; however, they are limited by a poor signal-to-noise ratio. The study aims to investigate the potential of deep learning (DL) algorithms for the improved segmentation of ischemic lesions. Methods: This study proposes a novel DL architecture, DenseResU-NetCTPSS, for stroke segmentation using multiparametric CT perfusion images. The proposed network is benchmarked against state-of-the-art DL models. Its performance is assessed using the ISLES-2018 challenge dataset, a widely recognized dataset for stroke segmentation in CT images. The proposed network was evaluated on both training and test datasets. Results: The final optimized network takes three image sequences, namely CT, cerebral blood volume (CBV), and time to max (Tmax), as input to perform segmentation. The network achieved a Dice score of 0.65 ± 0.19 and 0.45 ± 0.32 on the training and testing datasets, respectively. The model demonstrated a notable improvement over existing state-of-the-art DL models. Conclusions: The optimized model combines CT, CBV, and Tmax images, enabling automatic lesion identification with reasonable accuracy and aiding radiologists in faster, more objective assessments. Full article

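The Dice score reported above is straightforward to compute; the snippet below shows the standard definition on toy binary lesion masks. In the paper the masks would come from the network fed with stacked CT, CBV, and Tmax maps; the mask shapes and values here are arbitrary.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary lesion masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(5)
# Toy 2D masks standing in for predicted vs. ground-truth lesion segmentations.
truth = np.zeros((128, 128), dtype=bool)
truth[40:80, 50:90] = True
pred = np.roll(truth, shift=5, axis=0) & (rng.random((128, 128)) > 0.1)
print("Dice:", round(dice_score(pred, truth), 3))
```
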
17 pages, 863 KiB  
Article
Digital Diagnostics: The Potential of Large Language Models in Recognizing Symptoms of Common Illnesses
by Gaurav Kumar Gupta, Aditi Singh, Sijo Valayakkad Manikandan and Abul Ehtesham
AI 2025, 6(1), 13; https://doi.org/10.3390/ai6010013 - 16 Jan 2025
Cited by 1 | Viewed by 3172
Abstract
This study aimed to evaluate the potential of Large Language Models (LLMs) in healthcare diagnostics, specifically their ability to analyze symptom-based prompts and provide accurate diagnoses. The study focused on models including GPT-4, GPT-4o, Gemini, o1 Preview, and GPT-3.5, assessing their performance in identifying illnesses based solely on provided symptoms. Symptom-based prompts were curated from reputable medical sources to ensure validity and relevance. Each model was tested under controlled conditions to evaluate their diagnostic accuracy, precision, recall, and decision-making capabilities. Specific scenarios were designed to explore their performance in both general and high-stakes diagnostic tasks. Among the models, GPT-4 achieved the highest diagnostic accuracy, demonstrating strong alignment with medical reasoning. Gemini excelled in high-stakes scenarios requiring precise decision-making. GPT-4o and o1 Preview showed balanced performance, effectively handling real-time diagnostic tasks with a focus on both precision and recall. GPT-3.5, though less advanced, proved dependable for general diagnostic tasks. This study highlights the strengths and limitations of LLMs in healthcare diagnostics. While models such as GPT-4 and Gemini exhibit promise, challenges such as privacy compliance, ethical considerations, and the mitigation of inherent biases must be addressed. The findings suggest pathways for responsibly integrating LLMs into diagnostic processes to enhance healthcare outcomes. Full article

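A hedged sketch of the kind of evaluation harness such a study implies: symptom prompts are sent to a model-querying callable and scored against reference diagnoses. The query_model callable, the prompt wording, and the three toy cases are placeholders, not the curated prompts or APIs used in the study.

```python
from typing import Callable

# Tiny symptom-to-diagnosis test set (illustrative only, not the curated medical prompts of the paper).
CASES = [
    ("fever, dry cough, loss of smell", "covid-19"),
    ("sneezing, runny nose, itchy eyes", "allergic rhinitis"),
    ("burning urination, frequent urge to urinate", "urinary tract infection"),
]

def evaluate(query_model: Callable[[str], str]) -> float:
    """Fraction of cases where the model's answer names the expected illness."""
    hits = 0
    for symptoms, expected in CASES:
        prompt = f"A patient reports: {symptoms}. Name the single most likely illness."
        answer = query_model(prompt).lower()
        hits += expected in answer
    return hits / len(CASES)

# query_model stands in for whatever chat-completion client is available;
# a rule-based stub keeps the example self-contained and offline.
def stub_model(prompt: str) -> str:
    return "COVID-19" if "loss of smell" in prompt else "allergic rhinitis"

print("accuracy:", evaluate(stub_model))
```
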
20 pages, 3519 KiB  
Article
Attention-Based Hybrid Deep Learning Models for Classifying COVID-19 Genome Sequences
by A. M. Mutawa
AI 2025, 6(1), 4; https://doi.org/10.3390/ai6010004 - 2 Jan 2025
Viewed by 1381
Abstract
Background: COVID-19 genetic sequence research remains crucial despite immunizations and pandemic control. SARS-CoV-2, the virus that causes COVID-19, must be understood genomically for several reasons. New viral strains may resist vaccines. Categorizing genetic sequences helps researchers track changes and assess immunization efficacy. Classifying COVID-19 genome sequences alongside those of other viruses helps clarify its evolution and interactions with other illnesses. Methods: This study introduces a deep learning-based COVID-19 genomic sequence categorization approach. Attention-based hybrid deep learning (DL) models categorize 1423 COVID-19 and 11,388 other viral genome sequences. An unknown dataset is also used to assess the models. The five models’ accuracy, F1-score, area under the curve (AUC), precision, Matthews correlation coefficient (MCC), and recall are evaluated. Results: The results indicate that the convolutional neural network (CNN) with bidirectional long short-term memory (BLSTM) and an attention layer (CNN-BLSTM-Att) achieved an accuracy of 99.99%, which outperformed the other models. On external validation, the model achieved an accuracy of 99.88%. This shows that DL-based approaches with an attention layer can classify COVID-19 genomic sequences with a high degree of accuracy. This method might assist in identifying and classifying COVID-19 virus strains in clinical situations. Immunizations have lowered the danger of COVID-19, but categorizing its genetic sequences remains crucial for global health efforts to plan for recurrence or future viral threats. Full article

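The sketch below assembles a CNN-BLSTM-with-attention classifier in Keras over one-hot-encoded nucleotide sequences, mirroring the CNN-BLSTM-Att idea named above; the sequence length, layer sizes, and synthetic labels are assumptions, not the published architecture or data.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(6)
n_seqs, seq_len = 256, 500                        # toy dataset; real coronavirus genomes are ~30 kb

# One-hot encode synthetic A/C/G/T sequences; label 1 = SARS-CoV-2-like, 0 = other virus.
tokens = rng.integers(0, 4, (n_seqs, seq_len))
X = np.eye(4, dtype="float32")[tokens]            # shape (sequences, length, 4)
y = rng.integers(0, 2, n_seqs)

inputs = tf.keras.Input(shape=(seq_len, 4))
x = tf.keras.layers.Conv1D(64, 9, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling1D(3)(x)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True))(x)
x = tf.keras.layers.Attention()([x, x])           # self-attention over the BLSTM outputs
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```
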
19 pages, 1770 KiB  
Article
Application of Conversational AI Models in Decision Making for Clinical Periodontology: Analysis and Predictive Modeling
by Albert Camlet, Aida Kusiak and Dariusz Świetlik
AI 2025, 6(1), 3; https://doi.org/10.3390/ai6010003 - 2 Jan 2025
Cited by 1 | Viewed by 1344
Abstract
(1) Background: Language represents a crucial ability of humans, enabling communication and collaboration. ChatGPT is an AI chatbot utilizing the GPT (Generative Pretrained Transformer) language model architecture, enabling the generation of human-like text. The aim of the research was to assess the effectiveness of ChatGPT-3.5 and the latest version, ChatGPT-4, in responding to questions posed within the scope of a periodontology specialization exam. (2) Methods: Two certification examinations in periodontology, available in both English and Polish and each comprising 120 multiple-choice questions in a single-best-answer format, were used to evaluate the performance of ChatGPT-3.5 and ChatGPT-4. The questions were additionally assigned to five types according to the subject covered. Logistic regression models were used to estimate the odds of a correct answer with respect to the type of question, exam session, AI model, and difficulty index. (3) Results: The percentages of correct answers obtained by ChatGPT-3.5 and ChatGPT-4 in the Spring 2023 session in Polish and English were 40.3% vs. 55.5% and 45.4% vs. 68.9%, respectively. The periodontology specialty examination test accuracy of ChatGPT-4 was significantly better than that of ChatGPT-3.5 for both sessions (p < 0.05). In the Spring 2023 session, ChatGPT-4 was significantly more effective in the English version (p = 0.0325), whereas no statistically significant difference between languages was observed for ChatGPT-3.5. For both ChatGPT-3.5 and ChatGPT-4, incorrect responses showed notably lower difficulty index values during the Spring 2023 session in English and Polish (p < 0.05). (4) Conclusions: ChatGPT-4 exceeded the 60% threshold and passed the examination in the Spring 2023 session in the English version. In general, ChatGPT-4 performed better than ChatGPT-3.5, achieving significantly better results in the Spring 2023 test in the Polish and English versions. Full article

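For readers who want to see what the logistic-regression analysis above looks like in practice, here is a sketch on synthetic exam-item data: the odds of a correct answer are modelled as a function of model version, exam language, and difficulty index. The simulated coefficients are arbitrary and bear no relation to the reported results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 480                                            # e.g., 120 questions x 2 models x 2 languages (assumed layout)

# Synthetic exam-item data mimicking the study design: a binary "correct" outcome per question attempt.
df = pd.DataFrame({
    "model": rng.choice(["gpt35", "gpt4"], n),
    "language": rng.choice(["english", "polish"], n),
    "difficulty": rng.uniform(0.2, 0.9, n),        # item difficulty index
})
logit = -0.5 + 1.0 * (df.model == "gpt4") + 0.4 * (df.language == "english") + 1.5 * (df.difficulty - 0.5)
df["correct"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Odds of a correct answer as a function of model version, exam language, and difficulty index.
fit = smf.logit("correct ~ model + language + difficulty", data=df).fit(disp=0)
print(fit.summary2().tables[1][["Coef.", "P>|z|"]])
print("odds ratios:\n", np.exp(fit.params))
```
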
32 pages, 1448 KiB  
Article
Early Detection and Classification of Diabetic Retinopathy: A Deep Learning Approach
by Mustafa Youldash, Atta Rahman, Manar Alsayed, Abrar Sebiany, Joury Alzayat, Noor Aljishi, Ghaida Alshammari and Mona Alqahtani
AI 2024, 5(4), 2586-2617; https://doi.org/10.3390/ai5040125 - 29 Nov 2024
Cited by 2 | Viewed by 3685
Abstract
Background—Diabetes is a rapidly spreading chronic disease that poses a significant risk to individual health as the population grows. This increase is largely attributed to busy lifestyles, unhealthy eating habits, and a lack of awareness about the disease. Diabetes impacts the human body in various ways, one of the most serious being diabetic retinopathy (DR), which can result in severely reduced vision or even blindness if left untreated. Therefore, an effective early detection and diagnosis system is essential. As part of the Kingdom of Saudi Arabia’s Vision 2030 initiative, which emphasizes the importance of digital transformation in the healthcare sector, it is vital to equip healthcare professionals with effective tools for diagnosing DR. This not only ensures high-quality patient care but also results in cost savings and contributes to the kingdom’s economic growth, as the traditional process of diagnosing diabetic retinopathy can be both time-consuming and expensive. Methods—Artificial intelligence (AI), particularly deep learning, has played an important role in various areas of human life, especially in healthcare. This study leverages AI technology, specifically deep learning, to achieve two primary objectives: binary classification to determine whether a patient has DR, and multi-class classification to identify the stage of DR accurately and in a timely manner. The proposed model utilizes six pre-trained convolutional neural networks (CNNs): EfficientNetB3, EfficientNetV2B1, RegNetX008, RegNetX080, RegNetY006, and RegNetY008. In our study, we conducted two experiments. In the first experiment, we trained and evaluated different models using fundus images from the publicly available APTOS dataset. Results—The RegNetX080 model achieved 98.6% accuracy in binary classification, while the EfficientNetB3 model achieved 85.1% accuracy in multi-class classification. For the second experiment, we trained the models using the APTOS dataset and evaluated them using fundus images from Al-Saif Medical Center in Saudi Arabia. In this experiment, EfficientNetB3 achieved 98.2% accuracy in binary classification and EfficientNetV2B1 achieved 84.4% accuracy in multi-class classification. Conclusions—These results indicate the potential of AI technology for the early and accurate detection and classification of DR. The study is a potential contribution towards improved healthcare and clinical decision support for the early detection of DR in Saudi Arabia. Full article

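A minimal transfer-learning sketch for one of the six backbones named above (EfficientNetB3), with a frozen ImageNet-pretrained base and a binary DR head. The image size, head design, and the commented-out data-loading path are assumptions, not the authors' training setup.

```python
import tensorflow as tf

IMG_SIZE = (300, 300)                              # EfficientNetB3's nominal input resolution

# Pretrained EfficientNetB3 backbone with a small binary head (DR vs. no DR);
# the ImageNet weights are downloaded on first use.
base = tf.keras.applications.EfficientNetB3(include_top=False, weights="imagenet",
                                            input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False                             # freeze the backbone for the first training stage

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.efficientnet.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

# Fundus images would normally be loaded from a directory tree (path is hypothetical), e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory("aptos/train", image_size=IMG_SIZE, label_mode="binary")
# model.fit(train_ds, epochs=10)
model.summary()
```
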
19 pages, 2777 KiB  
Article
Generative Models Utilizing Padding Can Efficiently Integrate and Generate Multi-Omics Data
by Hyeon-Su Lee, Seung-Hwan Hong, Gwan-Heon Kim, Hye-Jin You, Eun-Young Lee, Jae-Hwan Jeong, Jin-Woo Ahn and June-Hyuk Kim
AI 2024, 5(3), 1614-1632; https://doi.org/10.3390/ai5030078 - 5 Sep 2024
Viewed by 1911
Abstract
Technological advances in information-processing capacity have enabled integrated analyses (multi-omics) of different omics data types, improving target discovery and clinical diagnosis. This study proposes novel artificial intelligence (AI) learning strategies for incomplete datasets, common in omics research. The model comprises (1) a multi-omics generative model based on a variational auto-encoder that learns tumor genetic patterns based on different omics data types and (2) an expanded classification model that predicts cancer phenotypes. Padding was applied to replace missing data with virtual data. The embedding data generated by the model accurately classified cancer phenotypes, addressing the class imbalance issue (weighted F1 score: cancer type > 0.95, primary site > 0.92, sample type > 0.97). The classification performance was maintained in the absence of omics data, and the virtual data resembled actual omics data (cosine similarity mRNA gene expression > 0.96, mRNA isoform expression > 0.95, DNA methylation > 0.96). Meanwhile, in the presence of omics data, high-quality, non-existent omics data were generated (cosine similarity mRNA gene expression: 0.9702, mRNA isoform expression: 0.9546, DNA methylation: 0.9687). This model can effectively classify cancer phenotypes based on incomplete omics data with data sparsity robustness, generating omics data through deep learning and enabling precision medicine. Full article

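The padding idea described above (replacing a missing omics block with virtual data so that every sample stays model-ready) and the cosine-similarity check can be illustrated without the full variational auto-encoder. In the sketch below, the "generated" block is simply a noisy copy standing in for the decoder output, and all feature sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n_samples = 100
dims = {"mrna": 50, "isoform": 40, "methylation": 30}          # toy feature sizes per omics block

# Complete multi-omics profiles; some samples are missing one block.
omics = {k: rng.random((n_samples, d)) for k, d in dims.items()}
missing_methylation = rng.random(n_samples) < 0.3

def build_input(i):
    """Concatenate omics blocks; a missing block is replaced by zero padding (a virtual-data slot)."""
    parts = []
    for name, d in dims.items():
        if name == "methylation" and missing_methylation[i]:
            parts.append(np.zeros(d))                          # padding where real data are absent
        else:
            parts.append(omics[name][i])
    return np.concatenate(parts)

X = np.stack([build_input(i) for i in range(n_samples)])       # model-ready matrix, shape (100, 120)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# The paper scores generated omics against the real values with cosine similarity; here the
# "generated" block is the real block plus noise, standing in for the VAE decoder output.
i = int(np.flatnonzero(missing_methylation)[0])
generated = omics["methylation"][i] + rng.normal(0, 0.05, dims["methylation"])
print("cosine similarity (generated vs. actual methylation):",
      round(cosine(generated, omics["methylation"][i]), 3))
```
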