AI in Bio and Healthcare Informatics

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "Medical & Healthcare AI".

Deadline for manuscript submissions: 30 April 2026 | Viewed by 13,006

Special Issue Editors


Guest Editor
Division of Electronics Engineering and Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju-si, Republic of Korea
Interests: memristors and memristive systems; neuromorphic engineering; neural networks

Guest Editor
Faculty of Science and Technology, Charles Darwin University, Casuarina, NT 0810, Australia
Interests: AI-based health informatics; blockchain; cybersecurity

Guest Editor
Department of Artificial Intelligence, Kyungdong University, 46 Bongpo 4-gil, Goseong-gun, Wonju 24764, Gangwon-do, Republic of Korea
Interests: computer vision; neural networks; deep learning

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) is transforming bioinformatics and healthcare informatics by providing innovative solutions for analyzing large-scale biological and medical data, enhancing clinical decision-making, personalizing treatment, and optimizing healthcare systems. Natural language processing (NLP), computer vision, deep learning (DL), and machine learning (ML) have facilitated AI-driven breakthroughs in drug discovery, genomics, medical imaging, and healthcare analytics.

This Special Issue emphasizes integrating theoretical AI research into real-world clinical implementation by exploring cutting-edge AI applications and methodologies that address critical bioinformatics and healthcare informatics challenges. Its scope encompasses AI-powered approaches for integrating and analyzing multi-omics data, AI-driven diagnostics, disease prediction models, clinical decision support systems, personalized treatment strategies, medical imaging analysis, precision medicine, public health informatics, computational biology, electronic health record (EHR) analytics, disease surveillance, wearable medical technology, and ethical issues related to the use of AI in healthcare.

Although AI applications such as deep learning for medical imaging and machine learning models for genetic data interpretation are well covered in the literature, there is an increasing demand for a comprehensive perspective that unifies AI applications across the bioinformatics and healthcare domains. The existing literature often focuses on narrow AI-driven solutions, leaving gaps in understanding how AI might be used holistically to integrate multi-omics data, deliver personalized healthcare, optimize hospital workflows, and support epidemiological modeling. This Special Issue aims to fill these gaps by highlighting research that explores both technological advancements in AI and real-world implementation issues such as model interpretability, bias, fairness, privacy, data security, and regulatory constraints. It will also contribute to the literature by discussing the significance of explainable AI (XAI), federated learning, and AI-driven decision support systems in bridging the gap between research and clinical practice.

Dr. Zubaer Ibna Mannan
Dr. Asif Karim
Dr. Nur Alam Md
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence (AI)
  • healthcare informatics
  • bioinformatics
  • machine learning
  • deep neural network
  • medical image analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

25 pages, 3990 KB  
Article
Enhancing Brain Tumor Detection from MRI-Based Images Through Deep Transfer Learning Models
by Awad Bin Naeem, Biswaranjan Senapati and Abdelhamid Zaidi
AI 2025, 6(12), 305; https://doi.org/10.3390/ai6120305 - 26 Nov 2025
Viewed by 719
Abstract
Brain tumors are abnormal tissue growths characterized by uncontrolled and rapid cell proliferation. Early detection of brain tumors is critical for improving patient outcomes, and magnetic resonance imaging (MRI) has become the most widely used modality for diagnosis due to its superior image quality and non-invasive nature. Deep learning, a subset of artificial intelligence, has revolutionized automated medical image analysis by enabling highly accurate and efficient classification tasks. The objective of this study is to develop a robust and effective brain tumor detection system using MRI images through transfer learning. A diagnostic framework is constructed based on convolutional neural networks (CNNs), integrating both a custom sequential CNN model and pretrained architectures, namely VGG16 and EfficientNetB4, trained on the ImageNet dataset. Prior to model training, image preprocessing techniques are applied to enhance feature extraction and overall model performance. This research addresses the common challenge of limited MRI datasets by combining EfficientNetB4 with targeted preprocessing, data augmentation, and an appropriate optimizer selection strategy. The proposed methodology significantly reduces overfitting, improves classification accuracy on small datasets, and remains computationally efficient. Unlike previous studies that focus solely on CNN or VGG16 architectures, this work systematically compares multiple transfer learning models and demonstrates the superiority of EfficientNetB4. Experimental results on the Br35H dataset show that EfficientNetB4, combined with the Adam optimizer, achieves outstanding performance with an accuracy of 99.66%, precision of 99.68%, and an F1-score of 100%. The findings confirm that integrating EfficientNetB4 with dataset-specific preprocessing and transfer learning provides a highly accurate and cost-effective solution for brain tumor classification, facilitating rapid and reliable medical diagnosis.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
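For readers who want a concrete starting point, below is a minimal sketch of the kind of EfficientNetB4 transfer-learning classifier the abstract describes, assuming a Keras pipeline; the head layers, dropout rate, and learning rate are illustrative choices, not the authors' exact configuration.

```python
# Hedged sketch: ImageNet-pretrained EfficientNetB4 with a frozen backbone and
# a small binary head (tumor vs. no tumor). Hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB4

base = EfficientNetB4(weights="imagenet", include_top=False,
                      input_shape=(380, 380, 3))
base.trainable = False  # reuse ImageNet features; fine-tune later if needed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
```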

23 pages, 8644 KB  
Article
Understanding What the Brain Sees: Semantic Recognition from EEG Responses to Visual Stimuli Using Transformer
by Ahmed Fares
AI 2025, 6(11), 288; https://doi.org/10.3390/ai6110288 - 7 Nov 2025
Viewed by 934
Abstract
Understanding how the human brain processes and interprets multimedia content represents a frontier challenge in neuroscience and artificial intelligence. This study introduces a novel approach to decode semantic information from electroencephalogram (EEG) signals recorded during visual stimulus perception. We present DCT-ViT, a spatial–temporal transformer architecture that pioneers automated semantic recognition from brain activity patterns, advancing beyond conventional brain state classification to interpret higher-level cognitive understanding. Our methodology rests on three fundamental innovations. First, we develop a topology-preserving 2D electrode mapping that, combined with temporal indexing, generates 3D spatial–temporal representations capturing both anatomical relationships and dynamic neural correlations. Second, we integrate discrete cosine transform (DCT) embeddings with standard patch and positional embeddings in the transformer architecture, enabling frequency-domain analysis that quantifies activation variability across spectral bands and enhances attention mechanisms. Third, we introduce the Semantics-EEG dataset comprising ten semantic categories extracted from visual stimuli, providing a benchmark for brain-perceived semantic recognition research. The proposed DCT-ViT model achieves 72.28% recognition accuracy on Semantics-EEG, substantially outperforming LSTM-based and attention-augmented recurrent baselines. Ablation studies demonstrate that DCT embeddings contribute meaningfully to model performance, validating their effectiveness in capturing frequency-specific neural signatures. Interpretability analyses reveal neurobiologically plausible attention patterns, with visual semantics activating occipital–parietal regions and abstract concepts engaging frontal–temporal networks, consistent with established cognitive neuroscience models. To address systematic misclassification between perceptually similar categories, we develop a hierarchical classification framework with boundary refinement mechanisms. This approach substantially reduces confusion between overlapping semantic categories, elevating overall accuracy to 76.15%. Robustness evaluations demonstrate superior noise resilience, effective cross-subject generalization, and few-shot transfer capabilities to novel categories. This work establishes the technical foundation for brain–computer interfaces capable of decoding semantic understanding, with implications for assistive technologies, cognitive assessment, and human–AI interaction. Both the Semantics-EEG dataset and the DCT-ViT implementation are publicly released to facilitate reproducibility and advance research in neural semantic decoding.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
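As a rough illustration of the second innovation, the sketch below computes a 2D DCT over each spatial–temporal patch and flattens the coefficients into a token that can be combined with the usual patch and positional embeddings. SciPy and PyTorch are assumed; the function name, shapes, and the `dct_proj` projection are hypothetical, not the released implementation.

```python
# Hedged sketch: frequency-domain (DCT) embeddings for EEG patches.
import numpy as np
import torch
from scipy.fft import dctn

def dct_tokens(patches: np.ndarray) -> torch.Tensor:
    """patches: (n_patches, patch_h, patch_w) spatial-temporal EEG patches."""
    coeffs = dctn(patches, axes=(-2, -1), norm="ortho")  # per-patch 2D DCT
    return torch.from_numpy(coeffs).reshape(len(patches), -1).float()

# Hypothetical use inside the transformer's embedding step:
#   tokens = patch_embed(x) + pos_embed + dct_proj(dct_tokens(x_patches))
```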

17 pages, 1775 KB  
Article
AI-Driven Analysis for Real-Time Detection of Unstained Microscopic Cell Culture Images
by Kathrin Hildebrand, Tatiana Mögele, Dennis Raith, Maria Kling, Anna Rubeck, Stefan Schiele, Eelco Meerdink, Avani Sapre, Jonas Bermeitinger, Martin Trepel and Rainer Claus
AI 2025, 6(10), 271; https://doi.org/10.3390/ai6100271 - 18 Oct 2025
Viewed by 1092
Abstract
Staining-based assays are widely used for cell analysis but are invasive, alter physiology, and prevent longitudinal monitoring. Label-free, morphology-based approaches could enable real-time, non-invasive drug testing, yet detection of subtle and dynamic changes has remained difficult. We developed a deep learning framework for stain-free monitoring of leukemia cell cultures using automated bright-field microscopy in a semi-automated culture system (AICE3, LABMaiTE, Augsburg, Germany). YOLOv8 models were trained on images from K562, HL-60, and Kasumi-1 cells on an NVIDIA DGX A100 GPU and tested in both GPU and CPU environments for real-time performance. Comparative benchmarking against RT-DETR and interpretability analyses using Eigen-CAM and radiomics (RedTell) were performed. YOLOv8 achieved high accuracy (mAP@0.5 > 98%, precision/sensitivity > 97%), with reproducibility confirmed on an independent dataset from a second laboratory and AICE3 setup. The model distinguished between morphologically similar leukemia lines and reliably classified untreated versus differentiated K562 cells (hemin-induced erythroid and PMA-induced megakaryocytic; >95% accuracy). Incorporation of decitabine-treated cells demonstrated applicability to drug testing, revealing treatment-specific and intermediate phenotypes. Longitudinal monitoring captured culture- and time-dependent drift, enabling separation of temporal from drug-induced changes. Radiomics highlighted interpretable features such as size, elongation, and texture, but with lower accuracy than the deep learning approach. To our knowledge, this is the first demonstration that deep learning resolves subtle, drug-induced, and time-dependent morphological changes in unstained leukemia cells in real time. This approach provides a robust, accessible framework for label-free longitudinal drug testing and establishes a foundation for future autonomous, feedback-driven platforms in precision oncology. Ultimately, it may also contribute to more precise and adaptive clinical decision-making, advancing the field of personalized medicine.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
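For orientation, a minimal sketch of a YOLOv8 training and inference loop of the kind described, assuming the ultralytics package; the dataset YAML, image path, and confidence threshold are placeholders rather than the authors' setup.

```python
# Hedged sketch: train a YOLOv8 detector on bright-field cell images and run
# inference on a single frame. Paths and hyperparameters are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained checkpoint as a starting point
model.train(data="leukemia_cells.yaml", epochs=100, imgsz=640)

results = model.predict("brightfield_frame.png", conf=0.25)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, coordinates
```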

15 pages, 3254 KB  
Article
Rodent Social Behavior Recognition Using a Global Context-Aware Vision Transformer Network
by Muhammad Imran Sharif, Doina Caragea and Ahmed Iqbal
AI 2025, 6(10), 264; https://doi.org/10.3390/ai6100264 - 8 Oct 2025
Viewed by 1292
Abstract
Animal behavior recognition is an important research area that provides insights into neural functions, gene mutations, and drug efficacy, among other areas. The manual coding of behaviors based on video recordings is labor-intensive and prone to inconsistencies and human error. Machine learning approaches have been used to automate the analysis of animal behavior with promising results. Our work builds on existing developments in animal behavior analysis and state-of-the-art approaches in computer vision to identify rodent social behaviors. Specifically, our proposed approach, called Vision Transformer for Rat Social Interactions (ViT-RSI), leverages the existing Global Context Vision Transformer (GC-ViT) architecture to identify rat social interactions. Experimental results using five behaviors of the publicly available Rat Social Interaction (RatSI) dataset show that ViT-RSI can accurately identify rat social interaction behaviors. When compared with prior results from the literature, ViT-RSI achieves the best results for four out of five behaviors, specifically the “Approaching”, “Following”, “Moving away”, and “Solitary” behaviors, with F1 scores of 0.81, 0.81, 0.86, and 0.94, respectively.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
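As a sketch of the underlying architecture choice, a Global Context ViT backbone can be instantiated with a five-way head matching the RatSI behavior classes, for example via timm; the model variant and input size below are assumptions, not the paper's exact configuration.

```python
# Hedged sketch: GC-ViT backbone with a 5-class head for behavior recognition.
import timm
import torch

model = timm.create_model("gcvit_tiny", pretrained=True, num_classes=5)

frames = torch.randn(8, 3, 224, 224)  # a toy batch of video frames
logits = model(frames)                # (8, 5) scores, one per behavior class
```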

19 pages, 1858 KB  
Article
Color Space Comparison of Isolated Cervix Cells for Morphology Classification
by Irari Jiménez-López, José E. Valdez-Rodríguez and Marco A. Moreno-Armendáriz
AI 2025, 6(10), 261; https://doi.org/10.3390/ai6100261 - 7 Oct 2025
Viewed by 874
Abstract
Cervical cytology processing involves the morphological analysis of cervical cells to detect abnormalities. In recent years, machine learning and deep learning algorithms have been explored to automate this process. This study investigates the use of color space transformations as a preprocessing technique to reorganize visual information and improve classification performance on isolated cell images. Twelve color space transformations were compared, including RGB, CMYK, HSV, Grayscale, CIELAB, YUV, the individual RGB channels, and combinations of these channels (RG, RB, and GB). Two classification strategies were employed: binary classification (normal vs. abnormal) and five-class classification. The SIPaKMeD dataset was used, with images resized to 256×256 pixels via zero-padding. Data augmentation included random flipping and ±10° rotations applied with a 50% probability, followed by normalization. A custom CNN architecture was developed, comprising four convolutional layers followed by two fully connected layers and an output layer. The model achieved average precision, recall, and F1-score values of 91.39%, 91.34%, and 91.31%, respectively, for the five-class case, and 99.69%, 96.68%, and 96.89%, respectively, for binary classification; these results were compared with a VGG-16 network. Furthermore, CMYK, HSV, and the RG channel combination consistently outperformed the other color spaces, highlighting their potential to enhance classification accuracy.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
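To make the preprocessing concrete, the sketch below shows OpenCV conversions for several of the compared color spaces, a two-channel RG combination built from split channels, and zero-padding to 256×256; the filename is a placeholder, and the paper's exact pipeline may differ.

```python
# Hedged sketch: color space transformations and zero-padding as preprocessing.
import cv2
import numpy as np

img = cv2.imread("isolated_cell.png")          # OpenCV loads images as BGR
hsv  = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lab  = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)    # CIELAB
yuv  = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

b, g, r = cv2.split(img)
rg = np.stack([r, g], axis=-1)                 # RG channel combination

# Zero-pad to 256x256 (assumes the cell crop is smaller than 256x256).
h, w = img.shape[:2]
padded = cv2.copyMakeBorder(img, 0, 256 - h, 0, 256 - w,
                            cv2.BORDER_CONSTANT, value=0)
```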

22 pages, 3435 KB  
Article
An Explainable AI Framework for Stroke Classification Based on CT Brain Images
by Serra Aksoy, Pinar Demircioglu and Ismail Bogrekci
AI 2025, 6(9), 202; https://doi.org/10.3390/ai6090202 - 25 Aug 2025
Viewed by 2130
Abstract
Stroke is a major global cause of death and disability and necessitates both quick diagnosis and treatment within narrow windows of opportunity. CT scanning is still the first-line imaging modality in the acute phase, but correct interpretation may not always be readily available, particularly in resource-poor and rural health systems. Automated stroke classification systems can offer useful diagnostic assistance, but clinical application demands high accuracy and explainable decision-making to maintain physician trust and patient safety. In this paper, a ResNet-18 model was trained on 6653 CT brain scans (hemorrhagic stroke, ischemia, normal) with transfer learning and two-phase fine-tuning, XRAI explainability analysis, and integration into a web-based clinical decision support system. The model achieved 95% test accuracy, with good performance across all classes. This system has great potential for emergency rooms and resource-poor environments, offering quick stroke evaluation when specialists are not available, particularly by rapidly excluding hemorrhagic stroke and assisting in the identification of ischemic stroke, which are critical steps in considering tissue plasminogen activator (tPA) administration within therapeutic windows in eligible patients. The combination of classification, explainability, and clinical interface offers a complete framework for medical AI implementation.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
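The two-phase fine-tuning idea can be sketched as follows: first train only a new three-class head on frozen ImageNet features, then unfreeze the backbone at a lower learning rate. Torchvision is assumed, and the learning rates are illustrative, not the paper's values.

```python
# Hedged sketch: two-phase fine-tuning of ResNet-18 for 3-class stroke CT.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)  # hemorrhage / ischemia / normal

# Phase 1: freeze the backbone, train only the new head.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
opt_head = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Phase 2 (after the head converges): unfreeze everything at a lower rate.
for p in model.parameters():
    p.requires_grad = True
opt_full = torch.optim.Adam(model.parameters(), lr=1e-5)
```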

20 pages, 351 KB  
Article
Multi-Level Depression Severity Detection with Deep Transformers and Enhanced Machine Learning Techniques
by Nisar Hussain, Amna Qasim, Gull Mehak, Muhammad Zain, Grigori Sidorov, Alexander Gelbukh and Olga Kolesnikova
AI 2025, 6(7), 157; https://doi.org/10.3390/ai6070157 - 15 Jul 2025
Cited by 3 | Viewed by 2517
Abstract
Depression is now one of the most common mental health concerns in the digital era, calling for powerful computational tools for detecting it and estimating its severity. This study proposes a multi-level depression severity detection framework for the Reddit social network, classifying posts into four levels: minimum, mild, moderate, and severe. We take a dual approach using classical machine learning (ML) algorithms and recent Transformer-based architectures. For the ML track, we build ten classifiers, namely Logistic Regression, SVM, Naive Bayes, Random Forest, XGBoost, Gradient Boosting, K-NN, Decision Tree, AdaBoost, and Extra Trees, with two word embedding methods, Word2Vec and GloVe, and tune them for mental health text classification. Of these, XGBoost yields the highest F1-score of 94.01 using GloVe embeddings. For the deep learning track, we fine-tune ten Transformer models: BERT, RoBERTa, XLM-RoBERTa, MentalBERT, BioBERT, RoBERTa-large, DistilBERT, DeBERTa, Longformer, and ALBERT. The highest performance was achieved by MentalBERT, with an F1-score of 97.31, followed by RoBERTa (96.27) and RoBERTa-large (96.14). To the best of the authors’ knowledge, these results demonstrate that domain-transferred Transformers outperform non-Transformer-based ML methods in capturing the subtle linguistic cues indicative of different levels of depression, highlighting their potential for fine-grained mental health monitoring in online settings.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
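As a sketch of the ML track's best-performing combination, mean-pooled GloVe vectors can serve as features for an XGBoost classifier; the gensim download name and the placeholder `posts`/`labels` variables are assumptions, not the authors' data handling.

```python
# Hedged sketch: GloVe mean-pooling features + XGBoost for 4-level severity.
import numpy as np
import gensim.downloader as api
from xgboost import XGBClassifier

glove = api.load("glove-wiki-gigaword-100")  # 100-d pretrained GloVe vectors

def embed(text: str) -> np.ndarray:
    """Mean-pool the GloVe vectors of a post's in-vocabulary words."""
    vecs = [glove[w] for w in text.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(100)

# posts: list of Reddit texts; labels: 0=minimum, 1=mild, 2=moderate, 3=severe
X = np.stack([embed(p) for p in posts])
clf = XGBClassifier().fit(X, labels)
```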

32 pages, 6788 KB  
Article
Knee Osteoarthritis Detection and Classification Using Autoencoders and Extreme Learning Machines
by Jarrar Amjad, Muhammad Zaheer Sajid, Ammar Amjad, Muhammad Fareed Hamid, Ayman Youssef and Muhammad Irfan Sharif
AI 2025, 6(7), 151; https://doi.org/10.3390/ai6070151 - 8 Jul 2025
Viewed by 2336
Abstract
Background/Objectives: Knee osteoarthritis (KOA) is a prevalent disorder affecting both older adults and younger individuals, leading to compromised joint function and mobility. Early and accurate detection is critical for effective intervention, as treatment options become increasingly limited as the disease progresses. Traditional diagnostic methods rely heavily on physician expertise and are susceptible to error. Demand has been increasing for deep learning models that automate and improve the accuracy of KOA image classification. In this research, a unique deep learning model is presented that employs autoencoders as the primary mechanism for feature extraction, providing a robust solution for KOA classification. Methods: The proposed model differentiates between KOA-positive and KOA-negative images and categorizes the disease into its primary severity levels, ranging from “healthy knees” (0) to “severe KOA” (4). Symptoms range from typical joint structures to significant joint damage, such as bone spur growth, joint space narrowing, and bone deformation. Two experiments were conducted using different datasets to validate the efficacy of the proposed model. Results: The first experiment used the autoencoder for both feature extraction and classification, achieving an accuracy of 96.68%. A second experiment using autoencoders for feature extraction and an Extreme Learning Machine for classification achieved an even higher accuracy of 98.6%. To test the generalizability of the Knee-DNS system, we utilized the Butterfly iQ+ IoT device for image acquisition and Google Colab’s cloud computing services for data processing. Conclusions: This work represents a pioneering application of autoencoder-based deep learning models in the domain of KOA classification, achieving remarkable accuracy and robustness.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
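The autoencoder-plus-ELM pipeline can be sketched in a few lines: latent features from a trained encoder feed an Extreme Learning Machine, i.e., a random hidden layer whose output weights are solved in closed form. The shapes, hidden size, and `encoder` variable are illustrative assumptions.

```python
# Hedged sketch: Extreme Learning Machine on autoencoder latent features.
import numpy as np

def elm_fit(Z, Y, hidden=512, seed=0):
    """Z: (n, d) latent features; Y: (n, 5) one-hot severity grades 0-4."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(Z.shape[1], hidden))  # random, untrained input weights
    H = np.tanh(Z @ W)                         # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y               # closed-form least-squares fit
    return W, beta

def elm_predict(Z, W, beta):
    return np.argmax(np.tanh(Z @ W) @ beta, axis=1)

# Z = encoder.predict(images)  # latent vectors from the trained autoencoder
```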
