Recent Advances in Machine Learning and Explainable Artificial Intelligence in Biomedical Data Mining, and Disease Diagnosis Frameworks—2nd Edition

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 31 July 2026 | Viewed by 2502

Special Issue Editor


Guest Editor
Department of Artificial Intelligence and Robotics, Sejong University, Seoul 05006, Republic of Korea
Interests: biomedical signal/image processing; computer-aided diagnosis; brain imaging; brain–computer interface; machine learning; artificial intelligence; EEG; fNIRS
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

Following the successful launch of the first Special Issue, titled ‘Recent Advances in Machine Learning and Explainable Artificial Intelligence in Biomedical Data Mining and Disease Diagnosis Frameworks,’ this second edition continues to highlight cutting-edge research at the intersection of biomedical data science, machine learning, and explainable AI.

The rapid evolution of artificial intelligence, data analytics, and related technologies has created new avenues for personalized healthcare. This Special Issue focuses on the most recent advances in machine learning (ML) and explainable artificial intelligence (XAI) for biomedical data mining and disease diagnosis frameworks. It covers the application of advanced ML methods, such as deep learning and ensemble learning, to the analysis of complex biomedical datasets, with particular emphasis on disease diagnosis and prognosis. Another central theme is explainable AI in healthcare: XAI techniques aim to make the decision-making processes of AI systems more transparent and understandable. Potential topics include, but are not limited to, the following: supervised and unsupervised learning; deep learning; XAI in healthcare; system modelling and system design; confidentiality and privacy of health data; biometrics; digital technologies; data mining; computer-aided diagnosis; and brain–computer interfaces. This Special Issue aims to bring together original research and review papers on current breakthroughs in ML and XAI in healthcare.

Prof. Dr. Amad Zafar
Guest Editor

Dr. Sara Tehsin
Guest Editor Assistant
Centre of Real Time Computer Systems, Kaunas University of Technology, 51368 Kaunas, Lithuania
Interests: machine learning; image forensics; AI explainability; autonomous systems

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • EEG
  • fNIRS
  • MRI
  • X-rays
  • biomedical signal and image processing
  • machine learning
  • explainable artificial intelligence
  • biomedical data mining
  • computer-aided diagnosis
  • brain–computer interfaces
  • healthcare

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (4 papers)


Research

27 pages, 32880 KB  
Article
XAI-MedNet: A Next-Generation Explainable AI Framework for Contrast-Enhanced Skin Lesion Classification via Entropy-Controlled Optimization
by Abdulrahman Alabduljabbar, Tallha Akram, Youssef N. Altherwy, Muhammad Adeel Akram and Imran Ashraf
Bioengineering 2026, 13(5), 506; https://doi.org/10.3390/bioengineering13050506 - 27 Apr 2026
Viewed by 507
Abstract
Explainable Artificial Intelligence (XAI) has become a critical requirement in medical image analysis, where transparency and interpretability are essential for clinical trust and decision support. Melanoma is recognized as one of the deadliest types of skin cancer, and its incidence has been rising in recent years. However, detecting this cancer in its initial stages greatly increases patients’ chances of long-term survival. Various computer-based techniques have recently been proposed to diagnose skin lesions at their early stages. Even though the machine learning community has achieved a certain degree of success, high error margins and the limited interpretability of automated systems remain an unresolved research challenge. This study addresses both segmentation and classification, with particular emphasis on two key concepts: (1) improving image quality to maximize distinguishability between foreground and background regions, thereby enhancing visual interpretability and segmentation accuracy, and (2) eliminating redundant and cluttered feature information to generate the most discriminative and compact feature representations. The input images are initially processed using a novel metaheuristic contrast-stretching method to estimate image-specific key parameters, thereby enhancing lesion boundary clarity in a clinically interpretable manner. The improved images are then fed into selected pre-trained deep models, including DenseNet-201, Inception-ResNet v2, and NASNet-Mobile. The features extracted from all pre-trained models are fused to produce resultant vectors, which are then refined using a bio-inspired feature selection method, termed entropy-controlled whale optimization, to retain only the most informative attributes. The selected discriminative feature set is subsequently classified using multiple classifiers. The results indicate that the proposed framework achieves superior performance compared to existing methods in terms of accuracy, sensitivity, specificity, and F1-score. Additionally, it facilitates a more explainable, transparent, and structured diagnostic pipeline appropriate for medical applications.
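The fuse-then-select stage described in this abstract can be sketched in simplified form. The snippet below concatenates per-model deep feature vectors and ranks features by the Shannon entropy of their value histograms, keeping the most informative ones. This plain entropy ranking is a hypothetical stand-in for the paper's entropy-controlled whale optimization, and all function names are illustrative, not the authors' implementation.

```python
import numpy as np

def fuse_features(feature_sets):
    """Concatenate per-model deep feature matrices (e.g., DenseNet-201,
    Inception-ResNet v2, NASNet-Mobile outputs) along the feature axis."""
    return np.concatenate(feature_sets, axis=1)

def entropy_rank_select(X, k, bins=16):
    """Rank each feature column by the Shannon entropy of its histogram
    and keep the k most informative ones. A crude stand-in for the
    paper's entropy-controlled whale optimization."""
    entropies = []
    for j in range(X.shape[1]):
        hist, _ = np.histogram(X[:, j], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]  # drop empty bins so log2 is defined
        entropies.append(-(p * np.log2(p)).sum())
    top = np.sort(np.argsort(entropies)[::-1][:k])  # keep original column order
    return X[:, top], top
```

A metaheuristic such as whale optimization would instead search over feature subsets guided by a classifier-based fitness function; the entropy ranking above only captures the "retain informative attributes" intent.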
16 pages, 1470 KB  
Article
Physics-Guided Deep Learning for Interpretable Biomedical Image Reconstruction and Pattern Recognition in Diagnostic Frameworks
by Akeel Qadir, Saad Arif, Prajoona Valsalan and Osama Khan
Bioengineering 2026, 13(4), 457; https://doi.org/10.3390/bioengineering13040457 - 13 Apr 2026
Viewed by 458
Abstract
This study introduces a physics-guided deep learning architecture designed for the simulation, reconstruction, and pattern recognition of biomedical images. By explicitly integrating physical priors into the learning model, the framework addresses the black-box nature of traditional artificial intelligence (AI) and provides an explainable AI pathway that enhances diagnostic accuracy, robustness, and clinical interpretation. The proposed framework was evaluated through systematic simulation studies involving complex geometric configurations, multimodal physical fields, and noise-corrupted synthetic three-dimensional brain volumes. Quantitative analysis demonstrates consistent improvements in reconstruction fidelity, with the peak signal-to-noise ratio (PSNR) reaching 47 dB and the structural similarity index exceeding 0.90 across all scenarios. Notably, at moderate noise levels (0.05), the framework maintains a PSNR greater than 32 dB, ensuring the structural integrity essential for computer-aided diagnosis. Volumetric brain experiments further reveal a 38–44% reduction in activation localization errors, highlighting the framework’s utility in functional imaging and disease prognosis. By grounding deep learning in physical constraints, this study provides a transparent and robust solution for automated disease classification and advanced biomedical imaging tasks within clinical decision support systems.
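The reconstruction-fidelity figures quoted in this abstract (47 dB, 32 dB) are PSNR values, computed from the mean squared error between a reference image and its reconstruction. A minimal sketch, with the function name and default 8-bit data range as illustrative assumptions:

```python
import numpy as np

def psnr(reference, reconstruction, data_range=255.0):
    """Peak signal-to-noise ratio in dB. Higher is better;
    identical images give infinity."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(reconstruction, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For intuition: a uniform error of one grey level on an 8-bit image gives an MSE of 1 and a PSNR of 20·log10(255) ≈ 48.1 dB, so values in the 30–50 dB range indicate small per-pixel errors.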

31 pages, 3515 KB  
Article
Improving Deep Learning Based Lung Nodule Classification Through Optimized Adaptive Intensity Correction
by Saba Khan, Muhammad Nouman Noor, Haya Mesfer Alshahrani, Wided Bouchelligua and Imran Ashraf
Bioengineering 2026, 13(4), 396; https://doi.org/10.3390/bioengineering13040396 - 29 Mar 2026
Viewed by 525
Abstract
Lung cancer is one of the most common causes of cancer death worldwide, and catching it early through computed tomography (CT) scans can drastically improve survival. However, automated classification of pulmonary nodule candidates is difficult because image intensity varies across scanners and protocols, resulting in inconsistent performance, more false positives (FP), and a ceiling on how well deep learning models perform in an average clinic. In this work, we tackle this by introducing a preprocessing step that corrects intensity differences before feeding images into classification models. We use Contrast-Limited Adaptive Histogram Equalization (CLAHE), with its key parameters tuned automatically via a modified version of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). This boosts local contrast adaptively, keeps important anatomical details intact, and cuts down on noise. We tested the approach on the public LUNA16 dataset, first checking image quality (Peak Signal-to-Noise Ratio (PSNR) around 53 dB and Structural Similarity Index (SSIM) of 0.9, better than standard methods), then training three popular deep models (ResNet-50, EfficientNet-B0, and InceptionV3) with CutMix augmentation for better generalization. On the enhanced images, ResNet-50 achieved up to 99.0% classification accuracy with substantially fewer FP than when using the raw scans. Taken together, these results demonstrate that intelligent, optimized preprocessing can effectively mitigate intensity variations in deep learning pipelines for lung nodule detection, bringing computer-aided diagnosis closer to routine clinical practice.
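CLAHE is a tiled, clip-limited variant of histogram equalization. As a minimal illustration of the underlying idea only (without the tiling and clip limit that CLAHE adds, and without the CMA-ES parameter search used in the paper), a global histogram equalization for 8-bit images can be written as:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit image: map each
    intensity through the normalized cumulative histogram. CLAHE
    additionally clips the histogram and works on local tiles,
    which limits noise amplification in flat regions."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero bin, so the darkest pixel maps to 0
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

In practice, OpenCV's `cv2.createCLAHE(clipLimit=..., tileGridSize=...)` exposes exactly the kind of parameters (clip limit, tile grid) that an optimizer such as CMA-ES could tune per image.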

39 pages, 17119 KB  
Article
Transformer-Based Deep Learning for Population-Scale Retinal Image Screening of Ophthalmic Disorders
by Wiem Abdelbaki, Wided Bouchelligua, Inzamam Mashood Nasir, Sara Tehsin and Hend Alshaya
Bioengineering 2026, 13(4), 377; https://doi.org/10.3390/bioengineering13040377 - 25 Mar 2026
Viewed by 577
Abstract
To perform retinal screening at population scale, an automated procedure is required that is accurate, reproducible, interpretable, and computationally cost-effective. Existing approaches using convolutional or transformer architectures typically do not adequately represent both fine-grained pathology and large-scale retinal context simultaneously, which could adversely affect their reliability in large-scale clinical applications. In this paper, we propose a hierarchical transformer-based screening framework for retinal fundus images that incorporates patch-based tokenization, global transformer encoding, and hierarchical aggregation of contextual information. We also developed a lightweight prediction head that supports screening for both single and multiple diseases. The framework was evaluated using standard screening metrics, robustness analyses, and cross-dataset generalization analyses on two retinopathy image databases: EyePACS and RFMiD. For binary screening of diabetic retinopathy, our method achieved an accuracy of 89.4% and an area under the receiver operating characteristic curve (AUROC) of 93.6% on EyePACS, and attained an accuracy of 95.2% and a macro-averaged F1 score of 82.7% on RFMiD. Our hierarchical transformer achieved improved robustness to degraded images and increased generalizability across datasets compared with current state-of-the-art models. The proposed hierarchical transformer demonstrates strong potential for large-scale retinal screening and provides a promising foundation for future clinically validated deployment.
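Patch-based tokenization, the first stage of the pipeline described in this abstract, splits a fundus image into non-overlapping patches that become the transformer's input tokens. A minimal sketch, with the function name and 16-pixel patch size as illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def patchify(image, patch=16):
    """Split an H x W x C image into non-overlapping patch tokens.
    Returns a (num_patches, patch*patch*C) token matrix, row-major
    over the patch grid, ready for a linear embedding layer."""
    H, W, C = image.shape
    assert H % patch == 0 and W % patch == 0, "image must tile evenly"
    return (image
            .reshape(H // patch, patch, W // patch, patch, C)
            .transpose(0, 2, 1, 3, 4)      # group patch rows/cols together
            .reshape(-1, patch * patch * C))
```

For a 224×224×3 fundus image this yields 196 tokens of dimension 768, the shape a ViT-style encoder expects before linear projection and positional encoding.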
