Biomedical Imaging and Data Analytics for Disease Diagnosis and Treatment, 2nd Edition

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: closed (30 April 2025)

Special Issue Editors


Dr. Cosimo Ieracitano
Guest Editor
DICEAM Department, Mediterranea University of Reggio Calabria, Via Graziella Feo di Vito, 89060 Reggio Calabria, Italy
Interests: information theory; machine learning; deep learning; explainable machine learning; biomedical signal processing; brain computer interface; cybersecurity; computer vision; material informatics

Prof. Dr. Xuejun Zhang
Guest Editor
School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
Interests: computer-aided diagnosis; medical image processing; artificial intelligence

Special Issue Information

Dear Colleagues,

The integration of biomedical imaging techniques and advanced data analytics has revolutionized the field of disease diagnosis and treatment, offering new insights and tools to improve patient outcomes. The timely and accurate diagnosis of diseases plays a crucial role in effective treatment planning and management. Biomedical imaging modalities such as MRI, CT, PET, ultrasound, and optical imaging provide valuable visual information about anatomical structures, physiological functions, and pathological changes within the human body. However, the sheer volume and complexity of imaging data present significant challenges in extracting meaningful information and making accurate diagnoses.

This Special Issue aims to bring together researchers and practitioners from various disciplines to showcase the latest advancements in biomedical imaging and data analytics for disease diagnosis and treatment. We invite original research articles, reviews, and case studies that highlight innovative approaches, novel techniques, and practical applications in this field.

Topics of interest for this Special Issue include, but are not limited to, the following:

- Development of advanced imaging technologies for disease detection and characterization;

- Image reconstruction, enhancement, and segmentation techniques for accurate diagnosis;

- Integration of multimodal imaging for comprehensive disease assessment;

- Machine learning and deep learning algorithms for image analysis and pattern recognition;

- Quantitative imaging biomarkers for disease prognosis and treatment response assessment;

- Data-driven approaches for personalized medicine and precision healthcare.

Dr. Cosimo Ieracitano
Prof. Dr. Xuejun Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, use the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • biomedical engineering
  • medical image processing
  • data analytics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.


Published Papers (13 papers)


Research

19 pages, 4770 KiB  
Article
A Radiomic Model for Gliomas Grade and Patient Survival Prediction
by Ahmad Chaddad, Pingyue Jia, Yan Hu, Yousef Katib, Reem Kateb and Tareef Sahal Daqqaq
Bioengineering 2025, 12(5), 450; https://doi.org/10.3390/bioengineering12050450 - 24 Apr 2025
Abstract
Brain tumors are among the most common malignant tumors of the central nervous system, with high mortality and recurrence rates. Radiomics extracts quantitative features from medical images, converting them into predictive biomarkers for tumor diagnosis, prognosis, and survival analysis. Because of the invasiveness and heterogeneity of brain tumors, overall survival is not necessarily favorable even with timely treatment. Therefore, accurate prediction of brain tumor grade and survival outcomes is important for personalized treatment. In this study, we propose a radiomic model for the non-invasive prediction of brain tumor grade and patient survival outcomes. We used four magnetic resonance imaging (MRI) sequences from 159 patients with glioma. Four classifiers were employed based on whether feature selection was applied. The features were derived from regions of interest identified and corrected either manually or automatically. The extreme gradient boosting (XGB) model with 3860 radiomic features achieved the highest classification performance, with an AUC of 98.20%, in distinguishing low-grade glioma (LGG) from glioblastoma (GBM) images using manually corrected labels. Similarly, the Random Forest (RF) model exhibited the best discrimination between short-term and long-term survival groups, with a p-value < 0.0003, a hazard ratio (HR) of 3.24, and a 95% confidence interval (CI) of 1.63–4.43 based on the ICC features. The experimental findings demonstrate strong classification accuracy and effectively predict survival outcomes in glioma patients.
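The radiomics-plus-classifier pipeline summarized above can be sketched in miniature. This is our own hedged illustration, not the authors' code: scikit-learn's GradientBoostingClassifier stands in for XGBoost, a handful of first-order statistics stand in for the 3860 radiomic features, and synthetic random volumes stand in for MRI tumor ROIs.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def radiomic_features(volume):
    """A small illustrative subset of first-order radiomic features."""
    v = volume.ravel()
    skew = ((v - v.mean()) ** 3).mean() / (v.std() ** 3 + 1e-9)
    return np.array([v.mean(), v.std(), np.median(v), v.min(), v.max(), skew])

# Synthetic "patients": label-1 volumes are brighter and more heterogeneous,
# mimicking (very loosely) a higher-grade tumor phenotype
labels = rng.integers(0, 2, size=200)
X = np.array([radiomic_features(rng.normal(loc=100 + 20 * y,
                                           scale=10 + 10 * y,
                                           size=(8, 8, 8)))
              for y in labels])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")
```

On this toy data the classes are well separated by design, so the AUC is near 1; real radiomic grading depends on feature selection and validation choices the abstract only summarizes.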

17 pages, 41392 KiB  
Article
DermViT: Diagnosis-Guided Vision Transformer for Robust and Efficient Skin Lesion Classification
by Xuejun Zhang, Yehui Liu, Ganxin Ouyang, Wenkang Chen, Aobo Xu, Takeshi Hara, Xiangrong Zhou and Dongbo Wu
Bioengineering 2025, 12(4), 421; https://doi.org/10.3390/bioengineering12040421 - 16 Apr 2025
Abstract
Early diagnosis of skin cancer can significantly improve patient survival. Currently, skin lesion classification faces challenges such as lesion–background semantic entanglement, high intra-class variability, and artifactual interference, while existing classification models lack modeling of physicians’ diagnostic paradigms. To this end, we propose DermViT, a medically driven deep learning architecture that addresses these issues through a medically inspired modular design. DermViT consists of three main modules: (1) the Dermoscopic Context Pyramid (DCP), which mimics the multi-scale observation process of pathological diagnosis to adapt to the high intra-class variability of lesions such as melanoma, and then extracts stable, consistent features at different scales; (2) Dermoscopic Hierarchical Attention (DHA), which reduces computational complexity while focusing intelligently on lesion areas through a coarse screening–fine inspection mechanism; (3) the Dermoscopic Feature Gate (DFG), which simulates the observation–verification procedure of doctors through a convolutional gating mechanism and effectively suppresses semantic leakage from artifact regions. Our experimental results show that DermViT significantly outperforms existing methods in classification accuracy (86.12%, a 7.8% improvement over ViT-Base) and parameter count (40% fewer than ViT-Base) on the ISIC2018 and ISIC2019 datasets. Our visualization results further validate DermViT’s ability to locate lesions under interference conditions. By introducing a modular design that mimics a physician’s observation mode, DermViT achieves more logical feature extraction and decision-making for medical diagnosis, providing an efficient and reliable solution for dermoscopic image analysis.

22 pages, 2872 KiB  
Article
Wavelet-Guided Multi-Scale ConvNeXt for Unsupervised Medical Image Registration
by Xuejun Zhang, Aobo Xu, Ganxin Ouyang, Zhengrong Xu, Shaofei Shen, Wenkang Chen, Mingxian Liang, Guiqi Zhang, Jiashun Wei, Xiangrong Zhou and Dongbo Wu
Bioengineering 2025, 12(4), 406; https://doi.org/10.3390/bioengineering12040406 - 11 Apr 2025
Abstract
Medical image registration is essential in clinical practices such as surgical navigation and image-guided diagnosis. The Transformer architecture of TransMorph demonstrates better accuracy in non-rigid registration tasks. However, its weaker spatial locality priors necessitate large-scale training datasets and a large number of parameters, which conflict with the limited annotated data and real-time demands of clinical workflows. Moreover, traditional downsampling and upsampling always degrade high-frequency anatomical features such as tissue boundaries or small lesions. We propose WaveMorph, a wavelet-guided multi-scale ConvNeXt method for unsupervised medical image registration. A novel multi-scale wavelet feature fusion downsampling module integrates the ConvNeXt architecture with lossless Haar wavelet decomposition to extract and fuse features from eight frequency sub-images using multi-scale convolution kernels. Additionally, a lightweight dynamic upsampling module is introduced in the decoder to reconstruct fine-grained anatomical structures. WaveMorph integrates the inductive bias of CNNs with the advantages of Transformers, effectively mitigating topological distortions caused by spatial information loss while supporting real-time inference. In both atlas-to-patient (IXI) and inter-patient (OASIS) registration tasks, WaveMorph demonstrates state-of-the-art performance, achieving Dice scores of 0.779 ± 0.015 and 0.824 ± 0.021, respectively, and real-time inference (0.072 s/image), validating the effectiveness of our model in medical image registration.
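The "eight frequency sub-images" come from one level of 3D Haar decomposition: averaging and differencing along each of the three axes yields the LLL…HHH sub-bands. A minimal NumPy sketch of that decomposition step (our illustration, not the WaveMorph code) is:

```python
import numpy as np

def haar_split(x, axis):
    """Single-axis Haar step: low-pass (average) and high-pass (difference)."""
    even = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar3d(volume):
    """Return the eight sub-volumes of one 3D Haar decomposition level."""
    bands = {"": volume}
    for axis in range(3):  # apply the low/high split along each axis in turn
        bands = {name + tag: sub
                 for name, band in bands.items()
                 for tag, sub in zip("LH", haar_split(band, axis))}
    return bands  # keys: LLL, LLH, LHL, ..., HHH

vol = np.random.default_rng(0).normal(size=(16, 16, 16))
subs = haar3d(vol)
print(sorted(subs), subs["LLL"].shape)
```

Because the Haar transform is orthonormal, the decomposition is lossless: the total energy of the eight half-resolution sub-volumes equals that of the input, which is what lets such a module replace plain strided downsampling without discarding high-frequency detail.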

27 pages, 3778 KiB  
Article
Patch-Based Texture Feature Extraction Towards Improved Clinical Task Performance
by Tao Lian, Chunyan Deng and Qianjin Feng
Bioengineering 2025, 12(4), 404; https://doi.org/10.3390/bioengineering12040404 - 10 Apr 2025
Abstract
Texture features can capture microstructural patterns and tissue heterogeneity, playing a pivotal role in medical image analysis. Compared to deep learning-based features, texture features offer superior interpretability in clinical applications. However, as conventional texture features focus strictly on voxel-level statistical information, they fail to account for critical spatial heterogeneity between small tissue volumes, which may hold significant importance. To overcome this limitation, we propose novel 3D patch-based texture features and develop a radiomics analysis framework to validate the efficacy of our proposed features. Specifically, multi-scale 3D patches were created to construct patch patterns via k-means clustering. The multi-resolution images were discretized based on labels of the patterns, and then texture features were extracted to quantify the spatial heterogeneity between patches. Twenty-five cross-combination models of five feature selection methods and five classifiers were constructed. Our methodology was evaluated using two independent MRI datasets. Specifically, 145 breast cancer patients were included for axillary lymph node metastasis prediction, and 63 cervical cancer patients were enrolled for histological subtype prediction. Experimental results demonstrated that the proposed 3D patch-based texture features achieved an AUC of 0.76 in the breast cancer lymph node metastasis prediction task and an AUC of 0.94 in cervical cancer histological subtype prediction, outperforming conventional texture features (0.74 and 0.83, respectively). Our proposed features have successfully captured multi-scale patch-level texture representations, which could enhance the application of imaging biomarkers in the precise prediction of cancers and personalized therapeutic interventions.
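The patch-pattern construction step can be sketched as follows. This is a hedged illustration under our own assumptions (one patch scale, a random volume, five clusters), not the paper's framework: 3D patches are clustered with k-means, each patch is relabeled by its cluster ("pattern"), and patch-level texture statistics are then computed on those labels.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
volume = rng.random((24, 24, 24))   # stands in for a discretized MRI volume
P = 4                               # patch edge length (one illustrative scale)

# Tile the volume into non-overlapping P x P x P patches, flattened to vectors
patches = np.array([volume[i:i + P, j:j + P, k:k + P].ravel()
                    for i in range(0, 24, P)
                    for j in range(0, 24, P)
                    for k in range(0, 24, P)])

# Cluster patches into "patterns"; each patch gets a pattern label
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(patches)
pattern_labels = kmeans.labels_

# Pattern histogram: a simple patch-level texture descriptor
hist = np.bincount(pattern_labels, minlength=5) / len(pattern_labels)
print(hist)
```

In the paper, texture features such as co-occurrence statistics are then computed over the pattern-label map rather than raw voxel intensities, which is what captures heterogeneity between, rather than within, small tissue volumes.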

28 pages, 4033 KiB  
Article
Advancing Prostate Cancer Diagnostics: A ConvNeXt Approach to Multi-Class Classification in Underrepresented Populations
by Declan Ikechukwu Emegano, Mubarak Taiwo Mustapha, Ilker Ozsahin, Dilber Uzun Ozsahin and Berna Uzun
Bioengineering 2025, 12(4), 369; https://doi.org/10.3390/bioengineering12040369 - 1 Apr 2025
Abstract
Prostate cancer is a leading cause of cancer-related morbidity and mortality worldwide, with diagnostic challenges magnified in underrepresented regions like sub-Saharan Africa. This study introduces a novel application of ConvNeXt, an advanced convolutional neural network architecture, for multi-class classification of prostate histopathological images into normal, benign, and malignant categories. The dataset, sourced from a tertiary healthcare institution in Nigeria, represents a typically underserved African population, addressing critical disparities in global diagnostic research. We also used the ProstateX dataset (2017) from The Cancer Imaging Archive (TCIA) to validate our result. A comprehensive pipeline was developed, leveraging advanced data augmentation, Grad-CAM for interpretability, and an ablation study to enhance model optimization and robustness. The ConvNeXt model achieved an accuracy of 98%, surpassing the performance of traditional CNNs (ResNet50, 93%; EfficientNet, 94%; DenseNet, 92%) and transformer-based models (ViT, 88%; CaiT, 86%; Swin Transformer, 95%; RegNet, 94%). Also, using the ProstateX dataset, the ConvNeXt model recorded 87.2%, 85.7%, 86.4%, and 0.92 as accuracy, recall, F1 score, and AUC, respectively, as validation results. Its hybrid architecture combines the strengths of CNNs and transformers, enabling superior feature extraction. Grad-CAM visualizations further enhance explainability, bridging the gap between computational predictions and clinical trust. Ablation studies demonstrated the contributions of data augmentation, optimizer selection, and learning rate tuning to model performance, highlighting its robustness and adaptability for deployment in low-resource settings. This study advances equitable health care by addressing the lack of regional representation in diagnostic datasets and employing a clinically aligned three-class classification approach. Combining high performance, interpretability, and scalability, this work establishes a foundation for future research on diverse and underrepresented populations, fostering global inclusivity in cancer diagnostics.

14 pages, 4736 KiB  
Article
Development of Semi-Automated Image-Based Analysis Tool for CBCT Evaluation of Alveolar Ridge Changes After Tooth Extraction
by Anja Heselich, Joanna Śmieszek-Wilczewska, Louisa Boyo, Robert Sader and Shahram Ghanaati
Bioengineering 2025, 12(3), 307; https://doi.org/10.3390/bioengineering12030307 - 18 Mar 2025
Abstract
Following tooth extraction, the bone structure is prone to atrophic changes. Alveolar ridge resorption can compromise subsequent implant treatment not only at the extraction site itself but also by affecting the bone support of adjacent teeth. Various techniques, including the use of bone graft materials or autologous blood concentrates for ridge or socket preservation, aim to counteract this process. The efficacy of such methods can be evaluated non-invasively through radiological analysis of the treated region. However, existing radiological evaluation methods often focus only on isolated areas of the extraction socket, limiting their accuracy in assessing overall bone regeneration. This study introduces a novel, non-invasive, and semi-automated image-based analysis method that enables a more comprehensive evaluation of bone preservation using CBCT data. Developed with the open-source software “Fiji” (v2.15.0; based on ImageJ), the approach assesses bone changes at multiple horizontal and vertical positions, creating a near three-dimensional representation of the resorptive process. By analyzing the entire region around the extraction socket rather than selected regions, this method provides a more precise and reproducible assessment of alveolar ridge preservation. Although the approach requires some processing time and focuses exclusively on radiological evaluation, it offers greater accuracy than conventional methods. Its standardized and objective nature makes it a valuable tool for clinical research, facilitating more reliable comparisons of different socket preservation strategies.

18 pages, 2964 KiB  
Article
Transcranial Direct Current Stimulation Can Modulate Brain Complexity and Connectivity in Children with Autism Spectrum Disorder: Insights from Entropy Analysis
by Jiannan Kang, Pengfei Hao, Haiyan Gu, Yukun Liu, Xiaoli Li and Xinling Geng
Bioengineering 2025, 12(3), 283; https://doi.org/10.3390/bioengineering12030283 - 12 Mar 2025
Abstract
Autism spectrum disorder (ASD) is an atypical neurodevelopmental disorder. Transcranial direct current stimulation (tDCS), a non-invasive brain stimulation technique, has been applied in the treatment of various neurodevelopmental disorders. Entropy analysis methods can quantitatively describe the complexity of EEG signals and information transfer. This study recruited 24 children with ASD and 24 age- and gender-matched typically developing (TD) children, using multiple entropy methods to analyze differences in brain complexity and effective connectivity between the two groups. Furthermore, this study explored the regulatory effect of tDCS on brain complexity and effective connectivity in children with ASD. The results showed that children with ASD had lower brain complexity, with excessive effective connectivity in the δ, θ, and α frequency bands and insufficient effective connectivity in the β frequency band. After tDCS intervention, the brain complexity of children with ASD significantly increased, while effective connectivity in the δ and θ frequency bands significantly decreased. The results from behavioral-scale assessments also indicated positive behavioral changes. These findings suggest that tDCS may improve brain function in children with ASD by regulating brain complexity and effective connectivity, leading to behavioral improvements, and they provide new perspectives and directions for intervention research in ASD. Full article
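One family of measures behind such analyses is sample entropy (SampEn), which quantifies how unpredictable a signal is by comparing template matches of length m and m+1. The sketch below is our own hedged illustration on synthetic signals, not the study's EEG pipeline or its exact entropy variant:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -log of the ratio of (m+1)- to m-length template matches."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()  # tolerance scaled to the signal's variability

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        # Pairwise Chebyshev distance between all templates
        d = np.abs(templates[:, None] - templates[None, :]).max(axis=2)
        return (d <= tol).sum() - len(templates)  # exclude self-matches

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 500)
regular = np.sin(t)           # predictable signal -> low entropy
noisy = rng.normal(size=500)  # irregular signal -> high entropy
print(sample_entropy(regular), sample_entropy(noisy))
```

A regular sine wave yields a much lower SampEn than white noise, mirroring the study's use of entropy as a proxy for EEG complexity; lower entropy in ASD EEG would indicate reduced signal complexity.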

18 pages, 4436 KiB  
Article
QRNet: A Quaternion-Based Retinex Framework for Enhanced Wireless Capsule Endoscopy Image Quality
by Vladimir Frants and Sos Agaian
Bioengineering 2025, 12(3), 239; https://doi.org/10.3390/bioengineering12030239 - 26 Feb 2025
Abstract
Wireless capsule endoscopy (WCE) offers a non-invasive diagnostic alternative for the gastrointestinal tract using a battery-powered capsule. Despite these advantages, WCE encounters issues with video quality and diagnostic accuracy, with reported lesion miss rates of 1–20%. These challenges stem from weak texture characteristics due to non-Lambertian tissue reflections, uneven illumination, and the necessity of color fidelity. Traditional Retinex-based methods used for image enhancement are suboptimal for endoscopy, as they frequently compromise anatomical detail while distorting color. To address these limitations, we introduce QRNet, a novel quaternion-based Retinex framework. QRNet decomposes an image into reflectance and illumination components within hypercomplex space, maintaining inter-channel relationships that preserve color fidelity. A quaternion wavelet attention mechanism refines essential features while suppressing noise, balancing enhancement and fidelity through an innovative loss function. Experiments on the Kvasir-Capsule and Red Lesion Endoscopy datasets demonstrate notable improvements in metrics such as PSNR (+2.3 dB), SSIM (+0.089), and LPIPS (−0.126). Moreover, lesion segmentation accuracy increases by up to 5%, indicating the framework’s potential for improving early-stage lesion detection. Ablation studies highlight the quaternion representation’s pivotal role in maintaining color consistency, confirming the promise of this approach for clinical settings.

16 pages, 3098 KiB  
Article
MWG-UNet++: Hybrid Transformer U-Net Model for Brain Tumor Segmentation in MRI Scans
by Yu Lyu and Xiaolin Tian
Bioengineering 2025, 12(2), 140; https://doi.org/10.3390/bioengineering12020140 - 31 Jan 2025
Cited by 2
Abstract
The accurate segmentation of brain tumors from medical images is critical for diagnosis and treatment planning. However, traditional segmentation methods struggle with complex tumor shapes and inconsistent image quality, which leads to suboptimal results. To address this challenge, we propose the multiple-task Wasserstein Generative Adversarial Network U-shape Network++ (MWG-UNet++) for brain tumor segmentation, integrating a U-Net architecture enhanced with transformer layers and combined with a Wasserstein Generative Adversarial Network (WGAN) for data augmentation. The proposed Residual Attention U-shaped Network (RAUNet) for brain tumor segmentation leverages the robust feature extraction capabilities of U-Net and the global context awareness provided by transformers to improve segmentation accuracy. Incorporating WGAN for data augmentation addresses the challenge of limited medical imaging datasets by generating high-quality synthetic images that enhance model training and generalization. Our comprehensive evaluation demonstrates that this hybrid model significantly improves segmentation performance. RAUNet outperforms the compared approaches by capturing long-range dependencies and accounting for spatial variations. The use of WGANs augments the dataset, resulting in robust training and improved resilience to overfitting. The average evaluation metric for brain tumor segmentation is 0.8965, outperforming the compared methods.

17 pages, 3994 KiB  
Article
Can CT Image Reconstruction Parameters Impact the Predictive Value of Radiomics Features in Grading Pancreatic Neuroendocrine Neoplasms?
by Florent Tixier, Felipe Lopez-Ramirez, Alejandra Blanco, Mohammad Yasrab, Ammar A. Javed, Linda C. Chu, Elliot K. Fishman and Satomi Kawamoto
Bioengineering 2025, 12(1), 80; https://doi.org/10.3390/bioengineering12010080 - 16 Jan 2025
Cited by 1
Abstract
The WHO grading of pancreatic neuroendocrine neoplasms (PanNENs) is essential in patient management and an independent prognostic factor for patient survival. Radiomics features from CE-CT images hold promise for outcome and tumor grade prediction. However, variations in reconstruction parameters can impact the predictive value of radiomics. A total of 127 patients with histopathologically confirmed PanNENs underwent CT scans with filtered back projection (B20f) and iterative (I26f) reconstruction kernels. In all, 3190 radiomic features were extracted from tumors and pancreatic volumes. Wilcoxon paired tests assessed the impact of the reconstruction kernels and the efficiency of ComBat harmonization. SVM models were employed to predict tumor grade using either the entire set of radiomics features or only those identified as harmonizable. The models’ performance was assessed on an independent dataset of 36 patients. Significant differences, after correction for multiple testing, were observed in 69% of features in the pancreatic volume and 51% in the tumor volume between the B20f and I26f kernels. SVM models demonstrated accuracy ranging from 0.67 (95% CI: 0.50–0.81) to 0.83 (95% CI: 0.69–0.94) in distinguishing grade 1 cases from higher grades. Reconstruction kernels alter radiomics features, and models built on the iterative kernel trended towards higher performance. ComBat harmonization mitigates kernel effects, but addressing them is crucial in studies involving data from different kernels.
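The kernel-comparison step — a paired Wilcoxon signed-rank test per feature with multiple-testing correction — can be sketched as follows. The data here are simulated under our own assumptions (one feature deliberately shifted by the "kernel"); only the statistical machinery mirrors the abstract.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_patients, n_features = 40, 5

# Feature matrices for the same patients under two reconstruction kernels
b20f = rng.normal(size=(n_patients, n_features))
i26f = b20f + rng.normal(scale=0.05, size=b20f.shape)  # small paired noise
i26f[:, 0] += 0.5  # simulate one feature systematically altered by the kernel

# Paired Wilcoxon signed-rank test per feature
p_values = np.array([wilcoxon(b20f[:, j], i26f[:, j]).pvalue
                     for j in range(n_features)])

# Bonferroni correction for multiple testing
significant = p_values < 0.05 / n_features
print(significant)
```

Feature 0, whose values differ systematically between kernels, survives correction; features differing only by paired noise generally should not. The study used the same logic across 3190 features and additionally assessed whether ComBat harmonization removed the kernel effect.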

24 pages, 4002 KiB  
Article
An Intelligent Approach for Early and Accurate Prediction of Cardiac Disease Using Hybrid Artificial Intelligence Techniques
by Hazrat Bilal, Yibin Tian, Ahmad Ali, Yar Muhammad, Abid Yahya, Basem Abu Izneid and Inam Ullah
Bioengineering 2024, 11(12), 1290; https://doi.org/10.3390/bioengineering11121290 - 19 Dec 2024
Cited by 8
Abstract
This study proposes a new hybrid machine learning (ML) model for the early and accurate diagnosis of heart disease. The proposed model combines two powerful ensemble ML models, namely the ExtraTreeClassifier (ETC) and XGBoost (XGB), resulting in a hybrid model named ETCXGB. First, all the features of the utilized heart disease dataset were given as input to the ETC model, which processed them by extracting predicted probabilities. The output of the ETC model was then added to the original feature space, producing an enriched feature matrix that was used as input for the XGB model. The new feature matrix was used to train the XGB model, which produces the final prediction of whether a person has cardiac disease, resulting in high diagnostic accuracy. In addition to the proposed model, three other hybrid DL models were also investigated: convolutional neural network + recurrent neural network (CNN-RNN), convolutional neural network + long short-term memory (CNN-LSTM), and convolutional neural network + bidirectional long short-term memory (CNN-BLSTM). The proposed ETCXGB model improved the prediction accuracy by 3.91%, while CNN-RNN, CNN-LSTM, and CNN-BLSTM enhanced the prediction accuracy by 1.95%, 2.44%, and 2.45%, respectively, for the diagnosis of cardiac disease. The simulation outcomes illustrate that the proposed ETCXGB hybrid ML model outperformed the classical ML and DL models in terms of all performance measures. Therefore, using the proposed hybrid ML model for the diagnosis of cardiac disease will help medical practitioners make accurate diagnoses and help decrease the mortality rate caused by cardiac disease.
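The ETCXGB idea — appending the first-stage predicted probabilities to the original feature space before training the second stage — can be sketched in a few lines. This is our illustration, not the paper's code: scikit-learn's GradientBoostingClassifier stands in for XGBoost, and a synthetic dataset stands in for the heart disease data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a tabular heart disease dataset (13 features)
X, y = make_classification(n_samples=600, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: ExtraTrees produces class probabilities
etc = ExtraTreesClassifier(random_state=0).fit(X_tr, y_tr)

# Enrich the feature space with the stage-1 predicted probabilities
X_tr_enriched = np.hstack([X_tr, etc.predict_proba(X_tr)])
X_te_enriched = np.hstack([X_te, etc.predict_proba(X_te)])

# Stage 2: gradient boosting on the enriched matrix gives the final prediction
gb = GradientBoostingClassifier(random_state=0).fit(X_tr_enriched, y_tr)
acc = accuracy_score(y_te, gb.predict(X_te_enriched))
print("accuracy:", round(acc, 3))
```

One caveat worth noting: feeding stage-1 probabilities computed on the training data itself can leak label information (the trees fit their own training set almost perfectly); stacking implementations often use cross-validated out-of-fold predictions instead. The sketch follows the procedure as the abstract describes it.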

15 pages, 1307 KiB  
Article
Coefficient-Shuffled Variable Block Compressed Sensing for Medical Image Compression in Telemedicine Systems
by R Monika, Samiappan Dhanalakshmi, Narayanamoorthi Rajamanickam, Amr Yousef and Roobaea Alroobaea
Bioengineering 2024, 11(11), 1101; https://doi.org/10.3390/bioengineering11111101 - 31 Oct 2024
Cited by 1
Abstract
Medical professionals primarily utilize medical images to detect anomalies within the interior structures and essential organs concealed by the skeletal and dermal layers. The primary purpose of medical imaging is to extract image features for the diagnosis of medical conditions. The processing of [...] Read more.
Medical professionals primarily use medical images to detect anomalies within the interior structures and essential organs concealed by the skeletal and dermal layers. The primary purpose of medical imaging is to extract image features for the diagnosis of medical conditions, and the processing of these images is indispensable for evaluating a patient’s health. However, when monitoring patients over extended periods with specific imaging technologies, a substantial volume of data accumulates daily. It therefore becomes necessary to compress these data to remove redundancy and speed up acquisition, making them suitable for effective analysis and transmission. Compressed Sensing (CS) has recently gained widespread acceptance for rapidly compressing images with a reduced number of samples. Ensuring high-quality image reconstruction with conventional CS and block-based CS (BCS) is a significant challenge, since both rely on randomly selected samples. This challenge can be surmounted by adopting a variable BCS approach that samples selectively from different regions of an image. In this context, this paper introduces a novel CS method based on an energy matrix, namely coefficient-shuffling variable BCS (CSEM-VBCS), tailored for compressing a variety of medical images with balanced sparsity, thereby achieving a substantial compression ratio and good reconstruction quality. Experimental evaluations show a remarkable improvement in the performance metrics of the proposed method compared with contemporary state-of-the-art techniques. Unlike other approaches, CSEM-VBCS uses coefficient shuffling to prioritize regions of interest, allowing more effective compression without compromising image quality. This strategy is especially useful in telemedicine, where bandwidth constraints often limit the transmission of high-resolution medical images. By ensuring faster data acquisition and reduced redundancy, CSEM-VBCS significantly enhances the efficiency of remote patient monitoring and diagnosis. Full article
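The variable-rate idea behind block-based CS can be illustrated briefly. This is a generic sketch of variable BCS on synthetic data, not the CSEM-VBCS method itself: it omits the coefficient shuffling and energy-matrix construction the paper introduces, and the block size, sampling rates, and energy threshold below are arbitrary choices for illustration.

```python
# Illustrative variable block compressed sensing: each block of the
# image is measured with a random Gaussian matrix, and blocks with
# higher energy receive more measurements than low-energy blocks.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))        # stand-in for a medical image
B = 8                               # block size (8x8 -> 64 coefficients)
low_rate, high_rate = 0.2, 0.5      # sampling rates per energy class
threshold = np.median(image ** 2) * B * B  # "typical" block energy

measurements = []
for i in range(0, image.shape[0], B):
    for j in range(0, image.shape[1], B):
        block = image[i:i + B, j:j + B].ravel()
        energy = np.sum(block ** 2)
        rate = high_rate if energy > threshold else low_rate
        m = max(1, int(rate * block.size))       # measurements for this block
        phi = rng.standard_normal((m, block.size))  # random measurement matrix
        measurements.append(phi @ block)         # y = Phi x

total = sum(len(y) for y in measurements)
print(f"compressed {image.size} pixels into {total} measurements")
```

Reconstruction (e.g., by sparse recovery per block) is the expensive half of any CS pipeline and is deliberately left out of this sketch.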
19 pages, 7663 KiB  
Article
Automatic Annotation Diagnostic Framework for Nasopharyngeal Carcinoma via Pathology–Fidelity GAN and Prior-Driven Classification
by Siqi Zeng, Xinwei Li, Yiqing Liu, Qiang Huang and Yonghong He
Bioengineering 2024, 11(7), 739; https://doi.org/10.3390/bioengineering11070739 - 22 Jul 2024
Cited by 1 | Viewed by 1652
Abstract
Non-keratinizing carcinoma is the most common subtype of nasopharyngeal carcinoma (NPC). Its poorly differentiated tumor cells and complex microenvironment present challenges to pathological diagnosis. AI-based pathological models have demonstrated potential in diagnosing NPC, but the reliance on costly manual annotation hinders development. To address these challenges, this paper proposes a deep learning-based framework for diagnosing NPC without manual annotation. The framework includes a novel unpaired generative network and a prior-driven image classification system. With pathology–fidelity constraints, the generative network achieves accurate digital staining from H&E to EBER images. The classification system leverages staining specificity and pathological prior knowledge to annotate training data automatically and to classify images for NPC diagnosis. This study used 232 cases. The experimental results show that the classification system reached 99.59% accuracy in classifying EBER images, closely matching the diagnostic results of pathologists. Using PF-GAN as the backbone of the framework, the system attained a specificity of 0.8826 in generating EBER images, markedly outperforming other GANs (0.6137, 0.5815). Furthermore, the F1-score of the framework for patch-level diagnosis was 0.9143, exceeding those of fully supervised models (0.9103, 0.8777). To further validate its clinical efficacy, the framework was compared with experienced pathologists at the WSI level, showing comparable NPC diagnosis performance. This low-cost and precise diagnostic framework optimizes the early pathological diagnosis of NPC and provides an innovative strategic direction for AI-based cancer diagnosis. Full article
