Advances in Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging—2nd Edition

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: 31 October 2025 | Viewed by 18909

Special Issue Editor


Guest Editor
Department of Pathology and Clinical Bioinformatics, Erasmus Medical Center, 3015 GD Rotterdam, The Netherlands
Interests: deep learning; radiomics; histopathology; medical image analysis; image segmentation; image classification; CAD systems

Special Issue Information

Dear Colleagues,

Cancer ranks as the second most common cause of death in many countries, after cardiovascular disease [1]. Early detection and diagnosis are therefore crucial for improving the 5-year survival rate [2]. Screening plays an essential role in diagnosing disease [3] and requires physicians to interpret large numbers of medical images. Human interpretation, however, is limited by inaccuracy, distraction, and fatigue, which can produce false positives and false negatives and, in turn, improper treatment. Computer-aided diagnosis (CAD) systems address these limitations by acting as a second-opinion tool for ambiguous cases. CAD systems use classical image processing, computer vision, machine learning, and deep learning methods for image analysis. Using image classification or segmentation algorithms, they identify a region of interest (ROI) at a specific location within the given image, or produce an outcome of interest in the form of a label indicating a diagnosis or prognosis. This Special Issue focuses on advanced CAD methods that apply artificial intelligence (AI) to various imaging modalities, such as X-ray, computed tomography (CT), positron emission tomography (PET), ultrasound, MRI, immunohistochemistry, and hematoxylin and eosin (H&E) whole slide images (WSIs), toward a final diagnosis or prognosis.
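As a concrete, drastically simplified illustration of the ROI half of that contract, the hypothetical `find_roi` helper below thresholds an image and returns a bounding box. A real CAD system would use a learned segmentation model, but the output is the same kind of object:

```python
import numpy as np

def find_roi(image, threshold):
    """Return the bounding box (row0, row1, col0, col1) of pixels above
    threshold, or None if nothing exceeds it.

    A toy stand-in for the segmentation step of a CAD pipeline.
    """
    mask = image > threshold
    if not mask.any():
        return None
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return int(r0), int(r1), int(c0), int(c1)

# Synthetic 8x8 "scan" with a bright 2x2 lesion-like blob.
img = np.zeros((8, 8))
img[3:5, 4:6] = 1.0
print(find_roi(img, 0.5))  # -> (3, 4, 4, 5)
```

The downstream classification step would then label the cropped ROI rather than the whole image.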

[1] Huang, X.; Xiao, R.; Pan, S.; Yang, X.; Yuan, W.; Tu, Z.; et al. Uncovering the roles of long non-coding RNAs in cancer stem cells. J. Hematol. Oncol. 2017, 10, 62. doi: 10.1186/s13045-017-0428-9.

[2] Mohaghegh, P.; Rockall, A.G. Imaging strategy for early ovarian cancer: Characterization of adnexal masses with conventional and advanced imaging techniques. Radiographics 2012, 32, 1751–1773.

[3] Sarigoz, T.; Ertan, T.; Topuz, O.; Sevim, Y.; Cihan, Y. Role of digital infrared thermal imaging in the diagnosis of breast mass: A pilot study. Infrared Phys. Technol. 2018, 91, 214–219.

Dr. Farhan Akram
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, use the submission form to submit your manuscript. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cancer diagnosis
  • medical images
  • electronic health records
  • machine learning
  • deep learning
  • artificial intelligence
  • explainable AI models
  • multi-modal analysis
  • federated learning
  • CAD systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (11 papers)


Research


14 pages, 4768 KiB  
Article
Deep Learning with Transfer Learning on Digital Breast Tomosynthesis: A Radiomics-Based Model for Predicting Breast Cancer Risk
by Francesca Galati, Roberto Maroncelli, Chiara De Nardo, Lucia Testa, Gloria Barcaroli, Veronica Rizzo, Giuliana Moffa and Federica Pediconi
Diagnostics 2025, 15(13), 1631; https://doi.org/10.3390/diagnostics15131631 - 26 Jun 2025
Viewed by 131
Abstract
Background: Digital breast tomosynthesis (DBT) is a valuable imaging modality for breast cancer detection; however, its interpretation remains time-consuming and subject to inter-reader variability. This study aimed to develop and evaluate two deep learning (DL) models based on transfer learning for the binary classification of breast lesions (benign vs. malignant) using DBT images to support clinical decision-making and risk stratification. Methods: In this retrospective monocentric study, 184 patients with histologically or clinically confirmed benign (107 cases, 58.2%) or malignant (77 cases, 41.8%) breast lesions were included. Each case underwent DBT with a single lesion manually segmented for radiomic analysis. Two convolutional neural network (CNN) architectures, ResNet50 and DenseNet201, were trained using transfer learning from ImageNet weights. A 10-fold cross-validation strategy with ensemble voting was applied. Model performance was evaluated through ROC–AUC, accuracy, sensitivity, specificity, PPV, and NPV. Results: The ResNet50 model outperformed DenseNet201 across most metrics. On the internal testing set, ResNet50 achieved a ROC–AUC of 63%, accuracy of 60%, sensitivity of 39%, and specificity of 75%. The DenseNet201 model yielded a lower ROC–AUC of 55%, accuracy of 55%, and sensitivity of 24%. Both models demonstrated relatively high specificity, indicating potential utility in ruling out malignancy, though sensitivity remained suboptimal. Conclusions: This study demonstrates the feasibility of using transfer learning-based DL models for lesion classification on DBT. While the overall performance was moderate, the results highlight both the potential and current limitations of AI in breast imaging. Further studies and approaches are warranted to enhance model robustness and clinical applicability. Full article
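Sensitivity, specificity, PPV, NPV, and accuracy all follow mechanically from the confusion matrix; a minimal sketch, with illustrative counts rather than the paper's data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only (not taken from the paper).
m = binary_metrics(tp=30, fp=10, tn=40, fn=20)
print(round(m["sensitivity"], 2), round(m["specificity"], 2))  # 0.6 0.8
```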

18 pages, 7107 KiB  
Article
Scalable Nuclei Detection in HER2-SISH Whole Slide Images via Fine-Tuned Stardist with Expert-Annotated Regions of Interest
by Zaka Ur Rehman, Mohammad Faizal Ahmad Fauzi, Wan Siti Halimatul Munirah Wan Ahmad, Fazly Salleh Abas, Phaik-Leng Cheah, Seow-Fan Chiew and Lai-Meng Looi
Diagnostics 2025, 15(13), 1584; https://doi.org/10.3390/diagnostics15131584 - 22 Jun 2025
Viewed by 234
Abstract
Background: Breast cancer remains a critical health concern worldwide, with histopathological analysis of tissue biopsies serving as the clinical gold standard for diagnosis. Manual evaluation of histopathology images is time-intensive and requires specialized expertise, often resulting in variability in diagnostic outcomes. In silver in situ hybridization (SISH) images, accurate nuclei detection is essential for precise histo-scoring of HER2 gene expression, directly impacting treatment decisions. Methods: This study presents a scalable and automated deep learning framework for nuclei detection in HER2-SISH whole slide images (WSIs), utilizing a novel dataset of 100 expert-marked regions extracted from 20 WSIs collected at the University of Malaya Medical Center (UMMC). The proposed two-stage approach combines a pretrained Stardist model with image processing-based annotations, followed by fine tuning on our domain-specific dataset to improve generalization. Results: The fine-tuned model achieved substantial improvements over both the pretrained Stardist model and a conventional watershed segmentation baseline. Quantitatively, the proposed method attained an average F1-score of 98.1% for visual assessments and 97.4% for expert-marked nuclei, outperforming baseline methods across all metrics. Additionally, training and validation performance curves demonstrate stable model convergence over 100 epochs. Conclusions: These results highlight the robustness of our approach in handling the complex morphological characteristics of SISH-stained nuclei. Our framework supports pathologists by offering reliable, automated nuclei detection in HER2 scoring workflows, contributing to diagnostic consistency and efficiency in clinical pathology. Full article
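Detection scores such as the F1 reported above are typically computed by matching predicted nucleus centroids to expert dot annotations within a distance tolerance. A simplified greedy version of that matching (the tolerance and coordinates below are invented for illustration):

```python
def match_detections(pred, truth, max_dist=5.0):
    """Greedily match predicted centroids to ground-truth centroids within
    max_dist pixels, then report (precision, recall, f1)."""
    unmatched = list(truth)
    tp = 0
    for p in pred:
        best, best_d = None, max_dist
        for t in unmatched:
            d = ((p[0] - t[0]) ** 2 + (p[1] - t[1]) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = t, d
        if best is not None:
            unmatched.remove(best)   # each truth nucleus matched at most once
            tp += 1
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

truth = [(10, 10), (20, 20), (30, 30)]
pred = [(11, 10), (21, 19), (50, 50)]   # two hits, one false positive
print(match_detections(pred, truth))
```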

11 pages, 2749 KiB  
Article
The Validation of an Artificial Intelligence-Based Software for the Detection and Numbering of Primary Teeth on Panoramic Radiographs
by Heba H. Bakhsh, Dur Alomair, Nada Ahmed AlShehri, Alia U. Alturki, Eman Allam and Sara M. ElKhateeb
Diagnostics 2025, 15(12), 1489; https://doi.org/10.3390/diagnostics15121489 - 11 Jun 2025
Viewed by 327
Abstract
Background: Dental radiographs play a crucial role in diagnosis and treatment planning. With the rise in digital imaging, there is growing interest in leveraging artificial intelligence (AI) to support clinical decision-making. AI technologies can enhance diagnostic accuracy by automating tasks like identifying and locating dental structures. The aim of the current study was to assess and validate the accuracy of an AI-powered application in the detection and numbering of primary teeth on panoramic radiographs. Methods: This study examined 598 archived panoramic radiographs of subjects aged 4–14 years old. Images with poor diagnostic quality were excluded. Three experienced clinicians independently assessed each image to establish the ground truth for primary teeth identification. The same radiographs were then evaluated using EM2AI, an AI-based diagnostic software for the automatic detection and numbering of primary teeth. The AI’s performance was assessed by comparing its output to the ground truth using sensitivity, specificity, predictive values, accuracy, and the Kappa coefficient. Results: EM2AI demonstrated high overall performance in detecting and numbering primary teeth in mixed dentition, with an accuracy of 0.98, a sensitivity of 0.97, a specificity of 0.99, and a Kappa coefficient of 0.96. Detection accuracy for individual teeth ranged from 0.96 to 0.99. The highest sensitivity (0.99) was observed in detecting upper right canines and primary molars, while the lowest sensitivity (0.79–0.85) occurred in detecting lower incisors and the upper left first molar. Conclusions: The AI module demonstrated high accuracy in the automatic detection of primary teeth presence and numbering in panoramic images, with performance metrics exceeding 90%. With further validation, such systems could support automated dental charting, improve electronic dental records, and aid clinical decision-making. Full article
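The Kappa coefficient used to compare the AI output with the ground truth can be computed directly from two label sequences; a small self-contained sketch with invented labels:

```python
def cohens_kappa(a, b):
    """Cohen's kappa between two label sequences: observed agreement
    corrected for the agreement expected by chance."""
    assert len(a) == len(b)
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n                      # observed
    pe = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)  # chance
    return (po - pe) / (1 - pe)

# Hypothetical per-tooth presence calls by a clinician and the AI.
rater = ["present"] * 8 + ["absent"] * 2
ai    = ["present"] * 7 + ["absent"] * 3
print(round(cohens_kappa(rater, ai), 3))
```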

22 pages, 5910 KiB  
Article
Diabetic Foot Ulcers Detection Model Using a Hybrid Convolutional Neural Networks–Vision Transformers
by Abdul Rahaman Wahab Sait and Ramprasad Nagaraj
Diagnostics 2025, 15(6), 736; https://doi.org/10.3390/diagnostics15060736 - 15 Mar 2025
Cited by 1 | Viewed by 844
Abstract
Background: Diabetic foot ulcers (DFUs) are severe and common complications of diabetes. Early and accurate DFUs classification is essential for effective treatment and prevention of severe complications. The existing DFUs classification methods have certain limitations, including limited performance, poor generalization, and lack of interpretability, restricting their use in clinical settings. Objectives: To overcome these limitations, this study proposes an innovative model to achieve robust and interpretable DFUs classification. Methodology: The proposed DFUs classification integrates MobileNet V3-SWIN, LeViT-Performer, tensor-based feature fusion, and ensemble splines-based Kolmogorov–Arnold Networks (KANs) with Shapley Additive exPlanations (SHAP) values to classify DFUs severities into ischemia and infection classes. In order to train and generalize the proposed model, the authors utilized the DFUs challenge (DFUC) 2021 and 2020 datasets. Findings: The proposed model achieved state-of-the-art performance, outperforming the existing approaches by obtaining an average accuracy of 98.7%, precision of 97.3%, recall of 97.4%, and F1-score of 97.3% on DFUC 2021. On DFUC 2020, it maintained a robust generalization accuracy of 96.9%, demonstrating superiority over standalone and baseline models. The study findings have significant implications for research and clinical practice. The findings offer an effective platform for scalable and explainable automated DFUs treatment and management, improving patient outcomes and clinical practices. Full article

25 pages, 6611 KiB  
Article
Leveraging Attention-Based Deep Learning in Binary Classification for Early-Stage Breast Cancer Diagnosis
by Lama A. Aldakhil, Shuaa S. Alharbi, Abdulrahman Aloraini and Haifa F. Alhasson
Diagnostics 2025, 15(6), 718; https://doi.org/10.3390/diagnostics15060718 - 13 Mar 2025
Viewed by 872
Abstract
Background: Breast cancer diagnosis is a global health challenge, requiring innovative methods to improve early detection accuracy and efficiency. This study investigates the integration of attention-based deep learning models with traditional machine learning (ML) methods to classify histopathological breast cancer images. Specifically, the Efficient Channel-Spatial Attention Network (ECSAnet) is utilized, optimized for binary classification by leveraging advanced attention mechanisms to enhance feature extraction across spatial and channel dimensions. Methods: Experiments were conducted using the BreakHis dataset, which includes histopathological images of breast tumors categorized as benign or malignant across four magnification levels: 40×, 100×, 200×, and 400×. ECSAnet was evaluated independently and in combination with traditional ML models, such as Decision Trees and Logistic Regression. The study also analyzed the impact of magnification levels on classification accuracy, robustness, and generalization. Results: Lower magnification levels consistently outperformed higher magnifications in terms of accuracy, robustness, and generalization, particularly for binary classification tasks. Additionally, combining ECSAnet with traditional ML models improved classification performance, especially at lower magnifications. These findings highlight the diagnostic strengths of attention-based models and the importance of aligning magnification levels with diagnostic objectives. Conclusions: This study demonstrates the potential of attention-based deep learning models, such as ECSAnet, to improve breast cancer diagnostics when integrated with traditional ML methods. The findings emphasize the diagnostic utility of lower magnifications and provide a foundation for future research into hybrid architectures and multimodal approaches to further enhance breast cancer diagnosis. Full article

18 pages, 1575 KiB  
Article
MammoViT: A Custom Vision Transformer Architecture for Accurate BIRADS Classification in Mammogram Analysis
by Abdullah G. M. Al Mansour, Faisal Alshomrani, Abdullah Alfahaid and Abdulaziz T. M. Almutairi
Diagnostics 2025, 15(3), 285; https://doi.org/10.3390/diagnostics15030285 - 25 Jan 2025
Cited by 1 | Viewed by 1804
Abstract
Background: Breast cancer screening through mammography interpretation is crucial for early detection and improved patient outcomes. However, the manual classification of mammograms using the BIRADS (Breast Imaging-Reporting and Data System) remains challenging due to subtle imaging features, inter-reader variability, and increasing radiologist workload. Traditional computer-aided detection systems often struggle with complex feature extraction and contextual understanding of mammographic abnormalities. To address these limitations, this study proposes MammoViT, a novel hybrid deep learning framework that leverages both ResNet50’s hierarchical feature extraction capabilities and Vision Transformer’s ability to capture long-range dependencies in images. Methods: We implemented a multi-stage approach utilizing a pre-trained ResNet50 model for initial feature extraction from mammogram images. To address the significant class imbalance in our four-class BIRADS dataset, we applied SMOTE (Synthetic Minority Over-sampling Technique) to generate synthetic samples for minority classes. The extracted feature arrays were transformed into non-overlapping patches with positional encodings for Vision Transformer processing. The Vision Transformer employs multi-head self-attention mechanisms to capture both local and global relationships between image patches, with each attention head learning different aspects of spatial dependencies. The model was optimized using Keras Tuner and trained using 5-fold cross-validation with early stopping to prevent overfitting. Results: MammoViT achieved 97.4% accuracy in classifying mammogram images across different BIRADS categories. The model’s effectiveness was validated through comprehensive evaluation metrics, including a classification report, confusion matrix, probability distribution, and comparison with existing studies. Conclusions: MammoViT effectively combines ResNet50 and Vision Transformer architectures while addressing the challenge of imbalanced medical imaging datasets. The high accuracy and robust performance demonstrate its potential as a reliable tool for supporting clinical decision-making in breast cancer screening. Full article
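SMOTE's core idea, generating minority-class samples by interpolating between a point and one of its nearest neighbours, fits in a few lines. This is a toy sketch of the idea, not the library implementation the authors may have used:

```python
import random
import numpy as np

def smote_like(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples by interpolating between a
    random sample and one of its k nearest neighbours (the core SMOTE idea)."""
    rng = random.Random(seed)
    X = np.asarray(minority, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.randrange(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]    # skip the point itself
        j = rng.choice(list(neighbours))
        lam = rng.random()                     # interpolation factor in [0, 1)
        out.append(X[i] + lam * (X[j] - X[i]))
    return np.array(out)

# Three hypothetical minority-class feature vectors.
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
synthetic = smote_like(minority, n_new=4)
print(synthetic.shape)  # (4, 2)
```

Each synthetic point lies on a segment between two real minority samples, so no new feature values are invented outside the minority region.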

36 pages, 8015 KiB  
Article
A Robust Tuberculosis Diagnosis Using Chest X-Rays Based on a Hybrid Vision Transformer and Principal Component Analysis
by Sameh Abd El-Ghany, Mohammed Elmogy, Mahmood A. Mahmood and A. A. Abd El-Aziz
Diagnostics 2024, 14(23), 2736; https://doi.org/10.3390/diagnostics14232736 - 5 Dec 2024
Cited by 1 | Viewed by 2570
Abstract
Background: Tuberculosis (TB) is a bacterial disease that mainly affects the lungs, but it can also impact other parts of the body, such as the brain, bones, and kidneys. The disease is caused by a bacterium called Mycobacterium tuberculosis and spreads through the air when an infected person coughs or sneezes. TB can be inactive or active; in its active state, noticeable symptoms appear, and it can be transmitted to others. There are ongoing challenges in fighting TB, including resistance to medications, co-infections, and limited resources in areas heavily affected by the disease. These issues make it challenging to eradicate TB. Objective: Timely and precise diagnosis is essential for effective control, especially since TB often goes undetected and untreated, particularly in remote and under-resourced locations. Chest X-ray (CXR) images are commonly used to diagnose TB. However, difficulties can arise due to unusual findings on X-rays and a shortage of radiologists in high-infection areas. Method: To address these challenges, a computer-aided diagnosis (CAD) system that uses the vision transformer (ViT) technique has been developed to accurately identify TB in CXR images. This innovative hybrid CAD approach combines ViT with Principal Component Analysis (PCA) and machine learning (ML) techniques for TB classification, introducing a new method in this field. In the hybrid CAD system, ViT is used for deep feature extraction as a base model, PCA is used to reduce feature dimensions, and various ML methods are used to classify TB. This system allows for quickly identifying TB, enabling timely medical action and improving patient outcomes. Additionally, it streamlines the diagnostic process, reducing time and costs for patients and lessening the workload on healthcare professionals. The TB chest X-ray dataset was utilized to train and evaluate the proposed CAD system, which underwent pre-processing techniques like resizing, scaling, and noise removal to improve diagnostic accuracy. Results: The performance of our CAD model was assessed against existing models, yielding excellent results. The model achieved remarkable metrics: an average precision of 99.90%, recall of 99.52%, F1-score of 99.71%, accuracy of 99.84%, false negative rate (FNR) of 0.48%, specificity of 99.52%, and negative predictive value (NPV) of 99.90%. Conclusions: This evaluation highlights the superior performance of our model compared to the latest available classifiers. Full article
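The PCA step of such a hybrid pipeline, projecting deep features onto their top principal components, can be sketched with an SVD; the feature matrix below is random stand-in data, not ViT output:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto the top n_components principal components,
    computed via SVD of the centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # S is sorted descending
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
features = rng.normal(size=(50, 8))   # e.g. 50 images x 8 deep features
reduced = pca_reduce(features, 3)
print(reduced.shape)  # (50, 3)
```

The reduced matrix would then feed the downstream ML classifiers in place of the raw high-dimensional features.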

18 pages, 6161 KiB  
Article
A Novel Hybrid Model for Automatic Non-Small Cell Lung Cancer Classification Using Histopathological Images
by Oguzhan Katar, Ozal Yildirim, Ru-San Tan and U Rajendra Acharya
Diagnostics 2024, 14(22), 2497; https://doi.org/10.3390/diagnostics14222497 - 8 Nov 2024
Cited by 1 | Viewed by 1989
Abstract
Background/Objectives: Despite recent advances in research, cancer remains a significant public health concern and a leading cause of death. Among all cancer types, lung cancer is the most common cause of cancer-related deaths, with most cases linked to non-small cell lung cancer (NSCLC). Accurate classification of NSCLC subtypes is essential for developing treatment strategies. Medical professionals regard tissue biopsy as the gold standard for the identification of lung cancer subtypes. However, since biopsy images have very high resolutions, manual examination is time-consuming and depends on the pathologist’s expertise. Methods: In this study, we propose a hybrid model to assist pathologists in the classification of NSCLC subtypes from histopathological images. This model processes deep, textural and contextual features obtained by using EfficientNet-B0, local binary pattern (LBP) and vision transformer (ViT) encoder as feature extractors, respectively. In the proposed method, each feature matrix is flattened separately and then combined to form a comprehensive feature vector. The feature vector is given as input to machine learning classifiers to identify the NSCLC subtype. Results: We set up 13 different training scenarios to test 4 different classifiers: support vector machine (SVM), logistic regression (LR), light gradient boosting machine (LightGBM) and extreme gradient boosting (XGBoost). Among these scenarios, we obtained the highest classification accuracy (99.87%) with the combination of EfficientNet-B0 + LBP + ViT Encoder + SVM. The proposed hybrid model significantly enhanced the classification accuracy of NSCLC subtypes. Conclusions: The integration of deep, textural, and contextual features assisted the model in capturing subtle information from the images, thereby reducing the risk of misdiagnosis and facilitating more effective treatment planning. Full article
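Of the three feature extractors, the local binary pattern is simple enough to write out by hand. Below is the basic 8-neighbour code for a single 3x3 patch, a sketch of the classic formulation rather than necessarily the exact variant used in the paper:

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour local binary pattern code for the centre pixel of a 3x3
    patch: each neighbour >= centre contributes one bit."""
    c = patch[1, 1]
    # Neighbours taken clockwise from the top-left corner.
    nbrs = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(nbrs))

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [8, 2, 3]])
print(lbp_code(patch))  # -> 74
```

A texture descriptor is then the histogram of these codes over all pixels, which is what gets flattened and concatenated with the deep and contextual features.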

20 pages, 4128 KiB  
Article
Equilibrium Optimization-Based Ensemble CNN Framework for Breast Cancer Multiclass Classification Using Histopathological Image
by Yasemin Çetin-Kaya
Diagnostics 2024, 14(19), 2253; https://doi.org/10.3390/diagnostics14192253 - 9 Oct 2024
Cited by 3 | Viewed by 1547
Abstract
Background: Breast cancer is one of the most lethal cancers among women. Early detection and proper treatment reduce mortality rates. Histopathological images provide detailed information for diagnosing and staging breast cancer disease. Methods: The BreakHis dataset, which includes histopathological images, is used in this study. Medical images are prone to problems such as different textural backgrounds and overlapping cell structures, unbalanced class distribution, and insufficiently labeled data. In addition to these, the limitations of deep learning models in overfitting and insufficient feature extraction make it extremely difficult to obtain a high-performance model in this dataset. In this study, 20 state-of-the-art models are trained to diagnose eight types of breast cancer using the fine-tuning method. In addition, a comprehensive experimental study was conducted to determine the most successful new model, with 20 different custom models reported. As a result, we propose a novel model called MultiHisNet. Results: The most effective new model, which included a pointwise convolution layer, residual link, channel, and spatial attention module, achieved 94.69% accuracy in multi-class breast cancer classification. An ensemble model was created with the best-performing transfer learning and custom models obtained in the study, and model weights were determined with an Equilibrium Optimizer. The proposed ensemble model achieved 96.71% accuracy in eight-class breast cancer detection. Conclusions: The results show that the proposed model will support pathologists in successfully diagnosing breast cancer. Full article
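Once a metaheuristic such as the Equilibrium Optimizer has produced per-model weights, combining the ensemble is a weighted soft vote; a sketch with toy probabilities and illustrative weights:

```python
import numpy as np

def weighted_vote(probs, weights):
    """Combine per-model class-probability matrices (model, sample, class)
    with one weight per model, then take the argmax class per sample."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalise weights
    combined = np.einsum("m,mnc->nc", w, np.asarray(probs))
    return combined.argmax(axis=1)

# Two toy models, three samples, two classes; weights are illustrative.
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p2 = np.array([[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]])
print(weighted_vote([p1, p2], weights=[0.8, 0.2]))  # -> [0 1 1]
```

The optimizer's job is only to pick the weight vector that maximizes validation accuracy of this vote.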

Review


19 pages, 3177 KiB  
Review
Review of In Situ Hybridization (ISH) Stain Images Using Computational Techniques
by Zaka Ur Rehman, Mohammad Faizal Ahmad Fauzi, Wan Siti Halimatul Munirah Wan Ahmad, Fazly Salleh Abas, Phaik Leng Cheah, Seow Fan Chiew and Lai-Meng Looi
Diagnostics 2024, 14(18), 2089; https://doi.org/10.3390/diagnostics14182089 - 21 Sep 2024
Viewed by 2488
Abstract
Recent advancements in medical imaging have greatly enhanced the application of computational techniques in digital pathology, particularly for the classification of breast cancer using in situ hybridization (ISH) imaging. HER2 amplification, a key prognostic marker in 20–25% of breast cancers, can be assessed through alterations in gene copy number or protein expression. However, challenges persist due to the heterogeneity of nuclear regions and complexities in cancer biomarker detection. This review examines semi-automated and fully automated computational methods for analyzing ISH images with a focus on HER2 gene amplification. Literature from 1997 to 2023 is analyzed, emphasizing silver-enhanced in situ hybridization (SISH) and its integration with image processing and machine learning techniques. Both conventional machine learning approaches and recent advances in deep learning are compared. The review reveals that automated ISH analysis in combination with bright-field microscopy provides a cost-effective and scalable solution for routine pathology. The integration of deep learning techniques shows promise in improving accuracy over conventional methods, although there are limitations related to data variability and computational demands. Automated ISH analysis can reduce manual labor and increase diagnostic accuracy. Future research should focus on refining these computational methods, particularly in handling the complex nature of HER2 status evaluation, and integrate best practices to further enhance clinical adoption of these techniques. Full article
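A heavily simplified version of the dual-probe scoring such methods automate: average the HER2 and CEP17 signal counts over nuclei and call amplification when the ratio reaches 2.0. Real ASCO/CAP criteria add copy-number thresholds and equivocal categories, so this is an illustration of the arithmetic only:

```python
def her2_status(her2_counts, cep17_counts):
    """Simplified HER2 dual-probe call from per-nucleus signal counts:
    amplified if mean HER2 / mean CEP17 >= 2.0."""
    her2 = sum(her2_counts) / len(her2_counts)
    cep17 = sum(cep17_counts) / len(cep17_counts)
    ratio = her2 / cep17
    return ratio, ("amplified" if ratio >= 2.0 else "not amplified")

# Hypothetical signal counts for four nuclei.
ratio, call = her2_status([6, 8, 5, 7], [2, 2, 1, 3])
print(round(ratio, 2), call)  # 3.25 amplified
```

Automated nuclei detection feeds exactly these per-nucleus counts, which is why detection accuracy dominates the reliability of the final histo-score.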

16 pages, 254 KiB  
Review
Artificial Intelligence in the Diagnosis of Colorectal Cancer: A Literature Review
by Petar Uchikov, Usman Khalid, Krasimir Kraev, Bozhidar Hristov, Maria Kraeva, Tihomir Tenchev, Dzhevdet Chakarov, Milena Sandeva, Snezhanka Dragusheva, Daniela Taneva and Atanas Batashki
Diagnostics 2024, 14(5), 528; https://doi.org/10.3390/diagnostics14050528 - 1 Mar 2024
Cited by 5 | Viewed by 4944
Abstract
Background: The aim of this review is to explore the role of artificial intelligence in the diagnosis of colorectal cancer, how it impacts CRC morbidity and mortality, and why its role in clinical medicine is limited. Methods: A targeted, non-systematic review of the published literature relating to colorectal cancer diagnosis was performed using the PubMed database to provide a clearer understanding of recent advances in artificial intelligence and their impact on colorectal-related morbidity and mortality. Articles were included if deemed relevant and if they contained information associated with the keywords. Results: The advancements in artificial intelligence have been significant in facilitating an earlier diagnosis of CRC. In this review, we focused on evaluating genomic biomarkers, the integration of instruments with artificial intelligence, MR and hyperspectral imaging, and the architecture of neural networks. We found that these neural networks seem practical and yield positive results in initial testing. Furthermore, we explored the use of deep-learning-based majority voting methods, such as bag of words and PAHLI, in improving diagnostic accuracy in colorectal cancer detection. Alongside this, the autonomous and expansive learning ability of artificial intelligence, coupled with its ability to extract increasingly complex features from images or videos without human reliance, highlights its impact on the diagnostic sector. Despite this, as most of the research involves small patient samples, more diverse patient data are needed to enhance cohort stratification for a more sensitive and specific neural model. We also examined the successful application of artificial intelligence in predicting microsatellite instability, showcasing its potential in stratifying patients for targeted therapies. Conclusions: Since its introduction in colorectal cancer, artificial intelligence has revealed a multitude of functionalities and augmentations in the diagnostic sector of CRC. Given its early stage of implementation, widespread clinical application remains some way off, but with steady research dedicated to improving neural architectures and expanding their range of application, these advanced neural systems could directly impact the early diagnosis of CRC. The true promise of artificial intelligence, extending beyond the medical sector, lies in its potential to significantly influence the future landscape of CRC morbidity and mortality. Full article