Search Results (80)

Search Parameters:
Keywords = whole slide image processing

16 pages, 12567 KB  
Article
Efficient Tissue Detection in Whole-Slide Images Using Classical and Hybrid Methods: Benchmark on TCGA Cancer Cohorts
by Bogdan Ceachi, Filip Muresan, Mihai Trascau and Adina Magda Florea
Cancers 2025, 17(17), 2918; https://doi.org/10.3390/cancers17172918 - 5 Sep 2025
Viewed by 829
Abstract
Background: Whole-slide images (WSIs) are crucial in pathology for digitizing tissue slides, enabling pathologists and AI models to analyze cancer patterns at gigapixel scale. However, their large size, together with artifacts and non-tissue regions, slows AI processing, consumes resources, and introduces errors such as false positives. Tissue detection serves as the essential first step in WSI pipelines to focus on relevant areas, but deep learning detection methods require extensive manual annotations. Methods: This study benchmarks four thumbnail-level tissue detection methods—Otsu’s thresholding, K-Means clustering, our novel annotation-free Double-Pass hybrid, and GrandQC’s UNet++—on 3322 TCGA WSIs from nine cancer cohorts, evaluating accuracy, speed, and efficiency. Results: Double-Pass achieved an mIoU of 0.826—very close to the deep learning GrandQC model’s 0.871—while processing slides on a CPU in just 0.203 s per slide, markedly faster than GrandQC’s 2.431 s per slide on the same hardware. As an annotation-free, CPU-optimized method, it therefore enables efficient, scalable thumbnail-level tissue detection on standard workstations. Conclusions: The scalable, annotation-free Double-Pass pipeline reduces computational bottlenecks and facilitates high-throughput WSI preprocessing, enabling faster and more cost-effective integration of AI into clinical pathology and research workflows. By comparing Double-Pass against established methods, this benchmark establishes it as a fast, robust, and annotation-free alternative to supervised methods. Full article
(This article belongs to the Collection Artificial Intelligence and Machine Learning in Cancer Research)
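
As a point of reference for the classical baselines benchmarked above, the following minimal sketch shows thumbnail-level tissue detection with Otsu thresholding on the saturation channel. It is an illustrative, annotation-free baseline (not the paper's Double-Pass code); the morphology parameters are assumptions chosen only for demonstration.

```python
# Classical thumbnail-level tissue masking: Otsu threshold on saturation, then cleanup.
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, disk, remove_small_objects

def tissue_mask_from_thumbnail(thumbnail_rgb: np.ndarray, min_object_px: int = 500) -> np.ndarray:
    """Return a boolean tissue mask for an RGB thumbnail of shape (H, W, 3)."""
    hsv = rgb2hsv(thumbnail_rgb)             # tissue is more saturated than glass background
    saturation = hsv[..., 1]
    threshold = threshold_otsu(saturation)   # global Otsu threshold on the saturation channel
    mask = saturation > threshold
    mask = binary_closing(mask, disk(3))     # fill small gaps inside tissue regions
    mask = remove_small_objects(mask, min_size=min_object_px)  # drop specks and debris
    return mask
```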

24 pages, 94333 KB  
Article
Medical Segmentation of Kidney Whole Slide Images Using Slicing Aided Hyper Inference and Enhanced Syncretic Mask Merging Optimized by Particle Swarm Metaheuristics
by Marko Mihajlovic and Marina Marjanovic
BioMedInformatics 2025, 5(3), 44; https://doi.org/10.3390/biomedinformatics5030044 - 11 Aug 2025
Viewed by 690
Abstract
Accurate segmentation of kidney microstructures in whole slide images (WSIs) is essential for the diagnosis and monitoring of renal diseases. In this study, an end-to-end instance segmentation pipeline was developed for the detection of glomeruli and blood vessels in hematoxylin and eosin (H&E) stained kidney tissue. A tiling-based strategy was employed using Slicing Aided Hyper Inference (SAHI) to manage the resolution and scale of WSIs and the performance of two segmentation models, YOLOv11 and YOLOv12, was comparatively evaluated. The influence of tile overlap ratios on segmentation quality and inference efficiency was assessed, with configurations identified that balance object continuity and computational cost. To address object fragmentation at tile boundaries, an Enhanced Syncretic Mask Merging algorithm was introduced, incorporating morphological and spatial constraints. The algorithm’s hyperparameters were optimized using Particle Swarm Optimization (PSO), with vessel and glomerulus-specific performance targets. The optimization process revealed key parameters affecting segmentation quality, particularly for vessel structures with fine, elongated morphology. When compared with a baseline without postprocessing, improvements in segmentation precision were observed, notably a 48% average increase for glomeruli and up to 17% for blood vessels. The proposed framework demonstrates a balance between accuracy and efficiency, supporting scalable histopathology analysis and contributing to the Vasculature Common Coordinate Framework (VCCF) and Human Reference Atlas (HRA). Full article
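
To make the slicing idea concrete, here is a minimal sketch of overlapping tile generation over a large image, the kind of step that slicing-aided inference builds on before per-tile detection and cross-tile mask merging. The tile size and overlap ratio are illustrative assumptions, not the paper's settings, and this is not the SAHI library itself.

```python
# Slice a large image into overlapping tiles; detections per tile are later shifted back
# to global coordinates, where a mask-merging step resolves objects cut at tile borders.
from typing import Iterator, Tuple
import numpy as np

def iter_tiles(image: np.ndarray, tile: int = 1024, overlap: float = 0.2) -> Iterator[Tuple[int, int, np.ndarray]]:
    """Yield (x0, y0, tile_array) over an (H, W, C) image with the given overlap ratio."""
    step = max(1, int(tile * (1.0 - overlap)))
    h, w = image.shape[:2]
    # Note: for brevity this sketch omits the final partial strips at the right/bottom edges.
    for y0 in range(0, max(h - tile, 0) + 1, step):
        for x0 in range(0, max(w - tile, 0) + 1, step):
            yield x0, y0, image[y0:y0 + tile, x0:x0 + tile]
```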

20 pages, 4576 KB  
Article
Enhanced HoVerNet Optimization for Precise Nuclei Segmentation in Diffuse Large B-Cell Lymphoma
by Gei Ki Tang, Chee Chin Lim, Faezahtul Arbaeyah Hussain, Qi Wei Oung, Aidy Irman Yajid, Sumayyah Mohammad Azmi and Yen Fook Chong
Diagnostics 2025, 15(15), 1958; https://doi.org/10.3390/diagnostics15151958 - 4 Aug 2025
Viewed by 762
Abstract
Background/Objectives: Diffuse Large B-Cell Lymphoma (DLBCL) is the most common subtype of non-Hodgkin lymphoma and demands precise segmentation and classification of nuclei for effective diagnosis and disease severity assessment. This study aims to evaluate the performance of HoVerNet, a deep learning model, for nuclei segmentation and classification in CMYC-stained whole slide images and to assess its integration into a user-friendly diagnostic tool. Methods: A dataset of 122 CMYC-stained whole slide images (WSIs) was used. Pre-processing steps, including stain normalization and patch extraction, were applied to improve input consistency. HoVerNet, a multi-branch neural network, was used for both nuclei segmentation and classification, particularly focusing on its ability to manage overlapping nuclei and complex morphological variations. Model performance was validated using metrics such as accuracy, precision, recall, and F1 score. Additionally, a graphical user interface (GUI) was developed to incorporate automated segmentation, cell counting, and severity assessment functionalities. Results: HoVerNet achieved a validation accuracy of 82.5%, with a precision of 85.3%, recall of 82.6%, and an F1 score of 83.9%. The model showed strong performance in differentiating overlapping and morphologically complex nuclei. The developed GUI enabled real-time visualization and diagnostic support, enhancing the efficiency and usability of DLBCL histopathological analysis. Conclusions: HoVerNet, combined with an integrated GUI, presents a promising approach for streamlining DLBCL diagnostics through accurate segmentation and real-time visualization. Future work will focus on incorporating Vision Transformers and additional staining protocols to improve generalizability and clinical utility. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Radiomics in Medical Diagnosis)
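
The validation metrics reported above (accuracy, precision, recall, F1) can be computed from per-nucleus ground-truth and predicted class labels as in the short sketch below; the label arrays are placeholders for illustration, not data from the study.

```python
# Compute accuracy / precision / recall / F1 from per-nucleus classification labels.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder labels purely for illustration (0 = benign nucleus, 1 = malignant nucleus).
y_true = [0, 1, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 0, 1]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```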

15 pages, 16898 KB  
Article
Cross-Scale Hypergraph Neural Networks with Inter–Intra Constraints for Mitosis Detection
by Jincheng Li, Danyang Dong, Yihui Zhan, Guanren Zhu, Hengshuo Zhang, Xing Xie and Lingling Yang
Sensors 2025, 25(14), 4359; https://doi.org/10.3390/s25144359 - 12 Jul 2025
Viewed by 705
Abstract
Mitotic figures in tumor tissues are an important criterion for diagnosing malignant lesions, and physicians often search for the presence of mitosis in whole slide imaging (WSI). However, prolonged visual inspection by doctors may increase the likelihood of human error. With the advancement of deep learning, AI-based automatic cytopathological diagnosis has been increasingly applied in clinical settings. Nevertheless, existing diagnostic models often suffer from high computational costs and suboptimal detection accuracy. More importantly, when assessing cellular abnormalities, doctors frequently compare target cells with their surrounding cells—an aspect that current models fail to capture due to their lack of intercellular information modeling, leading to the loss of critical medical insights. To address these limitations, we conducted an in-depth analysis of existing models and propose an Inter–Intra Hypergraph Neural Network (II-HGNN). Our model introduces a block-based feature extraction mechanism to efficiently capture deep representations. Additionally, we leverage hypergraph convolutional networks to process both intracellular and intercellular information, leading to more precise diagnostic outcomes. We evaluate our model on publicly available datasets under varying imaging conditions, and experimental results demonstrate that our approach consistently outperforms baseline models in terms of accuracy. Full article
(This article belongs to the Special Issue Recent Advances in Biomedical Imaging Sensors and Processing)
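
One simple way to encode the intercellular context emphasized above is to group each detected cell with its k nearest neighbours; each group can then serve as a candidate hyperedge for a hypergraph layer. This is an illustrative sketch only, not the II-HGNN implementation, and the choice of k is an assumption.

```python
# Build k-nearest-neighbour cell groups (candidate hyperedges) from cell centroids.
import numpy as np
from scipy.spatial import cKDTree

def knn_hyperedges(centroids: np.ndarray, k: int = 5) -> np.ndarray:
    """centroids: (N, 2) cell positions -> (N, k+1) node indices per neighbourhood group."""
    tree = cKDTree(centroids)
    # query returns each point itself as the first neighbour, so ask for k + 1 points
    _, idx = tree.query(centroids, k=k + 1)
    return idx

# Each row groups a target cell with its spatial neighbours, mirroring how a pathologist
# compares a suspect cell against the cells surrounding it.
```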

19 pages, 4046 KB  
Article
Quercetin-Fortified Animal Forage from Onion Waste: A Zero-Waste Approach to Bioactive Feed Development
by Janusz Wojtczak, Krystyna Szymandera-Buszka, Joanna Kobus-Cisowska, Kinga Stuper-Szablewska, Jarosław Jakubowicz, Grzegorz Fiutak, Joanna Zeyland and Maciej Jarzębski
Appl. Sci. 2025, 15(14), 7694; https://doi.org/10.3390/app15147694 - 9 Jul 2025
Viewed by 1032
Abstract
There is a high demand for the development of new carriers for pharmaceutical forms for human, veterinary, and animal-feeding use. One of the solutions might be bioactive compound-loaded pellets for animal forage. The aim of the work was to assess the physical and sensory properties of forage with the addition of onion peel and off-spec onions as a source of quercetin. The feed was prepared using a thermal–mechanical expanding process. Quercetin content was evaluated in raw onion and in final-product feed mixture samples (before and after expanding, and pelleting). The obtained feed was subjected to sensory analysis, testing for expanded pellet uniformity, water absorption index (WAI), the angle of slide, and antioxidant activity. The results confirmed a high recovery of quercetin after the expanding process (approximately 80%) and a beneficial, significantly reduced intensity of onion odor compared to the non-expanded onion. Furthermore, digital and optical microscopy were applied for structure analysis. Microscopic imaging confirmed that onion structures were visible along the whole length of the feed material and in the analyzed cross-sections. The results can serve as an introduction to further research on developing products that use the expanding and pelleting process to exploit onion peel and off-spec onions, as well as other waste raw materials. Full article

18 pages, 7107 KB  
Article
Scalable Nuclei Detection in HER2-SISH Whole Slide Images via Fine-Tuned Stardist with Expert-Annotated Regions of Interest
by Zaka Ur Rehman, Mohammad Faizal Ahmad Fauzi, Wan Siti Halimatul Munirah Wan Ahmad, Fazly Salleh Abas, Phaik-Leng Cheah, Seow-Fan Chiew and Lai-Meng Looi
Diagnostics 2025, 15(13), 1584; https://doi.org/10.3390/diagnostics15131584 - 22 Jun 2025
Viewed by 873
Abstract
Background: Breast cancer remains a critical health concern worldwide, with histopathological analysis of tissue biopsies serving as the clinical gold standard for diagnosis. Manual evaluation of histopathology images is time-intensive and requires specialized expertise, often resulting in variability in diagnostic outcomes. In silver in situ hybridization (SISH) images, accurate nuclei detection is essential for precise histo-scoring of HER2 gene expression, directly impacting treatment decisions. Methods: This study presents a scalable and automated deep learning framework for nuclei detection in HER2-SISH whole slide images (WSIs), utilizing a novel dataset of 100 expert-marked regions extracted from 20 WSIs collected at the University of Malaya Medical Center (UMMC). The proposed two-stage approach combines a pretrained Stardist model with image processing-based annotations, followed by fine tuning on our domain-specific dataset to improve generalization. Results: The fine-tuned model achieved substantial improvements over both the pretrained Stardist model and a conventional watershed segmentation baseline. Quantitatively, the proposed method attained an average F1-score of 98.1% for visual assessments and 97.4% for expert-marked nuclei, outperforming baseline methods across all metrics. Additionally, training and validation performance curves demonstrate stable model convergence over 100 epochs. Conclusions: These results highlight the robustness of our approach in handling the complex morphological characteristics of SISH-stained nuclei. Our framework supports pathologists by offering reliable, automated nuclei detection in HER2 scoring workflows, contributing to diagnostic consistency and efficiency in clinical pathology. Full article
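
As context for the two-stage approach described above, the sketch below runs a pretrained StarDist model to obtain instance nuclei masks, the kind of starting point one fine-tunes on domain-specific data. Using the generic '2D_versatile_he' weights here is an assumption for illustration; the study fine-tunes on its own expert-annotated SISH regions.

```python
# Pretrained StarDist inference: one integer label per detected nucleus in a patch.
import numpy as np
from csbdeep.utils import normalize
from stardist.models import StarDist2D

model = StarDist2D.from_pretrained("2D_versatile_he")    # generic H&E-trained weights

def detect_nuclei(rgb_patch: np.ndarray):
    """Return (label_image, details) for an RGB patch."""
    img = normalize(rgb_patch, 1, 99.8, axis=(0, 1, 2))   # percentile normalization
    labels, details = model.predict_instances(img)
    return labels, details
```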

16 pages, 5313 KB  
Article
AI-Powered Spectral Imaging for Virtual Pathology Staining
by Adam Soker, Maya Almagor, Sabine Mai and Yuval Garini
Bioengineering 2025, 12(6), 655; https://doi.org/10.3390/bioengineering12060655 - 15 Jun 2025
Cited by 1 | Viewed by 2041
Abstract
Pathological analysis of tissue biopsies remains the gold standard for diagnosing cancer and other diseases. However, this is a time-intensive process that demands extensive training and expertise. Despite its importance, it is often subjective and not entirely error-free. Over the past decade, pathology has undergone two major transformations. First, the rise in whole slide imaging has enabled work in front of a computer screen and the integration of image processing tools to enhance diagnostics. Second, the rapid evolution of Artificial Intelligence has revolutionized numerous fields and has had a remarkable impact on humanity. The synergy of these two has paved the way for groundbreaking research aiming for advancements in digital pathology. Despite encouraging research outcomes, AI-based tools have yet to be actively incorporated into therapeutic protocols. This is primarily due to the need for high reliability in medical therapy, necessitating a new approach that ensures greater robustness. Another approach for improving pathological diagnosis involves advanced optical methods such as spectral imaging, which reveals information from the tissue that is beyond human vision. We have recently developed a unique rapid spectral imaging system capable of scanning pathological slides, delivering a wealth of critical diagnostic information. Here, we present a novel application of spectral imaging (SI) for virtual Hematoxylin and Eosin (H&E) staining using a custom-built, rapid Fourier-based SI system. Unstained human biopsy samples are scanned, and a Pix2Pix-based neural network generates realistic H&E-equivalent images. Additionally, we applied Principal Component Analysis (PCA) to the spectral information to examine the effect of downsampling the data on the virtual staining process. To assess model performance, we trained and tested models using full spectral data, RGB, and PCA-reduced spectral inputs. The results demonstrate that PCA-reduced data preserved essential image features while enhancing statistical image quality, as indicated by FID and KID scores, and reducing computational complexity. These findings highlight the potential of integrating SI and AI to enable efficient, accurate, and stain-free digital pathology. Full article
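
A minimal sketch of the PCA step described above: flatten a spectral cube to per-pixel spectra, keep the leading principal components, and reshape back so a virtual-staining network can train on the reduced input. The component count is an illustrative assumption, not the study's setting.

```python
# PCA reduction of a hyperspectral cube (H, W, B bands) to (H, W, n_components).
import numpy as np
from sklearn.decomposition import PCA

def reduce_spectral_cube(cube: np.ndarray, n_components: int = 8) -> np.ndarray:
    h, w, b = cube.shape
    spectra = cube.reshape(-1, b).astype(np.float32)   # one spectrum per pixel
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(spectra)               # project onto top components
    return reduced.reshape(h, w, n_components)
```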

16 pages, 2065 KB  
Article
An Information-Extreme Algorithm for Universal Nuclear Feature-Driven Automated Classification of Breast Cancer Cells
by Taras Savchenko, Ruslana Lakhtaryna, Anastasiia Denysenko, Anatoliy Dovbysh, Sarah E. Coupland and Roman Moskalenko
Diagnostics 2025, 15(11), 1389; https://doi.org/10.3390/diagnostics15111389 - 30 May 2025
Viewed by 766
Abstract
Background/Objectives: Breast cancer diagnosis heavily relies on histopathological assessment, which is prone to subjectivity and inefficiency, especially with whole-slide imaging (WSI). This study addressed these limitations by developing an automated breast cancer cell classification algorithm using an information-extreme machine learning approach and universal cytological features, aiming for objective and generalized histopathological diagnosis. Methods: Digitized histological images were processed to identify hyperchromatic cells. A set of 21 cytological features (10 geometric and 11 textural), chosen for their potential universality across cancers, were extracted from individual cells. These features were then used to classify cells as normal or malignant using an information-extreme algorithm. This algorithm optimizes an information criterion within a binary Hamming space to achieve robust recognition with minimal input features. The architectural innovation lies in the application of this information-extreme approach to cytological feature analysis for cancer cell classification. Results: The algorithm’s functional efficiency was evaluated on a dataset of 176 labeled cell images, yielding promising results: an accuracy of 89%, a precision of 85%, a recall of 84%, and an F1-score of 88%. These metrics demonstrate a balanced and effective model for automated breast cancer cell classification. Conclusions: The proposed information-extreme algorithm utilizing universal cytological features offers a potentially objective and computationally efficient alternative to traditional methods and may mitigate some limitations of deep learning in histopathological analysis. Future work will focus on validating the algorithm on larger datasets and exploring its applicability to other cancer types. Full article
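
The sketch below illustrates the two kinds of per-cell cytological descriptors the abstract distinguishes: geometric features from a labelled nucleus mask and textural features from a grey-level co-occurrence matrix. The specific feature names are illustrative, not the paper's exact 21-feature set.

```python
# Geometric (regionprops) and textural (GLCM) features for a single segmented cell.
import numpy as np
from skimage.measure import label, regionprops
from skimage.feature import graycomatrix, graycoprops

def cell_features(gray_patch: np.ndarray, nucleus_mask: np.ndarray) -> dict:
    """gray_patch: uint8 image of one cell; nucleus_mask: boolean mask of its nucleus."""
    props = regionprops(label(nucleus_mask))[0]          # assumes one nucleus in the mask
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {
        "area": props.area,
        "perimeter": props.perimeter,
        "eccentricity": props.eccentricity,
        "solidity": props.solidity,
        "contrast": graycoprops(glcm, "contrast")[0, 0],
        "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
        "energy": graycoprops(glcm, "energy")[0, 0],
    }
```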

22 pages, 2133 KB  
Article
Classification of Whole-Slide Pathology Images Based on State Space Models and Graph Neural Networks
by Feng Ding, Chengfei Cai, Jun Li, Mingxin Liu, Yiping Jiao, Zhengcan Wu and Jun Xu
Electronics 2025, 14(10), 2056; https://doi.org/10.3390/electronics14102056 - 19 May 2025
Viewed by 1303
Abstract
Whole-slide images (WSIs) pose significant analytical challenges due to their large data scale and complexity. Multiple instance learning (MIL) has emerged as an effective solution for WSI classification, but existing frameworks often lack flexibility in feature integration and underutilize sequential information. To address these limitations, this work proposes a novel MIL framework: Dynamic Graph and State Space Model-Based MIL (DG-SSM-MIL). DG-SSM-MIL combines graph neural networks and selective state space models, leveraging the former’s ability to extract local and spatial features and the latter’s advantage in comprehensively understanding long-sequence instances. This enhances the model’s performance in diverse instance classification, improves its capability to handle long-sequence data, and increases the precision and scalability of feature fusion. We propose the Dynamic Graph and State Space Model (DynGraph-SSM) module, which aggregates local and spatial information of image patches through directed graphs and learns global feature representations using the Mamba model. Additionally, the directed graph structure alleviates the unidirectional scanning limitation of Mamba and enhances its ability to process pathological images with dispersed lesion distributions. DG-SSM-MIL demonstrates superior performance in classification tasks compared to other models. We validate the effectiveness of the proposed method on features extracted from two pretrained models across four public medical image datasets: BRACS, TCGA-NSCLC, TCGA-RCC, and CAMELYON16. Experimental results demonstrate that DG-SSM-MIL consistently outperforms existing MIL methods across four public datasets. For example, when using ResNet-50 features, our model achieves the highest AUCs of 0.936, 0.785, 0.879, and 0.957 on TCGA-NSCLC, BRACS, CAMELYON16, and TCGA-RCC, respectively. Similarly, with UNI features, DG-SSM-MIL reaches AUCs of 0.968, 0.846, 0.993, and 0.990, surpassing all baselines. These results confirm the effectiveness and generalizability of our approach in diverse WSI classification tasks. Full article
(This article belongs to the Special Issue AI-Driven Medical Image/Video Processing)
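
To make the spatial-graph idea concrete, the sketch below builds a directed k-nearest-neighbour graph over WSI patch coordinates, the kind of structure a graph neural network can use to aggregate local context before a sequence model processes the instances. It is an illustrative construction, not the DG-SSM-MIL module, and k is an assumption.

```python
# Directed spatial graph over patch centroids: edges point from each patch to its k neighbours.
import numpy as np
from scipy.spatial import cKDTree

def directed_patch_graph(coords: np.ndarray, k: int = 8) -> np.ndarray:
    """coords: (N, 2) patch centroids -> (E, 2) array of directed edges (source, target)."""
    tree = cKDTree(coords)
    _, idx = tree.query(coords, k=k + 1)              # first neighbour is the node itself
    sources = np.repeat(np.arange(len(coords)), k)
    targets = idx[:, 1:].reshape(-1)
    return np.stack([sources, targets], axis=1)
```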

19 pages, 25009 KB  
Article
Automated Cervical Cancer Screening Framework: Leveraging Object Detection and Multi-Objective Optimization for Interpretable Diagnostic Rules
by Weijian Ye and Binghao Dai
Electronics 2025, 14(10), 2014; https://doi.org/10.3390/electronics14102014 - 15 May 2025
Viewed by 701
Abstract
Cervical cancer is one of the most common malignant tumors, with high incidence and mortality rates. Recent studies mainly adopt Artificial Intelligence (AI) models to detect cervical cells. Yet, due to the imperceptible symptoms of cervical cells, there are three problems that may hinder the performance of the existing approaches: (a) poor quality of the whole-slide image (WSI) performed on cervical cells may lead to undesirable performance; (b) several types of abnormal cervical cells are involved in the progression of cervical cells from normal to cancer, which requires extensive clinical data for training; and (c) the diagnosis of the WSI is medical-rule-driven and requires the AI model to provide interpretability. To address these issues, we propose an integrated automatic cervical cancer screening (IACCS) framework. First, the IACCS framework incorporates a quality assessment module utilizing binarization-based cell counting and a Support Vector Machine (SVM) approach to identify fuzzy regions, ensuring WSI suitability for analysis. Second, to overcome the data limitations, the framework employs data enhancement techniques alongside incremental learning (IL) and active learning (AL) mechanisms, allowing the model to adapt progressively and learn efficiently from new data and expert feedback. Third, recognizing the need for interpretability, the diagnostic decision process is modeled as a multi-objective optimization problem. A multi-objective optimization algorithm is used to generate a set of interpretable diagnostic rules that offer explicit trade-offs between sensitivity and specificity. Extensive experiments demonstrate the effectiveness of the proposed IACCS framework. Applying our comprehensive framework yielded significant improvements in detection accuracy, achieving, for example, a 6.34% increase in mAP50:95 compared to the baseline YOLOv8 model. Furthermore, the generated Pareto-optimal diagnostic rules provide superior and more flexible diagnostic options compared to traditional manually defined rules. This research presents a validated pathway towards more robust, adaptable, and interpretable AI-assisted cervical cancer screening. Full article
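
The quality-assessment stage described above can be approximated by simple per-region features such as a binarization-based blob count and a blur measure, which an SVM can then use to flag fuzzy regions. The sketch below is a hedged illustration under those assumptions; thresholds and feature choices are not the paper's.

```python
# Crude region quality features for WSI screening: blob count + Laplacian blur measure.
import cv2
import numpy as np

def region_quality_features(bgr_region: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n_labels, _ = cv2.connectedComponents(binary)        # rough count of cell-like blobs
    blur = cv2.Laplacian(gray, cv2.CV_64F).var()          # low variance suggests a fuzzy region
    return np.array([n_labels - 1, blur], dtype=np.float32)

# These per-region features can be fed to sklearn.svm.SVC to separate usable regions from
# fuzzy ones before any cervical-cell detection runs.
```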

24 pages, 718 KB  
Systematic Review
Advanced Deep Learning Approaches in Detection Technologies for Comprehensive Breast Cancer Assessment Based on WSIs: A Systematic Literature Review
by Qiaoyi Xu, Afzan Adam, Azizi Abdullah and Nurkhairul Bariyah
Diagnostics 2025, 15(9), 1150; https://doi.org/10.3390/diagnostics15091150 - 30 Apr 2025
Cited by 1 | Viewed by 1511
Abstract
Background: Breast cancer is one of the leading causes of death among women worldwide. Accurate early detection of lymphocytes and molecular biomarkers is essential for improving diagnostic precision and patient prognosis. Whole slide images (WSIs) are central to digital pathology workflows in breast cancer assessment. However, applying deep learning techniques to WSIs presents persistent challenges, including variability in image quality, limited availability of high-quality annotations, poor model interpretability, high computational demands, and suboptimal processing efficiency. Methods: This systematic review, guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), examines deep learning-based detection methods for breast cancer published between 2020 and 2024. The analysis includes 39 peer-reviewed studies and 20 widely used WSI datasets. Results: To enhance clinical relevance and guide model development, this study introduces a five-dimensional evaluation framework covering accuracy and performance, robustness and generalization, interpretability, computational efficiency, and annotation quality. The framework facilitates a balanced and clinically aligned assessment of both established methods and recent innovations. Conclusions: This review offers a comprehensive analysis and proposes a practical roadmap for addressing core challenges in WSI-based breast cancer detection. It fills a critical gap in the literature and provides actionable guidance for researchers, clinicians, and developers seeking to optimize and translate WSI-based technologies into clinical workflows for comprehensive breast cancer assessment. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health and Medicine)

15 pages, 2995 KB  
Article
Assessment of Tumor Infiltrating Lymphocytes in Predicting Stereotactic Ablative Radiotherapy (SABR) Response in Unresectable Breast Cancer
by Mateusz Bielecki, Khadijeh Saednia, Fang-I Lu, Shely Kagan, Danny Vesprini, Katarzyna J. Jerzak, Roberto Salgado, Raffi Karshafian and William T. Tran
Radiation 2025, 5(2), 11; https://doi.org/10.3390/radiation5020011 - 2 Apr 2025
Viewed by 2289
Abstract
Background: Patients with advanced breast cancer (BC) may be treated with stereotactic ablative radiotherapy (SABR) for tumor control. Variable treatment responses are a clinical challenge and there is a need to predict tumor radiosensitivity a priori. There is evidence showing that tumor infiltrating lymphocytes (TILs) are markers for chemotherapy response; however, this association has not yet been validated in breast radiation therapy. This pilot study investigates the computational analysis of TILs to predict SABR response in patients with inoperable BC. Methods: Patients with inoperable breast cancer (n = 22) were included for analysis and classified into partial response (n = 12) and stable disease (n = 10) groups. Pre-treatment tumor biopsies (n = 104) were prepared, digitally imaged, and underwent computational analysis. Whole slide images (WSIs) were pre-processed, and then a pre-trained convolutional neural network model (CNN) was employed to identify the regions of interest. The TILs were annotated, and spatial graph features were extracted. The clinical and spatial features were collected and analyzed using machine learning (ML) classifiers, including K-nearest neighbor (KNN), support vector machines (SVMs), and Gaussian Naïve Bayes (GNB), to predict the SABR response. The models were evaluated using receiver operator characteristics (ROCs) and area under the curve (AUC) analysis. Results: The KNN, SVM, and GNB models were implemented using clinical and graph features. Among the generated prediction models, the graph features showed higher predictive performances compared to the models containing clinical features alone. The highest-performing model, using computationally derived graph features, showed an AUC of 0.92, while the highest clinical model showed an AUC of 0.62 within unseen test sets. Conclusions: Spatial TIL models demonstrate strong potential for predicting SABR response in inoperable breast cancer. TILs indicate a higher independent predictive performance than clinical-level features alone. Full article
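
The classifier comparison described above can be reproduced in outline with scikit-learn: fit KNN, SVM, and Gaussian Naive Bayes on tabular features and score each with ROC AUC on a held-out split. The synthetic arrays below merely stand in for the study's TIL graph features.

```python
# Compare KNN / SVM / GNB on tabular features using held-out ROC AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))        # placeholder feature matrix
y = rng.integers(0, 2, size=100)      # placeholder response labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(probability=True, random_state=0),
    "GNB": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC={auc:.2f}")
```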

35 pages, 3010 KB  
Review
Advancements in Digital Cytopathology Since COVID-19: Insights from a Narrative Review of Review Articles
by Daniele Giansanti
Healthcare 2025, 13(6), 657; https://doi.org/10.3390/healthcare13060657 - 17 Mar 2025
Cited by 3 | Viewed by 1595
Abstract
Background/Objectives: The integration of digitalization in cytopathology is an emerging field with transformative potential, aiming to enhance diagnostic precision and operational efficiency. This narrative review of reviews (NRR) seeks to identify prevailing themes, opportunities, challenges, and recommendations related to the process of digitalization in cytopathology. Methods: Utilizing a standardized checklist and quality control procedures, this review examines recent advancements and future implications in this domain. Twenty-one review studies were selected through a systematic process. Results: The results highlight key emerging trends, themes, opportunities, challenges, and recommendations in digital cytopathology. First, the study identifies pivotal themes that reflect the ongoing technological transformation, guiding future focus areas in the field. A major trend is the integration of artificial intelligence (AI), which is increasingly critical in improving diagnostic accuracy, streamlining workflows, and assisting decision making. Notably, emerging AI technologies like large language models (LLMs) and chatbots are expected to provide real-time support and automate tasks, though concerns around ethics and privacy must be addressed. The reviews also emphasize the need for standardized protocols, comprehensive training, and rigorous validation to ensure AI tools are reliable and effective across clinical settings. Lastly, digital cytopathology holds significant potential to improve healthcare accessibility, especially in remote areas, by enabling faster, more efficient diagnoses and fostering global collaboration through telepathology. Conclusions: Overall, this study highlights the transformative impact of digitalization in cytopathology, improving diagnostic accuracy, efficiency, and global accessibility through tools like whole-slide imaging and telepathology. While artificial intelligence plays a significant role, the broader focus is on integrating digital solutions to enhance workflows and collaboration. Addressing challenges such as standardization, training, and ethical considerations is crucial to fully realize the potential of these advancements. Full article
(This article belongs to the Section Digital Health Technologies)

20 pages, 3066 KB  
Article
GeNetFormer: Transformer-Based Framework for Gene Expression Prediction in Breast Cancer
by Oumeima Thaalbi and Moulay A. Akhloufi
AI 2025, 6(3), 43; https://doi.org/10.3390/ai6030043 - 21 Feb 2025
Cited by 2 | Viewed by 2480
Abstract
Background: Histopathological images are often used to diagnose breast cancer and have shown high accuracy in classifying cancer subtypes. Prediction of gene expression from whole-slide images and spatial transcriptomics data is important for cancer treatment in general and breast cancer in particular. This topic has been a challenge in numerous studies. Method: In this study, we present a deep learning framework called GeNetFormer. We evaluated eight advanced transformer models including EfficientFormer, FasterViT, BEiT v2, and Swin Transformer v2, and tested their performance in predicting gene expression using the STNet dataset. This dataset contains 68 H&E-stained histology images and transcriptomics data from different types of breast cancer. We followed a detailed process to prepare the data, including filtering genes and spots, normalizing stain colors, and creating smaller image patches for training. The models were trained to predict the expression of 250 genes using different image sizes and loss functions. GeNetFormer achieved the best performance using the MSELoss function and a resolution of 256 × 256 while integrating EfficientFormer. Results: It predicted nine out of the top ten genes with a higher Pearson Correlation Coefficient (PCC) compared to the retrained ST-Net method. For cancer biomarker genes such as DDX5 and XBP1, the PCC values were 0.7450 and 0.7203, respectively, outperforming ST-Net, which scored 0.6713 and 0.7320, respectively. In addition, our method gave better predictions for other genes such as FASN (0.7018 vs. 0.6968) and ERBB2 (0.6241 vs. 0.6211). Conclusions: Our results show that GeNetFormer provides improvements over other models such as ST-Net and show how transformer architectures are capable of analyzing spatial transcriptomics data to advance cancer research. Full article
(This article belongs to the Section Medical & Healthcare AI)
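
The Pearson Correlation Coefficient evaluation cited above amounts to a per-gene correlation between predicted and measured expression across spots, as in this short sketch; the array shapes are illustrative assumptions.

```python
# Per-gene Pearson correlation between predicted and measured expression matrices.
import numpy as np
from scipy.stats import pearsonr

def per_gene_pcc(predicted: np.ndarray, measured: np.ndarray) -> np.ndarray:
    """predicted, measured: (n_spots, n_genes) arrays -> (n_genes,) PCC values."""
    return np.array([pearsonr(predicted[:, g], measured[:, g])[0]
                     for g in range(predicted.shape[1])])
```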

17 pages, 16539 KB  
Article
A Novel Framework for Whole-Slide Pathological Image Classification Based on the Cascaded Attention Mechanism
by Dehua Liu and Bin Hu
Sensors 2025, 25(3), 726; https://doi.org/10.3390/s25030726 - 25 Jan 2025
Cited by 1 | Viewed by 1949
Abstract
This study introduces an innovative deep learning framework to address the limitations of traditional pathological image analysis and the pressing demand for medical resources in tumor diagnosis. With the global rise in cancer cases, manual examination by pathologists is increasingly inadequate, being both time-consuming and subject to the scarcity of professionals and individual subjectivity, thus impacting diagnostic accuracy and efficiency. Deep learning, particularly in computer vision, offers significant potential to mitigate these challenges. Automated models can rapidly and accurately process large datasets, revolutionizing tumor detection and classification. However, existing methods often rely on single attention mechanisms, failing to fully exploit the complexity of pathological images, especially in extracting critical features from whole-slide images. We developed a framework incorporating a cascaded attention mechanism, enhancing meaningful pattern recognition while suppressing irrelevant background information. Experiments on the Camelyon16 dataset demonstrate superior classification accuracy, model generalization, and result interpretability compared to state-of-the-art techniques. This advancement promises to enhance diagnostic efficiency, reduce healthcare costs, and improve patient outcomes. Full article
(This article belongs to the Section Biomedical Sensors)
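
For readers unfamiliar with attention over whole-slide patches, the sketch below shows a generic attention-based MIL pooling layer (in the spirit of Ilse et al.'s attention MIL) that weights patch embeddings into a slide-level prediction. It is only an illustration of the general mechanism, not the paper's cascaded-attention architecture, and the dimensions are assumptions.

```python
# Generic attention-based MIL pooling: weight patch embeddings, pool, classify the slide.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, dim: int = 512, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (n_patches, dim) embeddings extracted from one whole-slide image
        weights = torch.softmax(self.attn(patch_feats), dim=0)   # (n_patches, 1) attention
        slide_feat = (weights * patch_feats).sum(dim=0)          # weighted slide embedding
        return self.classifier(slide_feat)                       # slide-level logits

# Usage: logits = AttentionMILPooling()(torch.randn(1000, 512))
```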