Search Results (3)

Search Parameters:
Keywords = prostate zonal segmentation

26 pages, 4394 KiB  
Article
Neural Network Models for Prostate Zones Segmentation in Magnetic Resonance Imaging
by Saman Fouladi, Luca Di Palma, Fatemeh Darvizeh, Deborah Fazzini, Alessandro Maiocchi, Sergio Papa, Gabriele Gianini and Marco Alì
Information 2025, 16(3), 186; https://doi.org/10.3390/info16030186 - 28 Feb 2025
Viewed by 962
Abstract
Prostate cancer (PCa) is one of the most common tumors diagnosed in men worldwide, with approximately 1.7 million new cases expected by 2030. Most cancerous lesions in PCa are located in the peripheral zone (PZ); therefore, accurate identification of the location of the lesion is essential for effective diagnosis and treatment. Zonal segmentation in magnetic resonance imaging (MRI) scans is critical and plays a key role in pinpointing cancerous regions and treatment strategies. In this work, we report on the development of three advanced neural network-based models: one based on ensemble learning, one on Meta-Net, and one on YOLO-V8. They were tailored for the segmentation of the central gland (CG) and PZ using a small dataset of 90 MRI scans for training, 25 for validation, and 24 for testing. The ensemble learning method, combining U-Net-based models (Attention-Res-U-Net, Vanilla-Net, and V-Net), achieved an IoU of 79.3% and a DSC of 88.4% for the CG and an IoU of 54.5% and a DSC of 70.5% for the PZ on the test set. Meta-Net, used for the first time in segmentation, demonstrated an IoU of 78% and a DSC of 88% for the CG, while YOLO-V8 outperformed both models with an IoU of 80% and a DSC of 89% for the CG and an IoU of 58% and a DSC of 73% for the PZ.
(This article belongs to the Special Issue Detection and Modelling of Biosignals)
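The IoU and DSC figures quoted above are standard overlap metrics for segmentation masks. As an illustration only (this is not the authors' code, and the array names are hypothetical), a minimal NumPy sketch of how these two scores are typically computed for a binary zone mask:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Return (IoU, Dice) for two binary masks of identical shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    iou = intersection / (np.logical_or(pred, target).sum() + eps)
    dice = 2.0 * intersection / (pred.sum() + target.sum() + eps)
    return float(iou), float(dice)

# Toy check: a placeholder central-gland (CG) prediction vs. a placeholder ground truth.
rng = np.random.default_rng(0)
pred_cg = rng.random((256, 256)) > 0.5
gt_cg = rng.random((256, 256)) > 0.5
print(iou_and_dice(pred_cg, gt_cg))   # roughly (0.33, 0.50) for independent random masks
```

The same computation is typically run per zone (CG or PZ) and averaged over the test volumes to obtain figures like those reported above.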

16 pages, 5420 KiB  
Article
A Comparative Analysis of U-Net and Vision Transformer Architectures in Semi-Supervised Prostate Zonal Segmentation
by Guantian Huang, Bixuan Xia, Haoming Zhuang, Bohan Yan, Cheng Wei, Shouliang Qi, Wei Qian and Dianning He
Bioengineering 2024, 11(9), 865; https://doi.org/10.3390/bioengineering11090865 - 26 Aug 2024
Cited by 1 | Viewed by 2973
Abstract
The precise segmentation of the different regions of the prostate is crucial for the diagnosis and treatment of prostate-related diseases. However, the scarcity of labeled prostate data makes accurate segmentation of these regions challenging. We segment the different regions of the prostate using U-Net- and Vision Transformer (ViT)-based architectures. We apply five semi-supervised learning methods, namely entropy minimization, cross pseudo-supervision, mean teacher, uncertainty-aware mean teacher (UAMT), and interpolation consistency training (ICT), and compare the results with the state-of-the-art semi-supervised prostate segmentation network uncertainty-aware temporal self-learning (UATS). UAMT improves prostate segmentation accuracy and provides stable segmentation of the prostate regions. ICT yields even more stable results across the prostate regions, supporting the medical image segmentation task and demonstrating the robustness of U-Net for medical image segmentation. UATS remains better suited to the U-Net backbone and markedly improves the positive prediction rate. However, the combination of ViT with semi-supervision still requires further optimization. This comparative analysis applies various semi-supervised learning methods to prostate zonal segmentation; it guides future developments in prostate segmentation and offers insights into utilizing limited labeled data in medical imaging.
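Among the semi-supervised methods compared above, mean teacher (and its uncertainty-aware variant, UAMT) relies on a teacher network whose weights are an exponential moving average (EMA) of the student's, with a consistency loss on unlabeled scans. A minimal PyTorch sketch of that general idea, assuming toy shapes and illustrative names (not the paper's implementation):

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(student: torch.nn.Module, teacher: torch.nn.Module, alpha: float = 0.99):
    """Update teacher parameters as an exponential moving average of the student's."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

def consistency_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between student and teacher predictions on unlabeled data."""
    return F.mse_loss(torch.softmax(student_logits, dim=1),
                      torch.softmax(teacher_logits, dim=1).detach())

# Toy usage: a small conv layer stands in for a full U-Net or ViT segmentation network.
student = torch.nn.Conv2d(1, 3, kernel_size=3, padding=1)   # 3 classes: background, CG, PZ
teacher = copy.deepcopy(student)                             # teacher starts as a copy of the student
x = torch.randn(4, 1, 64, 64)                                # unlabeled mini-batch (toy shape)
loss = consistency_loss(student(x), teacher(x))
loss.backward()                                              # step the student with its optimizer...
ema_update(student, teacher)                                 # ...then refresh the teacher by EMA
```

In practice this consistency term is added to the supervised loss on the labeled subset, which is how the unlabeled scans contribute to training.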

17 pages, 4365 KiB  
Article
Joint Cancer Segmentation and PI-RADS Classification on Multiparametric MRI Using MiniSegCaps Network
by Wenting Jiang, Yingying Lin, Varut Vardhanabhuti, Yanzhen Ming and Peng Cao
Diagnostics 2023, 13(4), 615; https://doi.org/10.3390/diagnostics13040615 - 7 Feb 2023
Cited by 3 | Viewed by 2794
Abstract
MRI is the primary imaging approach for diagnosing prostate cancer. The Prostate Imaging Reporting and Data System (PI-RADS) for multiparametric MRI (mpMRI) provides fundamental interpretation guidelines but suffers from inter-reader variability. Deep learning networks show great promise in automatic lesion segmentation and classification, which helps to ease the burden on radiologists and reduce inter-reader variability. In this study, we proposed a novel multi-branch network, MiniSegCaps, for prostate cancer segmentation and PI-RADS classification on mpMRI. The MiniSeg branch output the segmentation together with the PI-RADS prediction, guided by the attention map from the CapsuleNet. The CapsuleNet branch exploited the spatial relationship of prostate cancer to anatomical structures, such as the zonal location of the lesion, which also reduced the training sample size requirement owing to its equivariance properties. In addition, a gated recurrent unit (GRU) was adopted to exploit spatial knowledge across slices, improving through-plane consistency. Based on clinical reports, we established a prostate mpMRI database of 462 patients paired with radiologically estimated annotations. MiniSegCaps was trained and evaluated with fivefold cross-validation. On 93 testing cases, our model achieved a 0.712 Dice coefficient for lesion segmentation, and 89.18% accuracy and 92.52% sensitivity for PI-RADS classification (PI-RADS ≥ 4) in a patient-level evaluation, significantly outperforming existing methods. In addition, a graphical user interface (GUI) integrated into the clinical workflow can automatically produce diagnosis reports based on the results from MiniSegCaps.
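The GRU-based through-plane step mentioned above can be pictured as a recurrent pass over per-slice features, so that each slice's prediction sees context from neighbouring slices. A hedged PyTorch sketch under assumed shapes and names (SliceGRU, the pooling, and the gating scheme are illustrative guesses, not the MiniSegCaps architecture):

```python
import torch
import torch.nn as nn

class SliceGRU(nn.Module):
    """Reweight per-slice 2D feature maps using a bidirectional GRU along the slice axis."""
    def __init__(self, channels: int = 64, hidden: int = 64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # (B*S, C, H, W) -> (B*S, C, 1, 1)
        self.gru = nn.GRU(channels, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, channels)  # map GRU context back to per-channel gates

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, slices, channels, H, W) feature maps from a 2D encoder
        b, s, c, h, w = feats.shape
        pooled = self.pool(feats.flatten(0, 1)).view(b, s, c)   # (B, S, C) slice descriptors
        context, _ = self.gru(pooled)                           # (B, S, 2*hidden) inter-slice context
        gate = torch.sigmoid(self.proj(context)).view(b, s, c, 1, 1)
        return feats * gate                                     # slice-aware reweighting

# Toy usage: 2 volumes of 20 slices each.
feats = torch.randn(2, 20, 64, 32, 32)
out = SliceGRU()(feats)
print(out.shape)   # torch.Size([2, 20, 64, 32, 32])
```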
