
Evaluation of Deep Neural Networks for Semantic Segmentation of Prostate in T2W MRI

1 Centre for Intelligent Signal and Imaging Research (CISIR), Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia
2 ImViA/ITFIM, University of Burgundy, 21078 Dijon, France
* Author to whom correspondence should be addressed.
Sensors 2020, 20(11), 3183; https://doi.org/10.3390/s20113183
Received: 9 February 2020 / Revised: 4 April 2020 / Accepted: 12 April 2020 / Published: 3 June 2020
(This article belongs to the Special Issue Biomedical Signal Processing for Disease Diagnosis)
In this paper, we present an evaluation of four encoder–decoder CNNs for the segmentation of the prostate gland in T2W magnetic resonance imaging (MRI) images. The four selected CNNs are FCN, SegNet, U-Net, and DeepLabV3+, which were originally proposed for the segmentation of road scenes, biomedical images, and natural images. Segmentation of the prostate in T2W MRI images is an important step in the automatic diagnosis of prostate cancer, enabling better lesion detection and staging. Consequently, many research efforts have been devoted to improving the segmentation of the prostate gland in MRI images. The main challenges of prostate gland segmentation are the blurry prostate boundary and the variability of the prostate anatomical structure. In this work, we investigated the performance of encoder–decoder CNNs for segmentation of the prostate gland in T2W MRI. Image pre-processing techniques, including image resizing, center-cropping, and intensity normalization, are applied to address inter-patient and inter-scanner variability as well as the dominance of background pixels over prostate pixels. In addition, to enrich the networks with more data, increase data variation, and improve accuracy, patch extraction and data augmentation are applied prior to training. Furthermore, class weight balancing is used to avoid biased networks, since the number of background pixels is much higher than the number of prostate pixels; the class imbalance problem is addressed by using a weighted cross-entropy loss function during training of the CNN models. The performance of the CNNs is evaluated in terms of the Dice similarity coefficient (DSC), and our experimental results show that patch-wise DeepLabV3+ gives the best performance, with a DSC of 92.8%. This is the highest DSC score compared with FCN, SegNet, and U-Net, and it is also competitive with a recently published state-of-the-art method for prostate segmentation.
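To illustrate the class-imbalance handling and the evaluation metric described in the abstract, the sketch below shows a class-weighted cross-entropy loss and a Dice similarity coefficient (DSC) computation. This is a minimal sketch only: PyTorch is assumed as the framework (the paper does not state its implementation here), and the pixel-count fractions, tensor shapes, and the helper name dice_coefficient are illustrative placeholders rather than values or code from the paper.

```python
# Sketch of weighted cross-entropy (class-imbalance handling) and DSC scoring.
# Assumed framework: PyTorch. Class fractions below are illustrative only.
import torch
import torch.nn as nn

# Inverse-frequency class weights: background pixels vastly outnumber
# prostate pixels, so the prostate class receives a larger weight.
pixel_fractions = torch.tensor([0.97, 0.03])       # assumed background/prostate fractions
class_weights = 1.0 / pixel_fractions
class_weights = class_weights / class_weights.sum()

# Weighted cross-entropy over per-pixel logits (N, C, H, W) vs. labels (N, H, W).
criterion = nn.CrossEntropyLoss(weight=class_weights)

def dice_coefficient(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary prostate masks."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    return ((2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)).item()

# Example with dummy tensors: compute the training loss and score a prediction.
logits = torch.randn(1, 2, 256, 256)               # dummy network output
labels = torch.randint(0, 2, (1, 256, 256))        # dummy ground-truth mask
loss = criterion(logits, labels)
dsc = dice_coefficient(logits.argmax(dim=1), labels)
```

Weighting the loss by inverse class frequency is one common way to keep the network from defaulting to the background class; the paper reports using a weighted cross-entropy loss, but the specific weighting scheme shown here is an assumption.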
Keywords: encoder–decoder; CNNs; DNN; FCN; SegNet; U-Net; DeepLabV3+
MDPI and ACS Style

Khan, Z.; Yahya, N.; Alsaih, K.; Ali, S.S.A.; Meriaudeau, F. Evaluation of Deep Neural Networks for Semantic Segmentation of Prostate in T2W MRI. Sensors 2020, 20, 3183.
