Review

Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges

1 Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
2 Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan
3 Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
* Author to whom correspondence should be addressed.
Diagnostics 2021, 11(11), 1964; https://doi.org/10.3390/diagnostics11111964
Submission received: 9 August 2021 / Revised: 14 October 2021 / Accepted: 19 October 2021 / Published: 22 October 2021
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)

Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.

1. Introduction

Owing to the recent rise of high-resolution imaging modalities such as X-ray computed tomography (CT) and magnetic resonance imaging (MRI), medical practitioners rely on spatial visualization of internal organs to evaluate disease and make timely clinical decisions. Even though radiological assessment of imaging studies is still largely visual and based on domain knowledge and expertise, there is an increasing shift towards quantitative and volumetric disease assessment for precision medicine [1,2]. This step requires accurate tissue segmentation, which can improve disease characterization through the detection of abnormalities on images and their division into semantically, biologically and/or clinically meaningful regions based on quantitative imaging measurements.
MRI is increasingly used for the diagnosis, staging and treatment response evaluation of pelvic cancers. With advancing imaging technologies and computer processing hardware, imaging diagnostics for cancer characterization, treatment assessment and patient follow-up are evolving. Quantitative imaging techniques are showing promise in providing information that can enhance the understanding of diseases and support patient care. For instance, multi-parametric MRI, which combines anatomical imaging with one or more functional MR sequences, is now widely used for pelvic tumors. Recently, diffusion-weighted (DW) MRI has become widely regarded as a reliable quantitative imaging technique that can provide more sensitive disease detection and early assessment of treatment response [3]. Additionally, magnetic resonance fingerprinting (MRF) [4] has encouraged developments towards simultaneous assessment of quantitative tissue MR relaxivity.
In radiation oncology, the segmentation of organs-at-risk (OARs) and target volumes is a necessary step in planning optimal dose delivery to tumors while avoiding toxicity to surrounding healthy tissues. Accurate segmentation of these structures is also vital during radiotherapy (RT) for effective image-guided treatment.
Radiomics, an image analysis approach, aims to provide additional insight from scan images that may not be fully appreciated by the human eye. It has shown potential in detecting distinct imaging phenotypes as indicators for biological behavior, therapeutic responses and treatment outcomes [5]. However, radiomics is also often reliant on disease segmentation to inform disease stratification or treatment outcomes. These applications demand increasing levels of manual region of interest (ROI) delineations which may also be subject to inter- and/or intra-operator variabilities [6], thus driving the rapid development of computer-assisted segmentation technologies to improve consistency.
Traditionally, segmentation is performed manually by radiologists and radiation oncologists, which is time-consuming [7] and may be associated with inter- and/or intra-operator variabilities [6,8]. In RT, the time required for manual segmentation (MS) is also a rate-limiting step for adaptive radiotherapy (ART). ART is a treatment procedure that aims to account for temporal changes in patient anatomy and, potentially, tumor biology between therapy fractions [9]. Furthermore, in RT clinics with limited resources and patient capacity, significant delays caused by MS have been reported to adversely affect patient admissions as well as overall survival rates [10,11]. Therefore, significant research attention has been directed towards addressing these shortcomings in medical image segmentation.
With remarkable advancements in computer hardware, deep learning (DL) techniques have emerged as potentially revolutionary solutions for clinical applications, owing to their capability to learn intricate features from very large medical datasets. Adoption of advanced DL techniques by clinics may lead to significant improvements to current radiological and RT workflows. Computer-assisted segmentation technologies are continuously evolving, creating the need for a comprehensive review of the state-of-the-art approaches developed for cancer diagnosis, treatment planning and response monitoring. Although previous publications have provided technical reviews of recent automatic medical image segmentation approaches [12,13,14,15,16,17], some with a particular focus on radiology [18] and radiation oncology [19,20], few studies have surveyed the clinical value and potential of DL-based segmentation approaches for the different types of cancer in the pelvis. In this review, our multidisciplinary team provides an up-to-date overview of the current DL techniques used for pelvic cancer segmentation, pinpoints key achievements and discusses limitations for potential adoption in clinical practice.

2. Background

2.1. What Is Deep Learning?

Artificial intelligence (AI) is the concept and theory behind creating the ability for machines to learn and accomplish human-like intelligence [21]. DL is a sub-category of AI, inspired by the human cognition system. Unlike traditional machine learning (ML) approaches that rely on pre-programmed sets of instructions and manually-curated input data, DL offers the possibility of automatic feature extraction and learning from “raw data”. Whilst many people perceive DL to be a 21st century invention, the first wave of research on how human/animal brains learn, also known as cybernetics, started in the 1940s [22,23]. It was not until 1958 that the first fundamental component of artificial neural networks (ANNs), the perceptron, was developed, and a single-layer architecture was trained [24]. However, after a period of stagnation, the second wave of DL research, connectionism, began in the 1980s–1990s after the introduction of the backpropagation concept [25]. Backpropagation facilitated training of ANNs with one or two hidden layers for the first time. Nevertheless, due to a lack of adequate computational processing power and increased pessimism regarding real-world applications of DL in the mid-1990s, this wave of DL research was also short-lived. The current and third wave began in 2006, with the development of convolutional neural networks (CNNs) [26], which allowed algorithms to be trained with significantly more efficiency than the traditional dense architectures (for example, fully-connected networks). A key innovation in this approach was the realization that sharing trained parameters (weights and biases of each perceptron) across the image through a convolution kernel enabled the development of much deeper networks for image processing than the previously available architectures [27]. Today, CNNs play a central role in AI design across a wide range of industries.
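To make the parameter-sharing idea concrete, the following is a minimal, illustrative numpy sketch of a 2D convolution in which one small kernel is reused at every image position (the image and kernel values are arbitrary, not taken from any cited work):

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2D convolution: the same small kernel scores every patch."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The identical kh x kw weights are shared across all positions,
            # instead of learning separate weights per pixel as in a
            # fully-connected layer.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(64, 64)        # toy grayscale "scan"
kernel = np.array([[1., 0., -1.],     # 9 shared weights in total, versus
                   [2., 0., -2.],     # thousands of weights per output
                   [1., 0., -1.]])    # unit in a dense layer
print(conv2d(image, kernel).shape)    # (62, 62)
```

This sharing is what keeps the parameter count of deep convolutional architectures tractable as the number of layers grows.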

2.2. Deep Learning in Oncology

The interpretation of medical images is successfully undertaken by radiologists and radiation oncologists; however, their approach is often subjective and influenced by clinical experience. Depending on prior experience, humans may not be able to fully account for the range of features present on scan images. This limitation can be exacerbated by the variable appearances of tumors in cancer patients. In recent times, AI has shown potential in automatic extraction of complex image features not necessarily visible to the human eye [27].
DL-based approaches have been readily deployed for clinical research since the introduction of CNNs. In oncology, the major applications of DL include tumor characterization (detection, segmentation and staging) [17,28,29,30,31,32,33], clinical outcome prediction [34,35], image synthesis [36,37] and RT dose-response modelling [38,39]. For an in-depth overview of AI applications beyond autosegmentation in radiology and radiation oncology, we refer the reader to previous studies by Boldrini et al. [19] and Meyer et al. [20]. We conducted an online search on Google Scholar with the keywords “deep learning” and “medical image segmentation” for studies published between January 2016 and December 2020. The results revealed that the number of DL-based segmentation studies in medicine is rising rapidly. A publication search with the additional keyword “cancer” indicated that cancer research accounts for a large proportion of recent DL-based medical image segmentation studies (Figure 1).

2.3. Quantitative Imaging for Cancer Diagnosis, Characterization and Assessment of Treatment Response

MRI is increasingly adopted by radiologists for diagnostic and therapeutic purposes [40,41,42,43]. MRI is especially advantageous for pelvic cancer diagnosis, as its higher contrast-resolution compared with CT facilitates visualization and localization of suspicious lesions, delineation of disease extent, and subsequently enables targeted biopsy [44] and therapy planning [45]. Segmentation of target pelvic organs and tumors can be used to render disease volume, which can be further registered with patient scans from different imaging modalities for treatment planning. Tumor characterization is a broad term, which includes diagnosis, segmentation (differentiating from non-tumor tissues), staging (disease extent) and inferring its biological behavior. These applications may be enhanced by quantifying imaging characteristics such as size, shape and texture.
Tumor size measurement is important as it directs clinical decisions on the choice of treatment and the evaluation of treatment response [46,47]. Disease monitoring is essential for assessing response to RT and chemotherapy. The general workflow includes assessment of the tumor across longitudinal scans and quantitative measurements according to predefined criteria (for example, the Response Evaluation Criteria in Solid Tumors (RECIST) or the World Health Organization (WHO) guidelines [48]). However, unidimensional tumor measurements can be limiting, and volumetric assessment may be more robust. In addition, functional MRI techniques such as DW–MRI can be used to derive quantitative measurements that reflect different aspects of tumor biology. The apparent diffusion coefficient (ADC) is an imaging biomarker related to tissue cellularity and has shown promise for the early evaluation of treatment response [49,50].
Radiomic analysis of tumors, a voxel-wise assessment using imaging features derived from CT or MR images or from quantitative MRI parametric maps (for example, ADC), has shown promise for evaluating tumor aggressiveness [51] and for prognostic modelling [52]. Radiomics can be used to correlate phenotypical tumor characteristics with diagnostic and/or prognostic factors. However, such applications rely on accurate segmentation of tumors, which, when undertaken manually, is both laborious and subjective [6,53]. Hence, automated and robust tumor segmentation tools are highly desirable for the rapid quantitative characterization of cancers.

2.4. Radiotherapy Treatment (RT) Planning and Optimization

CT remains the mainstay imaging modality for RT treatment planning due to its high acquisition speed and high spatial resolution, and because it provides relative electron density information. However, CT lacks the desired soft-tissue contrast for accurate delineation of organs and tumors where the electron densities of neighboring structures are not significantly different. Therefore, in radiation oncology, gross tumor volumes (GTVs) are sometimes derived from MRI for more accurate delineations [54]. Examples of GTV delineation on MRI and CT are shown in [55,56]. Within a treatment planning system (TPS), the radiation oncologist initially identifies the target volumes and OARs. A series of target volumes are defined according to the criteria reported by the International Commission on Radiation Units and Measurements (ICRU) [57], based on initial tumor identification, expanded to include subclinical disease, and, finally, a planning target volume (PTV) to account for day-to-day setup variation. Consistent identification of these target volumes during treatment using automated segmentation frameworks could help to reduce the expansion margins currently employed, and therefore limit irradiation of normal tissue. Despite defined delineation protocols, inter-observer variation in target delineation is the greatest source of uncertainty, necessitating an additional margin of error to be employed in creating the PTV [58]. Image-guided radiation therapy (IGRT) techniques are increasingly attracting research attention to mitigate these shortcomings, allowing clinicians to objectively monitor the position of target volumes and adapt treatment plans before and/or during each fraction. ART is a potentially promising treatment procedure that suits tumor sites with large inter-fraction deformability (for example, bladder, cervix, prostate, rectum); it allows better sparing of the OARs from radiation toxicity. However, the need to redefine ROIs for each ART fraction poses a significant limitation in routine treatment workflows. Thus, fast, accurate and automatic segmentation of ROIs is considered the central requirement for the adoption of ART in clinical practice.

2.5. Automatic Image Segmentation

Traditional segmentation algorithms were low-level image feature extractors (for example, intensity-based and edge-based). Common methods included intensity thresholding, region growing and edge detection, which selected semantic image regions solely based on visual information from the input images. More advanced mechanisms, such as uncertainty and optimization algorithms, were introduced to overcome the limitations of these heuristic approaches. For instance, deformable models (for instance, active contours [59] and level-set algorithms [60]) were developed to allow contours to expand/contract to enclose distinctive regions. Graph-based methods (for instance, graph cuts [61] and the watershed algorithm [62]) applied the principles of graph theory to segment images based on inter-voxel relationships. Probability-based algorithms (for example, Bayesian classifiers [63,64], Gaussian mixture models, clustering, k-nearest neighbors [65] and ANNs) were developed to automatically assign individual voxels to different classes. However, these approaches lacked contextual information, which led to suboptimal segmentations. Although such algorithms can be combined with Markov random field models to alleviate this drawback [66], their success is strongly dependent on manual human interaction. Atlas-based approaches were proposed to incorporate prior knowledge into segmentation algorithms. Early atlas-based algorithms consisted of a single atlas (a manually defined set of regions on an existing reference image dataset) from which the contours from the reference image were transferred to the new image following deformable registration [67]. However, segmentation heavily relied on registration accuracy and organ morphology, leading to suboptimal contours, especially for patients with unusual anatomy.
Later approaches proposed the use of more advanced atlas selection techniques [68,69], selection of an atlas containing average patient anatomy information [70] and multi-atlas segmentation as prior knowledge [67,71]. Currently, multi-atlas algorithms are the most common techniques used in defining target tumor volumes [72]. Nonetheless, the major limitations with atlas-based methods remain the considerable computational and time constraints. Currently, an array of software programs is available for automatic registration and segmentation of tumors using pre-defined templates and deformable contour propagations [73,74]. However, these programs are not suitable for pelvic cancers due to unclear boundaries between the gross tumor and subclinical malignant regions [75]; tumor contouring heavily relies on clinicians’ experience.
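Before turning to DL, the sketch below illustrates perhaps the simplest of the heuristic techniques above, intensity thresholding, using Otsu's between-class variance criterion (a pure-numpy, illustrative implementation; libraries such as scikit-image provide production versions):

```python
import numpy as np

def otsu_threshold(img: np.ndarray, bins: int = 256) -> float:
    """Return the intensity cut that maximizes between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                # probability mass of the "dark" class
    mu = np.cumsum(p * centers)      # cumulative mean intensity
    mu_t = mu[-1]                    # global mean intensity
    with np.errstate(divide="ignore", invalid="ignore"):
        # Otsu's between-class variance for every candidate threshold.
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(sigma_b)]

img = np.random.rand(128, 128)       # stand-in for a CT/MR slice
mask = img > otsu_threshold(img)     # crude binary "segmentation"
```

Such methods are fast and transparent but, as noted above, they exploit only local intensity information and ignore anatomical context.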
DL-based segmentation methods have shown enormous potential in computer-assisted clinical applications due to their ability to learn complex information from very large datasets. Unlike traditional auto-segmentation approaches that rely on human-defined heuristics, CNNs are able to automatically capture the pertinent information contained within existing (training) datasets needed for successful segmentation. CNNs are generally formed by stacking several layers (for example, convolutional/deconvolutional, fully-connected, pooling and upsampling layers), each of which performs a key operation on the input images (see Figure 2a for a basic CNN classification architecture). Conventionally, CNNs performed pixel/voxel-wise classification, labelling pixels/voxels independently to form ROIs; however, this was computationally inefficient due to repetitive iterations of identical convolutional operations throughout the image. In 2015, Long et al. [76] introduced fully-convolutional networks (FCNs) to mitigate the limitations of fully-connected layers (the final set of layers in a CNN) in extracting local spatial correlations. The FCN architecture includes symmetrical encoding and decoding paths, which enable learning of both low- and high-level feature representations in images (Figure 2b). One of the most popular DL architectures used for medical image segmentation is U-Net [77], a special type of FCN with the addition of skip-connection pathways between encoders and decoders (Figure 2c). In recent years, many variations of U-Net and FCNs have been published to enhance segmentation performance across a wide range of medical applications. Typical examples include 3D U-Net [78], V-Net [79], DeepMedic [80] and DeepLab [81]. We direct the reader to [12,14,18,82] for comprehensive technical overviews of the DL architectures used in recent medical research.
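As a minimal PyTorch sketch of the U-Net idea (an encoder, a bottleneck and a decoder joined by a skip connection), consider the toy model below; the layer sizes, class count and input shape are illustrative assumptions, not a reproduction of any published architecture:

```python
import torch
import torch.nn as nn

def block(c_in: int, c_out: int) -> nn.Sequential:
    """Two 3x3 convolutions: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """One-level encoder-decoder with a single skip connection."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.enc = block(1, 16)                     # low-level features
        self.down = nn.MaxPool2d(2)
        self.bottleneck = block(16, 32)             # high-level features
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)       # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, n_classes, 1)     # per-pixel class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([e, self.up(b)], dim=1))  # skip connection
        return self.head(d)

logits = TinyUNet()(torch.randn(1, 1, 64, 64))  # -> shape (1, 2, 64, 64)
```

The skip connection is the key ingredient: it reinjects fine spatial detail from the encoder into the decoder, which is largely why U-Net variants dominate medical image segmentation.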

Evaluating the Quality and Success of Segmentation

One of the most broadly-used metrics for comparing automatically-generated contours with the ground truth is the Dice similarity coefficient (DSC) [83]. The DSC evaluates the overlap between two sets of contours (A and B) divided by their mean area; it ranges from 0 to 1, where higher values correspond to more accurate segmentation results (Equation (1)). Because it accounts for both false positives and false negatives, it is superior to accuracy, which only counts correctly-identified pixels/voxels. Another variation of the DSC reported in the literature is the surface Dice similarity coefficient (SDSC) [84], which, with the addition of a tolerance parameter τ, incorporates inter-observer variabilities in measuring the overlap between two surfaces. The intersection-over-union (IoU), or Jaccard index (JI), is another segmentation metric reported in the literature [85] (Equation (2)).
$$\mathrm{DSC} = \frac{2\,|A \cap B|}{|A| + |B|} \tag{1}$$
$$\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|} \tag{2}$$
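As a minimal illustration, Equations (1) and (2) can be computed directly on binary masks; the numpy sketch below uses toy rectangular masks as stand-ins for real contours:

```python
import numpy as np

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient, Equation (1), for boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union / Jaccard index, Equation (2)."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

auto = np.zeros((64, 64), dtype=bool); auto[10:40, 10:40] = True  # prediction
gt = np.zeros((64, 64), dtype=bool); gt[15:45, 15:45] = True      # reference
print(f"DSC = {dsc(auto, gt):.3f}, IoU = {iou(auto, gt):.3f}")
```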
One limitation associated with volume-based segmentation evaluation metrics (for instance, DSC, IoU) is their lack of sensitivity to the boundary of contours with potential spatial co-location. This is especially important in radiation oncology, where the contours of adjacent organs/target disease volumes may signify the difference between irradiated and at-risk regions. Therefore, distance-based metrics are used as additional indicators to assess segmented contours. The Hausdorff distance (HD) [86] is defined as follows (Equations (3) and (4)):
$$\mathrm{HD}(A, B) = \max\big(h(A, B),\, h(B, A)\big) \tag{3}$$
$$h(A, B) = \max_{a \in A} \min_{b \in B} \lVert a - b \rVert \tag{4}$$
where h(A,B) is the largest distance from a point in A to the nearest point in B.
HD is generally inversely correlated with segmentation accuracy. Additionally, the mean surface distance (MSD) is defined as (Equation (5)):
$$\mathrm{MSD} = \frac{1}{|A| + |B|} \left( \sum_{a \in A} \min_{b \in B} d(a, b) + \sum_{b \in B} \min_{a \in A} d(b, a) \right) \tag{5}$$
where d(a,b) corresponds to the distance between points a and b.
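The distance-based metrics above admit an equally short sketch; the brute-force pairwise computation below is illustrative only (for real contours, library routines such as SciPy's directed Hausdorff distance are preferable):

```python
import numpy as np

def nearest_distances(a_pts: np.ndarray, b_pts: np.ndarray) -> np.ndarray:
    """For each point in a_pts, Euclidean distance to its nearest b_pts point."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return d.min(axis=1)

def hausdorff(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """Symmetric Hausdorff distance, Equations (3) and (4)."""
    return max(nearest_distances(a_pts, b_pts).max(),
               nearest_distances(b_pts, a_pts).max())

def msd(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """Mean surface distance, Equation (5)."""
    total = (nearest_distances(a_pts, b_pts).sum()
             + nearest_distances(b_pts, a_pts).sum())
    return total / (len(a_pts) + len(b_pts))

rng = np.random.default_rng(0)
a = rng.uniform(0, 10, size=(200, 2))        # toy contour point set A
b = a + rng.normal(0, 0.5, size=a.shape)     # perturbed point set B
print(f"HD = {hausdorff(a, b):.2f}, MSD = {msd(a, b):.2f}")
```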
In the following sections, we review DL-based segmentation publications for different cancer types within the pelvis.

3. Literature Review

The literature review in this study was conducted by an initial article search in PubMed/Medline and ScienceDirect databases with the keywords “deep learning”, “segmentation”, “cancer”, “organs at risk”, “radiation oncology”, “radiology” and “radiotherapy”, and a subsequent manual reference check of the relevant publications. This approach aimed to create a clinically-oriented overview of the DL-based pelvic segmentation algorithms currently used in pelvic cancers. The exclusion criteria for the retrieved publications were as follows:
  • non-DL segmentation techniques;
  • segmentation applied to sites other than the pelvis;
  • no training/validation of methods on real patient data;
  • image modalities used other than CT and MRI;
  • full articles published in languages other than English;
  • no clinical application focus or published outcome.
Overall, we included 74 relevant studies on bladder, cervical, prostate and rectal cancer segmentation applications to present a comprehensive review of the state-of-the-art approaches.

3.1. Bladder Cancer

Segmentation of the inner and outer bladder wall and of tumors on MRI plays an important role in the diagnosis and staging of urinary bladder cancer, as MRI provides excellent soft-tissue visualization. On CT, bladder disease segmentation can provide clinicians with insight into tumor progression and treatment response monitoring [87,88]. Bladder segmentation on MRI is a challenging task due to large inter-patient anatomical variations, signal inhomogeneities in the urine caused by motion artefacts, and unclear soft-tissue boundaries [89,90]. The difficulty of segmentation increases with the presence of cancer in the bladder. Previous studies performed automatic bladder segmentation using adaptive Markov random fields [91], adaptive shape-prior constrained level sets [92] and statistical shape-based algorithms [33]. However, a lack of generalizability due to large anatomical discrepancies in patient populations, together with the need for manual feature and parameter selection, prevented their widespread clinical adoption.
To overcome this limitation, Ma et al. [88] developed a U-Net that improved bladder segmentation on CT compared with their previous combined CNN and level-set segmentation algorithm [93], particularly on lower-resolution images and scans from patients with locally-advanced urinary bladder cancer. However, the authors reported that contrast-enhanced CT images added complexity to segmentation due to the variable appearance of the bladder, caused by urine motion and filling from excreted contrast material. Xu et al. [94] proposed a 3D bladder segmentation framework on CT involving a fully-connected conditional random field recurrent neural network (CRF–RNN) and fine-localized bladder probability maps; they reported that their approach outperformed the state-of-the-art V-Net algorithm for volumetric segmentation of the bladder. On the other hand, only the study published by Dolz et al. [95] incorporated DL for bladder cancer segmentation on MRI. The authors developed a U-Net to perform multi-region semantic bladder segmentation and reported that this approach outperformed traditional non-DL autosegmentation techniques. We hypothesize that the paucity of published studies on the use of DL in bladder cancer segmentation may be due to the lack of public, annotated datasets, as well as the lower prevalence of the disease compared with other pelvic cancers (see Table 1 and Figure 3).

3.2. Cervical Cancer

Segmentation of cervical tumors remains a challenging task due to large geometrical variations in patient populations and indistinct soft-tissue boundaries. Previous studies have reported the utility of DW–MRI and ADC for cervical cancer staging, histological grading and nodal status evaluation [158]. Despite growing interest in quantitative assessment of tumors in radiology, to date, only one study, by Lin et al. [17], has incorporated DL for automatic segmentation and radiomic feature extraction of cervical tumors from ADC maps. The authors demonstrated that their framework outperformed previous ML techniques by a factor of two, potentially providing clinicians with an automated tool to minimize tumor delineation (GTV-equivalent) discrepancies. Moreover, Breto et al. [102] developed a Mask R–CNN framework for automatic segmentation of OARs and GTVs for MR-only RT treatment planning in patients with locally advanced cervical cancer. The authors reported that, while the generated contours for the cervix, rectum, bladder, uterus, femur and sigmoid were in good agreement with expert MS, their network underperformed for smaller and less distinctive soft-tissue structures such as the vagina, parametrium and mesorectum. However, their results were based on only five test patients and were not clinically validated. The considerable segmentation complexities in cervical cancer, as well as the lack of high-quality, annotated databases, may also have contributed to the low number of studies on DL-based segmentation of cervical tumors on MRI (Table 1).
In the RT literature, Wang et al. [99] proposed a 3D U-Net model for delineating the clinical target volume (CTV), which typically encompasses the tumor, cervix, uterus, ovaries and parametria, together with OARs, on CT scans from 25 patients, and suggested that their automatic contours were as accurate as MS performed by a clinical resident with 8 months' experience. Liu et al. [97] developed a 3D U-Net architecture for segmentation of OARs and reported that, on expert oncologist evaluation (>15 years of experience), over 90% of their generated contours were “highly acceptable” for RT planning. However, this network underperformed for CTV delineations. In a later study, the authors developed a dual-path U-Net (DpnUNet) with more hidden layers, making it more suitable for CTV segmentation, where tissue boundaries are unclear. However, despite promising segmentation results, their framework was only evaluated on patient scans from a single institution. In contrast, Rhee et al. [101] used a V-Net [79] model to generate CT treatment plans and reported that their algorithm achieved average clinical acceptance rates of 80%, 97% and 90% for primary CTVs, OARs and bony structures, respectively. Their framework was validated on 30 cervical cancer patients scanned across three hospitals. The publications on cervical cancer segmentation are listed in Table 1.

3.3. Prostate Cancer

Previous review studies have investigated various automatic segmentation approaches. However, only one, published by Almeida and Tavares [16], provided a systematic review of advances in prostate segmentation, covering 28 publications up to 2019 (CT: 9, MRI: 19). This study provides an up-to-date review of 52 publications on prostate and/or prostate cancer segmentation (CT: 12, MRI: 40) (see Table 1). Based on our literature search, it is apparent that in recent years, clinical attention on segmentation of prostate cancers has gravitated towards MRI due to its unparalleled soft-tissue contrast. The literature on automatic segmentation of prostate tumors themselves remains limited, in part because of the technical challenges imposed by the relatively small size of the tumors and background changes within the prostate gland, and also because major treatments (for example, RT) are usually directed towards the whole prostate gland rather than the focal disease. However, as automated decision support tools for prostate cancer diagnosis in MRI are being developed, and as internal radiation boosts for prostate cancer and other focal therapies become more widely used, prostate cancer segmentation will become increasingly important.
At present, whole prostate gland (WG), central gland (CG), transition zone (TZ) and peripheral zone (PZ) segmentations have been developed to aid disease assessment and prostate cancer staging [159]. WG segmentation is also the basis for RT planning. Earlier prostate zonal segmentation algorithms included active appearance models [160], continuous max-flow [161] and C-means algorithms [162]. However, these techniques failed to generalize to patient populations from multiple institutions. Driven by high clinical demand and technological advancement, DL rapidly found its way into prostate segmentation research. Amongst the MRI-based prostate segmentation studies in our review, 33 performed segmentation of the WG. However, of these publications, only eight also investigated CG, TZ and PZ segmentations [115,120,121,125,126,127,134,147]. In these studies, WG segmentation accuracy was superior to that of PZ and TZ due to large anatomical variations and indistinguishable soft-tissue boundaries. Moreover, only four studies provided results on prostate cancer segmentation on MRI [117,125,134,145] (see Table 1).
From the 40 reviewed MRI-based prostate segmentation publications, 32 and 4 used 2D and 3D imaging data for training their DL networks, respectively, whilst one study used a combination of 2D and 3D input MRI to train their segmentation algorithms. Additionally, the MR imaging acquisition mode was unspecified for one or all MRI contrasts in three studies. Although using volumetric images for training incorporates vital spatial information for organs, it requires considerable computational resources to facilitate training. One advantage of training DL algorithms with 2D convolutional kernels is the ability to use knowledge transfer (transfer learning) from previous models trained on natural images in order to achieve greater segmentation performance. Tian et al. [29] proposed a variant of FCN called PSNet, and through transfer learning, achieved satisfactory results. Zhu et al. [144] developed a CNN with deep supervision to better capture multi-level feature maps. Attempting to investigate the performance of generative adversarial networks (GANs), Birbiri et al. [116] proposed a conditional GAN (cGAN) and reported that their algorithm with a U-Net generator outperformed the standalone U-Net model. On the other hand, benefiting from volumetric model training, Milletari et al. [79] developed a 3D CNN called V-Net to perform prostate gland segmentation. Feng et al. [137] used a multi-task FCN for training in a semi-supervised manner to overcome lack of adequate training data. Zhu et al. [118] proposed a boundary-weighted strategy to enforce feature learning at the base and apex of the prostate from a limited training dataset.
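As a minimal PyTorch sketch of the 2D transfer-learning strategy described above (the ResNet-18 backbone, frozen-encoder setup and toy segmentation head are illustrative assumptions, not the cited authors' configurations):

```python
import torch
import torch.nn as nn
from torchvision import models

# Encoder pretrained on natural images (ImageNet); the loading API varies
# across torchvision versions ('weights=' replaces 'pretrained=' in newer
# releases).
backbone = models.resnet18(pretrained=True)
encoder = nn.Sequential(*list(backbone.children())[:-2])  # drop pool and fc

for p in encoder.parameters():      # freeze the pretrained weights so that
    p.requires_grad = False         # only the new head is fine-tuned

head = nn.Sequential(               # toy per-pixel classification head
    nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
    nn.Conv2d(64, 2, 1))            # prostate vs. background

x = torch.randn(1, 3, 224, 224)     # a 2D MR slice replicated to 3 channels
logits = head(encoder(x))           # -> shape (1, 2, 224, 224)
```

In practice, the encoder is often unfrozen and fine-tuned at a lower learning rate once the new head has converged.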
The considerable difficulty in automatic delineation of pelvic organs has inspired the introduction of various segmentation challenges, including PROMISE12 [163], ASPS13 [164] and PROSTATEx [165]. Amongst the articles reviewed in this study, 28 publications used public datasets for network training and/or validation. For example, Yu et al. [166] developed a 3D CNN with mixed long and short residual connections that enabled high training efficiency and superior feature learning capability from small training datasets. This framework outperformed the other proposed algorithms in the PROMISE12 challenge in 2018. Moreover, Brosch et al. [139] developed a framework containing regression-based boundary detection and CNN-based prediction of the distance between a surface mesh and its associated boundary points, which ranked first in the PROMISE12 challenge in 2019. Geng et al. [124] proposed an encoder-decoder architecture with dense dilated pyramidal pooling and, after validating their technique on the PROMISE12 and ASPS13 datasets, reported that their framework outperformed the then state-of-the-art algorithms for segmentation. Dai et al. [117] developed a region-based CNN (Mask R–CNN) and suggested that their approach was able to perform end-to-end segmentation of the prostate as well as of highly suspicious lesions from the PROSTATEx repository. Based on our literature search, it is evident that the introduction of segmentation challenges, along with public, annotated databases for prostate cancer, has encouraged research from the wider ML community. The available databases and publications for prostate segmentation are listed in Table 2.
Traditionally, segmentation of OARs for RT planning in prostate cancer was performed using volumetric deformable surface models [170], organ-specific modelling [171] and atlas-based techniques [74]. However, contours produced by these techniques were poor for patients with abnormal anatomy and for data from external institutions, hindering their integration into online adaptive treatments. Therefore, recent studies have employed DL-based algorithms to develop more efficient, generalizable and consistent segmentation pipelines. The current RT planning workflow uses CT for ROI contouring and radiation dose estimations; hence, despite poor soft-tissue contrast, segmentation on CT remains desirable. Ma et al. [31] proposed a framework combining a 2D CNN with multi-atlas label fusion to segment ROIs on CT. Balagopal et al. [112] used a 2D–3D hybrid U-Net model containing aggregated residual networks (ResNeXt) to enhance feature learning capability, and achieved an average DSC of 0.9; however, this was based on ground-truth data defined by only one expert. Wang et al. [107] proposed a 3D FCN with boundary-sensitive representations for enhanced organ-specific feature learning and verified their results on data from 313 patients acquired from multiple CT scanners. On the other hand, Dong et al. [106] used a Cycle-Consistent Generative Adversarial Network (CycleGAN) to generate synthetic MRI from CT to enhance their algorithm's soft-tissue learning capability; however, the impact of registration on contour propagation from MRI to CT was not reported. MRI-only RT planning has also been proposed to mitigate these geometrical uncertainties. To the best of our knowledge, there are no public CT databases for prostate segmentation and RT planning.

3.4. Rectal Cancer

MRI is the technique of choice for the diagnosis and preoperative staging of rectal cancer [172]. MRI is more accurate than CT in the diagnosis, staging and treatment planning of rectal cancer, and also provides quantitative tumor assessment, which can inform treatment response assessment and disease outcomes [173]. Although numerous studies on automatic contouring of pelvic tumors have been published in recent years [101,174,175,176,177], only a few addressed rectal cancer [32,152,178]. Based on our article search, nine studies incorporated DL for rectal cancer segmentation applications (CT: 2, MRI: 6, MRI/CT: 1) (Table 1). Trebeschi et al. [157] published the first CNN-based rectal tumor segmentation study on multi-parametric MRI. Their framework included classification of fixed patches and segmentation of the identified voxels. Although this approach was designed to reduce image redundancy, it ignored contextual information, which adversely affected their network's generalizability in cross-institution model evaluations. Huang et al. [156] developed a volumetric hybrid-loss fully-convolutional network (HL-FCN) that used a Dice-based loss to overcome class imbalance in their training data; however, their results were not clinically evaluated. Jian et al. [28] proposed an FCN-based segmentation framework and used transfer learning to outperform the conventional U-Net architecture for rectal tumor segmentation on MRI. Similarly, Wang et al. [154] deployed an FCN model initialized from a pre-trained ResNet50 model to enrich hierarchical feature extraction during network training. The authors evaluated their results on 107 patients from four centers and reported that their network was superior to U-Net for tumor contouring. Unfortunately, due to a shortage of public databases, direct and meaningful comparison of these algorithms for rectal cancer segmentation remains challenging.
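As an aside on the Dice-based loss mentioned above, the sketch below shows a generic soft Dice loss for binary masks in PyTorch (an illustrative formulation, not the HL-FCN implementation of [156]):

```python
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """1 minus the soft Dice score for a batch of binary masks.

    logits: (N, 1, H, W) raw network outputs; target: (N, 1, H, W) in {0, 1}.
    Being overlap-based, the loss remains informative even when the
    foreground occupies a tiny fraction of the image, which is why
    Dice losses help with class imbalance.
    """
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * inter + eps) / (denom + eps)
    return 1 - dice.mean()

loss = soft_dice_loss(torch.randn(2, 1, 32, 32),
                      (torch.rand(2, 1, 32, 32) > 0.9).float())
```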
To date, only three studies have been published on the use of DL for rectal cancer RT treatment planning on CT images. Men et al. [152] proposed a 2D CNN with dilated convolutions and suggested that their network outperformed the traditional U-Net architecture. However, the authors reported that their model failed to accurately segment the colon and intestine due to large inter-patient anatomical variabilities and the inhomogeneous distribution of contrast material and gas in these structures. Song et al. [32] investigated DeepLabV3+ and ResU-Net architectures for OAR and CTV segmentation, and suggested that, while automatic contouring using these models outperformed the framework proposed by Men et al. [152], they offered different advantages for feature extraction and contouring of pelvic structures. While ResU-Net was reported to be effective for segmenting visually distinctive structures (for example, femoral heads, bones), DeepLabV3+ achieved superior segmentation performance for soft tissues with unclear boundaries (for example, bladder/small intestine). Their results were in line with a later study by Men et al. [151], who employed cascaded convolutions along with spatial pyramid pooling (SPP) to enhance CTV delineations. However, both of these techniques were based on 2D training, which disregards the inter-slice spatial information of OARs and tumor volumes.

4. Discussion

Significant research attention has recently shifted towards bridging the gap between computer vision and patient care. In this review, we presented an overview of recent DL-based automatic segmentation algorithms used in bladder, cervical, prostate and rectal cancers across 74 studies. We included studies that used CT and/or MR images as input to their DL-based analyses. CT is widely used as the imaging modality of choice for radiation dose estimations and RT treatment planning. However, the inadequate soft-tissue contrast on CT necessitates the concurrent adoption of MRI for enhanced visualization of pelvic structures and improved accuracy of tumor definition, leading to potential segmentation uncertainties caused by mis-registration. On the other hand, the major limitation of cancer tumor segmentation on MRI remains the difficulty in confidently distinguishing abnormal structures from healthy tissues, due to highly variable inter-patient geometrical appearances and potentially poorly-defined soft-tissue boundaries.
Unfortunately, unlike DL applications for natural images, access to medical images for training and evaluating algorithms is restricted. This limitation is largely due to patient data privacy and labor-intensive ground-truth contour definitions. Difficulty in accessing high-quality and adequately large in-house repositories may dampen research motivation from the wider ML community. We demonstrated, through a comprehensive literature review, that, although partially attributable to the higher prevalence of prostate cancer, the introduction of grand MRI segmentation challenges and publicly-accessible datasets has played an important role in driving prostate cancer research forward. Regrettably, to the best of our knowledge, there are no public, annotated repositories for the other pelvic cancer types (MRI or CT). Therefore, global and institutional efforts are necessary to initiate public datasets to encourage future widespread research. However, appropriate quality control and external expert auditing need to be in place to ensure data are of high quality [179,180].
A lack of common datasets also creates difficulty in fairly and accurately comparing new DL algorithms with previous research studies. Among the reviewed articles, the MRI acquisition mode (2D or 3D) for five studies was labelled as “unspecified”, since insufficient acquisition information was provided for the training MR images. Whilst DL network dimensionality and architecture selection are important for the success of automatic segmentation algorithms, the understanding of input data as well as the reproducibility of network outcomes are of great significance. Researchers routinely use quantitative segmentation evaluation metrics such as DSC and HD to compare their results with other proposed algorithms. Although it may be tempting to rely on these measures to draw definitive conclusions on one algorithm's performance over another, qualitative assessment of results by experts is also necessary to ensure fair judgement and that clinical demands are met. A few studies incorporated qualitative evaluations to assess the clinical acceptance rate of generated contours [101]; however, this step is not yet widely undertaken for most pelvic cancer segmentation applications.
The generalizability of DL algorithms can be enhanced by use of multi-vendor patient scans for training; however, differences in institutional MR imaging protocols may adversely affect segmentation performance. Contour definition by experts with varying clinical experience (radiologist vs. radiation oncologist) and the source of training data (single- vs. multi-center) are other contributing factors to variabilities in ground-truth ROI delineations which can confound segmentation performance.
The DL-based segmentation publications reviewed in this study proposed improvements in network architectures, image processing techniques, use of multi-parametric input data, loss functions, use of pretrained models (transfer learning) and adversarial training. The fields of DL, particularly computer vision and image segmentation, are still evolving. Industry- and application-specific requirements continually encourage innovation and the development of sophisticated networks. The future outlook for pelvic cancer segmentation may include intricate knowledge transfer from models pre-trained on very large datasets, or the adaptation of key developments from non-medical applications [181] or ones not yet configured for the pelvis [182,183]. Examples may include explainable/interpretable AI, domain adaptation, and continuous and/or federated learning.
In conclusion, DL, in the eyes of clinicians, is still seen as a “black box” algorithm due to the limited interpretability of its predicted outcomes. Therefore, the clinical adoption of AI-based frameworks is hindered by their lack of interpretability and explainability when generating inaccurate outcomes. Although DL is a powerful and promising tool for many supervised computer-aided applications, it relies heavily on the quality of the input training data. In the absence of standardized, international contouring consensus guidelines to reduce segmentation variabilities, and given the lack of accessible, annotated public databases, truly benchmarking novel segmentation techniques against existing algorithms remains a formidable challenge. Our review demonstrated that challenges, incentives and public datasets can lead to research contributions from groups across different domains and to considerable advancements in technology. Lastly, while embracing the exciting future of DL as a catalyst for a paradigm shift in disease detection, characterization and treatment planning, researchers and clinicians should be aware of the current shortcomings and requirements of automatic pelvic segmentation algorithms in order to push the boundaries of AI in healthcare.

Author Contributions

Conceptualization, R.K., J.M.W. and D.-M.K.; Writing—Original Draft Preparation, R.K.; Supervision, G.L., J.M.W., C.M., S.L., M.D.B. and D.-M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This project represents independent research funded by the National Institute for Health Research (NIHR) Biomedical Research Centre and the Clinical Research Facilities at The Royal Marsden NHS Foundation Trust and the Institute of Cancer Research, London, United Kingdom. The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. Gigin Lin received research funding from the Ministry of Science and Technology Taiwan (MOST 110-2628-B-182A-018).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the review, writing of the manuscript or the decision to publish.

References

  1. Parekh, V.S.; Jacobs, M.A. Deep learning and radiomics in precision medicine. Expert Rev. Precis. Med. Drug Dev. 2019, 4, 59–72. [Google Scholar] [CrossRef] [Green Version]
  2. Ashley, E.A. Towards precision medicine. Nat. Rev. Genet. 2016, 17, 507–522. [Google Scholar] [CrossRef]
  3. Malayeri, A.A.; El Khouli, R.H.; Zaheer, A.; Jacobs, M.A.; Corona-Villalobos, C.P.; Kamel, I.R.; Macura, K.J. Principles and Applications of Diffusion-weighted Imaging in Cancer Detection, Staging, and Treatment Follow-up. Radiographics 2011, 31, 1773–1791. [Google Scholar] [CrossRef] [Green Version]
  4. Ma, D.; Gulani, V.; Seiberlich, N.; Liu, K.; Sunshine, J.L.; Duerk, J.L.; Griswold, M.A. Magnetic resonance fingerprinting. Nat. Cell Biol. 2013, 495, 187–192. [Google Scholar] [CrossRef] [Green Version]
  5. O’Connor, J.P.B.; Aboagye, E.; Adams, J.E.; Aerts, H.J.W.L.; Barrington, S.F.; Beer, A.J.; Boellaard, R.; Bohndiek, S.; Brady, M.; Brown, G.; et al. Imaging biomarker roadmap for cancer studies. Nat. Rev. Clin. Oncol. 2017, 14, 169–186. [Google Scholar] [CrossRef]
  6. Nelms, B.E.; Tomé, W.; Robinson, G.; Wheeler, J. Variations in the Contouring of Organs at Risk: Test Case From a Patient With Oropharyngeal Cancer. Int. J. Radiat. Oncol. 2012, 82, 368–378. [Google Scholar] [CrossRef]
  7. Miles, E.A.; Clark, C.H.; Urbano, M.T.G.; Bidmead, M.; Dearnaley, D.P.; Harrington, K.J.; A’Hern, R.; Nutting, C.M. The impact of introducing intensity modulated radiotherapy into routine clinical practice. Radiother. Oncol. 2005, 77, 241–246. [Google Scholar] [CrossRef]
  8. Brouwer, C.L.; Steenbakkers, R.J.H.M.; Heuvel, E.V.D.; Duppen, J.C.; Navran, A.; Bijl, H.P.; Chouvalova, O.; Burlage, F.R.; Meertens, H.; Langendijk, J.A.; et al. 3D Variation in delineation of head and neck organs at risk. Radiat. Oncol. 2012, 7, 32. [Google Scholar] [CrossRef] [Green Version]
  9. Boldrini, L.; Cusumano, D.; Cellini, F.; Azario, L.; Mattiucci, G.C.; Valentini, V. Online adaptive magnetic resonance guided radiotherapy for pancreatic cancer: State of the art, pearls and pitfalls. Radiat. Oncol. 2019, 14, 71. [Google Scholar] [CrossRef]
  10. Mikeljevic, J.S.; Haward, R.; Johnston, C.; Crellin, A.; Dodwell, D.; Jones, A.; Pisani, P.; Forman, D. Trends in postoperative radiotherapy delay and the effect on survival in breast cancer patients treated with conservation surgery. Br. J. Cancer 2004, 90, 1343–1348. [Google Scholar] [CrossRef]
  11. Chen, Z.; King, W.; Pearcey, R.; Kerba, M.; Mackillop, W.J. The relationship between waiting time for radiotherapy and clinical outcomes: A systematic review of the literature. Radiother. Oncol. 2008, 87, 3–16. [Google Scholar] [CrossRef]
  12. Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P. Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. J. Digit. Imaging 2019, 32, 582–596. [Google Scholar] [CrossRef] [Green Version]
  13. Cardenas, C.E.; Yang, J.; Anderson, B.M.; Court, L.E.; Brock, K.B. Advances in Auto-Segmentation. Semin. Radiat. Oncol. 2019, 29, 185–197. [Google Scholar] [CrossRef]
  14. Haque, I.R.I.; Neubert, J. Deep learning approaches to biomedical image segmentation. Inform. Med. Unlocked 2020, 18, 100297. [Google Scholar] [CrossRef]
  15. Zhou, T.; Ruan, S.; Canu, S. A review: Deep learning for medical image segmentation using multi-modality fusion. Array 2019, 3–4, 100004. [Google Scholar] [CrossRef]
  16. Almeida, G.; Tavares, J.M.R. Deep Learning in Radiation Oncology Treatment Planning for Prostate Cancer: A Systematic Review. J. Med. Syst. 2020, 44, 179. [Google Scholar] [CrossRef]
  17. Lin, Y.-C.; Lin, C.-H.; Lu, H.-Y.; Chiang, H.-J.; Wang, H.-K.; Huang, Y.-T.; Ng, S.-H.; Hong, J.-H.; Yen, T.-C.; Lai, C.-H.; et al. Deep learning for fully automated tumor segmentation and extraction of magnetic resonance radiomics features in cervical cancer. Eur. Radiol. 2020, 30, 1297–1305. [Google Scholar] [CrossRef]
  18. Ueda, D.; Shimazaki, A.; Miki, Y. Technical and clinical overview of deep learning in radiology. Jpn. J. Radiol. 2019, 37, 15–33. [Google Scholar] [CrossRef]
  19. Boldrini, L.; Bibault, J.-E.; Masciocchi, C.; Shen, Y.; Bittner, M.-I. Deep Learning: A Review for the Radiation Oncologist. Front. Oncol. 2019, 9, 977. [Google Scholar] [CrossRef] [Green Version]
  20. Meyer, P.; Noblet, V.; Mazzara, C.; Lallement, A. Survey on deep learning for radiotherapy. Comput. Biol. Med. 2018, 98, 126–146. [Google Scholar] [CrossRef]
  21. Kowalski, R. Computational Logic and Human Thinking: How to Be Artificially Intelligent; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  22. Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory; Wiley: New York, NY, USA, 1949. [Google Scholar]
  23. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biol. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  24. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef] [Green Version]
  25. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  26. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  27. Zhang, Q.-S.; Zhu, S.-C. Visual interpretability for deep learning: A survey. Front. Inf. Technol. Electron. Eng. 2018, 19, 27–39. [Google Scholar] [CrossRef] [Green Version]
  28. Jian, J.; Xiong, F.; Xia, W.; Zhang, R.; Gu, J.; Wu, X.; Meng, X.; Gao, X. Fully convolutional networks (FCNs)-based segmentation method for colorectal tumors on T2-weighted magnetic resonance images. Australas. Phys. Eng. Sci. Med. 2018, 41, 393–401. [Google Scholar] [CrossRef]
  29. Tian, Z.; Liu, L.; Zhang, Z.; Fei, B. PSNet: Prostate segmentation on MRI based on a convolutional neural network. J. Med. Imaging 2018, 5, 021208. [Google Scholar] [CrossRef]
  30. Tian, Z.; Liu, L.; Fei, B. Deep convolutional neural network for prostate MR segmentation. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1687–1696. [Google Scholar] [CrossRef]
  31. Ma, L.; Guo, R.; Zhang, G.; Tade, F.; Schuster, D.M.; Nieh, P.; Master, V.; Fei, B. Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion. Proc. SPIE Int. Soc. Opt. Eng. 2017, 10133, 101332O. [Google Scholar]
  32. Song, Y.; Hu, J.; Wu, Q.; Xu, F.; Nie, S.; Zhao, Y.; Bai, S.; Yi, Z. Automatic delineation of the clinical target volume and organs at risk by deep learning for rectal cancer postoperative radiotherapy. Radiother. Oncol. 2020, 145, 186–192. [Google Scholar] [CrossRef]
  33. Chai, X.; van Herk, M.; Betgen, A.; Hulshof, M.C.; Bel, A. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model. Phys. Med. Biol. 2012, 57, 3945–3962. [Google Scholar] [CrossRef] [PubMed]
  34. Gulliford, S.L.; Webb, S.; Rowbottom, C.; Corne, D.W.; Dearnaley, D.P. Use of artificial neural networks to predict biological outcomes for patients receiving radical radiotherapy of the prostate. Radiother. Oncol. 2004, 71, 3–12. [Google Scholar] [CrossRef] [PubMed]
  35. Kim, D.W.; Lee, S.; Kwon, S.; Nam, W.; Cha, I.-H.; Kim, H.J. Deep learning-based survival prediction of oral cancer patients. Sci. Rep. 2019, 9, 6994. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Han, X. MR-based synthetic CT generation using a deep convolutional neural network method. Med. Phys. 2017, 44, 1408–1419. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Nie, D.; Cao, X.; Gao, Y.; Wang, L.; Shen, D. Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. In Design, User Experience, and Usability: Design Thinking and Methods; Springer: Cham, Switzerland, 2016; pp. 170–178. [Google Scholar] [CrossRef] [Green Version]
  38. Zhen, X.; Chen, J.; Zhong, Z.; Hrycushko, B.; Zhou, L.; Jiang, S.; Albuquerque, K.; Gu, X. Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: A feasibility study. Phys. Med. Biol. 2017, 62, 8246–8263. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Ma, M.; Kovalchuk, N.; Buyyounouski, M.K.; Xing, L.; Yang, Y. Incorporating dosimetric features into the prediction of 3D VMAT dose distributions using deep convolutional neural network. Phys. Med. Biol. 2019, 64, 125017. [Google Scholar] [CrossRef] [PubMed]
  40. Soni, P.; Maturen, K.; Prisciandaro, J.; Zhou, J.; Cao, Y.; Balter, J.; Jolly, S. Using MRI to Characterize Small Anatomic Structures Critical to Pelvic Floor Stability in Gynecologic Cancer Patients Undergoing Radiation Therapy. Int. J. Radiat. Oncol. 2015, 93, E608. [Google Scholar] [CrossRef]
  41. Colosio, A.; Soyer, P.; Rousset, P.; Barbe, C.; Nguyen, F.; Bouché, O.; Hoeffel, C. Value of diffusion-weighted and gadolinium-enhanced MRI for the diagnosis of pelvic recurrence from colorectal cancer. J. Magn. Reson. Imaging 2014, 40, 306–313. [Google Scholar] [CrossRef] [PubMed]
  42. Nam, E.J.; Yun, M.; Oh, Y.T.; Kim, J.W.; Kim, S.; Jung, Y.W.; Kim, S.W.; Kim, Y.T. Diagnosis and staging of primary ovarian cancer: Correlation between PET/CT, Doppler US, and CT or MRI. Gynecol. Oncol. 2010, 116, 389–394. [Google Scholar] [CrossRef] [PubMed]
  43. Fütterer, J.J.; Briganti, A.; De Visschere, P.; Emberton, M.; Giannarini, G.; Kirkham, A.; Taneja, S.S.; Thoeny, H.; Villeirs, G.; Villers, A. Can Clinically Significant Prostate Cancer Be Detected with Multiparametric Magnetic Resonance Imaging? A Systematic Review of the Literature. Eur. Urol. 2015, 68, 1045–1053. [Google Scholar] [CrossRef] [PubMed]
  44. Valerio, M.; Donaldson, I.; Emberton, M.; Ehdaie, B.; Hadaschik, B.; Marks, L.S.; Mozer, P.; Rastinehad, A.R.; Ahmed, H.U. Detection of Clinically Significant Prostate Cancer Using Magnetic Resonance Imaging–Ultrasound Fusion Targeted Biopsy: A Systematic Review. Eur. Urol. 2015, 68, 8–19. [Google Scholar] [CrossRef] [PubMed]
  45. Muller, B.G.; Fütterer, J.J.; Gupta, R.T.; Katz, A.; Kirkham, A.; Kurhanewicz, J.; Moul, J.W.; Pinto, P.A.; Rastinehad, A.R.; Robertson, C.; et al. The role of magnetic resonance imaging (MRI) in focal therapy for prostate cancer: Recommendations from a consensus panel. BJU Int. 2014, 113, 218–227. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Eldred-Evans, D.; Tam, H.; Smith, A.P.T.; Winkler, M.; Ahmed, H.U. Use of Imaging to Optimise Prostate Cancer Tumour Volume Assessment for Focal Therapy Planning. Curr. Urol. Rep. 2020, 21, 30. [Google Scholar] [CrossRef] [PubMed]
  47. Mazaheri, Y.; Hricak, H.; Fine, S.W.; Akin, O.; Shukla-Dave, A.; Ishill, N.M.; Moskowitz, C.S.; Grater, J.E.; Reuter, V.E.; Zakian, K.L.; et al. Prostate Tumor Volume Measurement with Combined T2-weighted Imaging and Diffusion-weighted MR: Correlation with Pathologic Tumor Volume. Radiology 2009, 252, 449–457. [Google Scholar] [CrossRef] [PubMed]
  48. Jaffe, C.C. Measures of Response: RECIST, WHO, and New Alternatives. J. Clin. Oncol. 2006, 24, 3245–3251. [Google Scholar] [CrossRef]
49. Padhani, A.; Liu, G.; Koh, D.-M.; Chenevert, T.L.; Thoeny, H.C.; Takahara, T.; Dzik-Jurasz, A.; Ross, B.D.; Van Cauteren, M.; Collins, D.; et al. Diffusion-Weighted Magnetic Resonance Imaging as a Cancer Biomarker: Consensus and Recommendations. Neoplasia 2009, 11, 102–125. [Google Scholar] [CrossRef] [Green Version]
50. Lin, Y.-C.; Lin, G.; Hong, J.-H.; Lin, Y.-P.; Chen, F.-H.; Ng, S.-H.; Wang, C.-C. Diffusion radiomics analysis of intratumoral heterogeneity in a murine prostate cancer model following radiotherapy: Pixelwise correlation with histology. J. Magn. Reson. Imaging 2017, 46, 483–489. [Google Scholar] [CrossRef]
51. Schob, S.; Meyer, H.J.; Pazaitis, N.; Schramm, D.; Bremicker, K.; Exner, M.; Höhn, A.K.; Garnov, N.; Surov, A. ADC Histogram Analysis of Cervical Cancer Aids Detecting Lymphatic Metastases—A Preliminary Study. Mol. Imaging Biol. 2017, 19, 953–962. [Google Scholar] [CrossRef]
  52. Lin, G.; Yang, L.-Y.; Lin, Y.-C.; Huang, Y.-T.; Liu, F.-Y.; Wang, C.-C.; Lu, H.-Y.; Chiang, H.-J.; Chen, Y.-R.; Wu, R.-C.; et al. Prognostic model based on magnetic resonance imaging, whole-tumour apparent diffusion coefficient values and HPV genotyping for stage IB-IV cervical cancer patients following chemoradiotherapy. Eur. Radiol. 2018, 29, 556–565. [Google Scholar] [CrossRef]
  53. Thiesse, P.; Ollivier, L.; Di Stefano-Louineau, D.; Négrier, S.; Savary, J.; Pignard, K.; Lasset, C.; Escudier, B. Response rate accuracy in oncology trials: Reasons for interobserver variability. Groupe Français d’Immunothérapie of the Fédération Nationale des Centres de Lutte Contre le Cancer. J. Clin. Oncol. 1997, 15, 3507–3514. [Google Scholar] [CrossRef]
  54. Pollard, J.M.; Wen, Z.; Sadagopan, R.; Wang, J.; Ibbott, G.S. The future of image-guided radiotherapy will be MR guided. Br. J. Radiol. 2017, 90, 20160667. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Song, Y.; Erickson, B.; Chen, X.; Li, G.; Wu, G.; Paulson, E.; Knechtges, P.; Li, X.A. Appropriate magnetic resonance imaging techniques for gross tumor volume delineation in external beam radiation therapy of locally advanced cervical cancer. Oncotarget 2018, 9, 10100–10109. [Google Scholar] [CrossRef] [PubMed] [Green Version]
56. Veera, J.; Lim, K.; Dowling, J.A.; O’Connor, C.; Holloway, L.C.; Vinod, S.K. Dedicated MRI simulation for cervical cancer radiation treatment planning: Assessing the impact on clinical target volume delineation. J. Med. Imaging Radiat. Oncol. 2019, 63, 236–243. [Google Scholar] [CrossRef] [Green Version]
  57. Chavaudra, J.; Bridier, A. Definition of volumes in external radiotherapy: ICRU reports 50 and 62. Cancer Radiother. 2001, 5, 472. [Google Scholar] [CrossRef]
58. The Royal College of Radiologists; Society and College of Radiographers; Institute of Physics and Engineering in Medicine. On Target: Ensuring Geometric Accuracy in Radiotherapy; Technical Report; The Royal College of Radiologists (RCR): London, UK, 2008. [Google Scholar]
  59. Chan, T.F.; Vese, L.A. Active Contour and Segmentation Models Using Geometric PDE’s for Medical Imaging; Springer: Berlin/Heidelberg, Germany, 2002; pp. 63–75. [Google Scholar]
  60. Jiang, X.; Zhang, R.; Nie, S. Image Segmentation Based on Level Set Method. Phys. Procedia 2012, 33, 840–845. [Google Scholar] [CrossRef] [Green Version]
  61. Boykov, Y.; Jolly, M.-P. Interactive Organ Segmentation Using Graph Cuts. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Pittsburgh, PA, USA, 11–14 October 2000; Springer: Berlin/Heidelberg, Germany, 2000; pp. 276–286. [Google Scholar]
  62. Beucher, S. Use of watersheds in contour detection. In Proceedings of the International Workshop on Image Processing, Real-Time Edge and Motion Detection/Estimation, CCETT, Rennes, France, 17–21 September 1979. [Google Scholar]
  63. Naik, S.; Doyle, S.; Agner, S.; Madabhushi, A.; Feldman, M.; Tomaszewski, J. Automated gland and nuclei segmentation for grading of prostate and breast cancer histopathology. In Proceedings of the 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Paris, France, 14–17 May 2008; pp. 284–287. [Google Scholar]
  64. Zyout, I.; Abdel-Qader, I.; Jacobs, C. Bayesian Classifier with Simplified Learning Phase for Detecting Microcalcifications in Digital Mammograms. Int. J. Biomed. Imaging 2009, 2009, 767805. [Google Scholar] [CrossRef] [Green Version]
  65. Qiao, J.; Cai, X.; Xiao, Q.; Chen, Z.; Kulkarni, P.; Ferris, C.; Kamarthi, S.; Sridhar, S. Data on MRI brain lesion segmentation using K-means and Gaussian Mixture Model-Expectation Maximization. Data Brief 2019, 27, 104628. [Google Scholar] [CrossRef]
  66. Zhang, Y.; Brady, M.; Smith, S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans. Med. Imaging 2001, 20, 45–57. [Google Scholar] [CrossRef]
  67. Iglesias, J.E.; Sabuncu, M.R. Multi-atlas segmentation of biomedical images: A survey. Med. Image Anal. 2015, 24, 205–219. [Google Scholar] [CrossRef] [Green Version]
  68. Blezek, D.J.; Miller, J.V. Atlas stratification. Med. Image Anal. 2007, 11, 443–457. [Google Scholar] [CrossRef] [PubMed]
  69. Commowick, O.; Malandain, G. Efficient Selection of the Most Similar Image in a Database for Critical Structures Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Brisbane, Australia, 29 October–2 November 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 203–210. [Google Scholar]
  70. Commowick, O.; Warfield, S.K.; Malandain, G. Using Frankenstein’s Creature Paradigm to Build a Patient Specific Atlas. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, London, UK, 20–24 September 2009; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  71. Yang, J.; Amini, A.; Williamson, R.; Zhang, L.; Zhang, Y.; Komaki, R.; Liao, Z.; Cox, J.; Welsh, J.; Court, L.; et al. Automatic contouring of brachial plexus using a multi-atlas approach for lung cancer radiation therapy. Pract. Radiat. Oncol. 2013, 3, e139–e147. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  72. Sharp, G.; Fritscher, K.D.; Pekar, V.; Peroni, M.; Shusharina, N.; Veeraraghavan, H.; Yang, J. Vision 20/20: Perspectives on automated image segmentation for radiotherapy. Med. Phys. 2014, 41, 050902. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  73. Harrison, A.; Galvin, J.; Yu, Y.; Xiao, Y. SU-FF-J-172: Deformable Fusion and Atlas Based Autosegmentation: MimVista Vs. CMS Focal ABAS. Med. Phys. 2009, 36, 2517. [Google Scholar] [CrossRef]
  74. La Macchia, M.; Fellin, F.; Amichetti, M.; Cianchetti, M.; Gianolini, S.; Paola, V.; Lomax, A.J.; Widesott, L. Systematic evaluation of three different commercial software solutions for automatic segmentation for adaptive therapy in head-and-neck, prostate and pleural cancer. Radiat. Oncol. 2012, 7, 160. [Google Scholar] [CrossRef] [Green Version]
  75. Menzel, H.-G. International Commission on Radiation Units and Measurements. J. Int. Comm. Radiat. Units Meas. 2014, 14, 1–2. [Google Scholar] [CrossRef]
  76. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 640–651. [Google Scholar] [CrossRef] [PubMed]
  77. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015. [Google Scholar]
78. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; Springer: Cham, Switzerland, 2016; pp. 424–432. [Google Scholar]
  79. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016. [Google Scholar]
  80. Kamnitsas, K.; Ferrante, E.; Parisot, S.; Ledig, C.; Nori, A.V.; Criminisi, A.; Rueckert, D.; Glocker, B. DeepMedic for Brain Tumor Segmentation. In Proceedings of the International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Athens, Greece, 17 October 2016; Springer: Cham, Switzerland, 2016; pp. 138–149. [Google Scholar]
  81. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. [Google Scholar] [CrossRef]
  82. Liu, X.; Song, L.; Liu, S.; Zhang, Y. A Review of Deep-Learning-Based Medical Image Segmentation Methods. Sustainability 2021, 13, 1224. [Google Scholar] [CrossRef]
  83. Dice, L.R. Measures of the Amount of Ecologic Association between Species. Ecology 1945, 26, 297–302. [Google Scholar] [CrossRef]
  84. Nikolov, S.; Blackwell, S.; Zverovitch, A.; Mendes, R.; Livne, M.; De Fauw, J.; Patel, Y.; Meyer, C.; Askham, H.; Romera-Paredes, B.; et al. Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy. arXiv 2018, arXiv:1809.04430. [Google Scholar]
  85. Ge, F.; Wang, S.; Liu, T. New benchmark for image segmentation evaluation. J. Electron. Imaging 2007, 16, 033011. [Google Scholar] [CrossRef]
  86. Huttenlocher, D.; Klanderman, G.; Rucklidge, W. Comparing images using the Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 850–863. [Google Scholar] [CrossRef] [Green Version]
  87. Cha, K.H.; Hadjiiski, L.M.; Samala, R.; Chan, H.-P.; Cohan, R.H.; Caoili, E.M.; Paramagul, C.; Alva, A.; Weizer, A.Z. Bladder Cancer Segmentation in CT for Treatment Response Assessment: Application of Deep-Learning Convolution Neural Network—A Pilot Study. Tomography 2016, 2, 421–429. [Google Scholar] [CrossRef] [PubMed]
  88. Ma, X.; Hadjiiski, L.M.; Wei, J.; Chan, H.; Cha, K.H.; Cohan, R.H.; Caoili, E.M.; Samala, R.; Zhou, C.; Lu, Y. U-Net based deep learning bladder segmentation in CT urography. Med. Phys. 2019, 46, 1752–1765. [Google Scholar] [CrossRef] [PubMed]
  89. Duan, C.; Yuan, K.; Liu, F.; Xiao, P.; Lv, G.; Liang, Z. An Adaptive Window-Setting Scheme for Segmentation of Bladder Tumor Surface via MR Cystography. IEEE Trans. Inf. Technol. Biomed. 2012, 16, 720–729. [Google Scholar] [CrossRef] [Green Version]
  90. Duan, C.; Liang, Z.; Bao, S.; Zhu, H.; Wang, S.; Zhang, G.; Chen, J.J.; Lu, H. A Coupled Level Set Framework for Bladder Wall Segmentation With Application to MR Cystography. IEEE Trans. Med. Imaging 2010, 29, 903–915. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  91. Han, H.; Li, L.; Duan, C.; Zhang, H.; Zhao, Y.; Liang, Z. A unified EM approach to bladder wall segmentation with coupled level-set constraints. Med. Image Anal. 2013, 17, 1192–1205. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  92. Qin, X.; Li, X.; Liu, Y.; Lu, H.; Yan, P. Adaptive Shape Prior Constrained Level Sets for Bladder MR Image Segmentation. IEEE J. Biomed. Health Inform. 2013, 18, 1707–1716. [Google Scholar] [CrossRef] [PubMed]
  93. Cha, K.; Hadjiiski, L.; Samala, R.; Chan, H.-P.; Caoili, E.M.; Cohan, R.H. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets. Med. Phys. 2016, 43, 1882–1896. [Google Scholar] [CrossRef]
  94. Xu, X.; Zhou, F.; Liu, B. Automatic bladder segmentation from CT images using deep CNN and 3D fully connected CRF-RNN. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 967–975. [Google Scholar] [CrossRef]
  95. Dolz, J.; Xu, X.; Rony, J.; Yuan, J.; Liu, Y.; Granger, E.; Desrosiers, C.; Zhang, X.; Ben Ayed, I.; Lu, H. Multiregion segmentation of bladder cancer structures in MRI with progressive dilated convolutional networks. Med. Phys. 2018, 45, 5482–5493. [Google Scholar] [CrossRef] [Green Version]
  96. Li, R.; Chen, H.; Gong, G.; Wang, L. Bladder Wall Segmentation in MRI Images via Deep Learning and Anatomical Constraints. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1629–1632. [Google Scholar] [CrossRef]
  97. Liu, Z.; Liu, X.; Xiao, B.; Wang, S.; Miao, Z.; Sun, Y.; Zhang, F. Segmentation of organs-at-risk in cervical cancer CT images with a convolutional neural network. Phys. Med. 2020, 69, 184–191. [Google Scholar] [CrossRef] [Green Version]
  98. Liu, Z.; Liu, X.; Guan, H.; Zhen, H.; Sun, Y.; Chen, Q.; Chen, Y.; Wang, S.; Qiu, J. Development and validation of a deep learning algorithm for auto-delineation of clinical target volume and organs at risk in cervical cancer radiotherapy. Radiother. Oncol. 2020, 153, 172–179. [Google Scholar] [CrossRef]
  99. Wang, Z.; Chang, Y.; Peng, Z.; Lv, Y.; Shi, W.; Wang, F.; Pei, X.; Xu, X.G. Evaluation of deep learning-based auto-segmentation algorithms for delineating clinical target volume and organs at risk involving data for 125 cervical cancer patients. J. Appl. Clin. Med. Phys. 2020, 21, 272–279. [Google Scholar] [CrossRef] [PubMed]
  100. Zhang, D.; Yang, Z.; Jiang, S.; Zhou, Z.; Meng, M.; Wang, W. Automatic segmentation and applicator reconstruction for CT-based brachytherapy of cervical cancer using 3D convolutional neural networks. J. Appl. Clin. Med. Phys. 2020, 21, 158–169. [Google Scholar] [CrossRef]
  101. Rhee, D.J.; Jhingran, A.; Rigaud, B.; Netherton, T.; Cardenas, C.E.; Zhang, L.; Vedam, S.; Kry, S.; Brock, K.K.; Shaw, W.; et al. Automatic contouring system for cervical cancer using convolutional neural networks. Med. Phys. 2020, 47, 5648–5658. [Google Scholar] [CrossRef]
102. Breto, A.; Zavala-Romero, O.; Asher, D.; Baikovitz, J.; Ford, J.; Stoyanova, R.; Portelance, L. A Deep Learning Pipeline for per-Fraction Automatic Segmentation of GTV and OAR in Cervical Cancer. Int. J. Radiat. Oncol. 2019, 105, S202. [Google Scholar] [CrossRef] [Green Version]
  103. Wong, J.; Fong, A.; McVicar, N.; Smith, S.; Giambattista, J.; Wells, D.; Kolbeck, C.; Giambattista, J.; Gondara, L.; Alexander, A. Comparing deep learning-based auto-segmentation of organs at risk and clinical target volumes to expert inter-observer variability in radiotherapy planning. Radiother. Oncol. 2020, 144, 152–158. [Google Scholar] [CrossRef] [PubMed]
  104. Kiljunen, T.; Akram, S.; Niemelä, J.; Löyttyniemi, E.; Seppälä, J.; Heikkilä, J.; Vuolukka, K.; Kääriäinen, O.-S.; Heikkilä, V.-P.; Lehtiö, K.; et al. A Deep Learning-Based Automated CT Segmentation of Prostate Cancer Anatomy for Radiation Therapy Planning-A Retrospective Multicenter Study. Diagnostics 2020, 10, 959. [Google Scholar] [CrossRef] [PubMed]
  105. Zhou, S.; Nie, D.; Adeli, E.; Yin, J.; Lian, J.; Shen, D. High-Resolution Encoder–Decoder Networks for Low-Contrast Medical Image Segmentation. IEEE Trans. Image Process. 2020, 29, 461–475. [Google Scholar] [CrossRef] [PubMed]
  106. Dong, X.; Lei, Y.; Tian, S.; Wang, T.; Patel, P.; Curran, W.J.; Jani, A.B.; Liu, T.; Yang, X. Synthetic MRI-aided multi-organ segmentation on male pelvic CT using cycle consistent deep attention network. Radiother. Oncol. 2019, 141, 192–199. [Google Scholar] [CrossRef] [PubMed]
  107. Wang, S.; He, K.; Nie, D.; Zhou, S.; Gao, Y.; Shen, D. CT male pelvic organ segmentation using fully convolutional networks with boundary sensitive representation. Med. Image Anal. 2019, 54, 168–178. [Google Scholar] [CrossRef]
  108. Liu, C.; Gardner, S.J.; Wen, N.; Elshaikh, M.A.; Siddiqui, F.; Movsas, B.; Chetty, I.J. Automatic Segmentation of the Prostate on CT Images Using Deep Neural Networks (DNN). Int. J. Radiat. Oncol. 2019, 104, 924–932. [Google Scholar] [CrossRef]
  109. Kearney, V.P.; Chan, J.W.; Wang, T.; Perry, A.; Yom, S.S.; Solberg, T.D. Attention-enabled 3D boosted convolutional neural networks for semantic CT segmentation using deep supervision. Phys. Med. Biol. 2019, 64, 135001. [Google Scholar] [CrossRef]
  110. He, K.; Cao, X.; Shi, Y.; Nie, D.; Gao, Y.; Shen, D. Pelvic Organ Segmentation Using Distinctive Curve Guided Fully Convolutional Networks. IEEE Trans. Med. Imaging 2019, 38, 585–595. [Google Scholar] [CrossRef]
  111. Kazemifar, S.; Balagopal, A.; Nguyen, D.; McGuire, S.; Hannan, R.; Jiang, S.B.; Owrangi, A.M. Segmentation of the prostate and organs at risk in male pelvic CT images using deep learning. Biomed. Phys. Eng. Express 2018, 4, 055003. [Google Scholar] [CrossRef] [Green Version]
  112. Balagopal, A.; Kazemifar, S.; Nguyen, D.; Lin, M.-H.; Hannan, R.; Owrangi, A.M.; Jiang, S.B. Fully automated organ segmentation in male pelvic CT images. Phys. Med. Biol. 2018, 63, 245015. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  113. Shi, Y.; Yang, W.; Gao, Y.; Shen, D. Does Manual Delineation only Provide the Side Information in CT Prostate Segmentation? In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada, 11–13 September 2017; pp. 692–700. [Google Scholar] [CrossRef]
  114. Jia, H.; Xia, Y.; Song, Y.; Zhang, D.; Huang, H.; Zhang, Y.; Cai, W. 3D APA-Net: 3D Adversarial Pyramid Anisotropic Convolutional Network for Prostate Segmentation in MR Images. IEEE Trans. Med. Imaging 2019, 39, 447–457. [Google Scholar] [CrossRef] [PubMed]
  115. Khan, Z.; Yahya, N.; Alsaih, K.; Meriaudeau, F. Segmentation of Prostate in MRI Images Using Depth Separable Convolution Operations. In Proceedings of the International Conference on Intelligent Human Computer Interaction, Daegu, Korea, 24–26 November 2020; pp. 132–141. [Google Scholar]
  116. Cem Birbiri, U.; Hamidinekoo, A.; Grall, A.; Malcolm, P.; Zwiggelaar, R. Investigating the Performance of Generative Adversarial Networks for Prostate Tissue Detection and Segmentation. J. Imaging 2020, 6, 83. [Google Scholar] [CrossRef]
  117. Dai, Z.; Carver, E.; Liu, C.; Lee, J.; Feldman, A.; Zong, W.; Pantelic, M.; Elshaikh, M.; Wen, N. Segmentation of the Prostatic Gland and the Intraprostatic Lesions on Multiparametic Magnetic Resonance Imaging Using Mask Region-Based Convolutional Neural Networks. Adv. Radiat. Oncol. 2020, 5, 473–481. [Google Scholar] [CrossRef] [Green Version]
  118. Zhu, Q.; Du, B.; Yan, P. Boundary-Weighted Domain Adaptive Neural Network for Prostate MR Image Segmentation. IEEE Trans. Med. Imaging 2020, 39, 753–763. [Google Scholar] [CrossRef] [PubMed]
  119. Tian, Z.; Li, X.; Zheng, Y.; Chen, Z.; Shi, Z.; Liu, L.; Fei, B. Graph-convolutional-network-based interactive prostate segmentation in MR images. Med. Phys. 2020, 47, 4164–4176. [Google Scholar] [CrossRef] [PubMed]
  120. Aldoj, N.; Biavati, F.; Michallek, F.; Stober, S.; Dewey, M. Automatic prostate and prostate zones segmentation of magnetic resonance images using DenseNet-like U-net. Sci. Rep. 2020, 10, 14315. [Google Scholar] [CrossRef] [PubMed]
  121. Rundo, L.; Han, C.; Zhang, J.; Hataya, R.; Nagano, Y.; Militello, C.; Ferretti, C.; Nobile, M.S.; Tangherloni, A.; Gilardi, M.C.; et al. CNN-Based Prostate Zonal Segmentation on T2-Weighted MR Images: A Cross-Dataset Study. In Neural Approaches to Dynamics of Signal Exchanges; Springer: Singapore, 2020. [Google Scholar]
  122. Savenije, M.H.F.; Maspero, M.; Sikkes, G.G.; van der Voort van Zyp, J.R.; Kotte, A.N.T.J.; Bol, G.H.; van den Berg, C.A. Clinical implementation of MRI-based organs-at-risk auto-segmentation with convolutional networks for prostate radiotherapy. Radiat. Oncol. 2020, 15, 104. [Google Scholar] [CrossRef] [PubMed]
  123. Lu, Z.; Zhao, M.; Pang, Y. CDA-Net for Automatic Prostate Segmentation in MR Images. Appl. Sci. 2020, 10, 6678. [Google Scholar] [CrossRef]
  124. Geng, L.; Wang, J.; Xiao, Z.; Tong, J.; Zhang, F.; Wu, J. Encoder-decoder with dense dilated spatial pyramid pooling for prostate MR images segmentation. Comput. Assist. Surg. 2019, 24, 13–19. [Google Scholar] [CrossRef] [Green Version]
  125. Liu, Z.; Jiang, W.; Lee, K.-H.; Lo, Y.-L.; Ng, Y.-L.; Dou, Q.; Vardhanabhuti, V.; Kwok, K.-W. A Two-Stage Approach for Automated Prostate Lesion Detection and Classification with Mask R-CNN and Weakly Supervised Deep Neural Network. In Proceedings of the Workshop on Artificial Intelligence in Radiation Therapy, Shenzhen, China, 17 October 2019; Springer: Cham, Switzerland, 2019; pp. 43–51. [Google Scholar]
  126. Zabihollahy, F.; Schieda, N.; Jeyaraj, S.K.; Ukwatta, E. Automated segmentation of prostate zonal anatomy on T2-weighted (T2W) and apparent diffusion coefficient (ADC) map MR images using U-Nets. Med. Phys. 2019, 46, 3078–3090. [Google Scholar] [CrossRef]
  127. Liu, Y.; Sung, K.; Yang, G.; Mirak, S.A.; Hosseiny, M.; Azadikhah, A.; Zhong, X.; Reiter, R.E.; Lee, Y.; Raman, S.S. Automatic Prostate Zonal Segmentation Using Fully Convolutional Network With Feature Pyramid Attention. IEEE Access 2019, 7, 163626–163632. [Google Scholar] [CrossRef]
  128. Nie, D.; Wang, L.; Gao, Y.; Lian, J.; Shen, D. STRAINet: Spatially Varying sTochastic Residual AdversarIal Networks for MRI Pelvic Organ Segmentation. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 1552–1564. [Google Scholar] [CrossRef]
  129. Taghanaki, S.A.; Zheng, Y.; Zhou, S.K.; Georgescu, B.; Sharma, P.; Xu, D.; Comaniciu, D.; Hamarneh, G. Combo loss: Handling input and output imbalance in multi-organ segmentation. Comput. Med. Imaging Graph. 2019, 75, 24–33. [Google Scholar] [CrossRef] [Green Version]
  130. Elguindi, S.; Zelefsky, M.J.; Jiang, J.; Veeraraghavan, H.; Deasy, J.; Hunt, M.A.; Tyagi, N. Deep learning-based auto-segmentation of targets and organs-at-risk for magnetic resonance imaging only planning of prostate radiotherapy. Phys. Imaging Radiat. Oncol. 2019, 12, 80–86. [Google Scholar] [CrossRef] [Green Version]
  131. Tan, L.; Liang, A.; Li, L.; Liu, W.; Kang, H.; Chen, C. Automatic prostate segmentation based on fusion between deep network and variational methods. J. Xray Sci. Technol. 2019, 27, 821–837. [Google Scholar] [CrossRef] [PubMed]
  132. Yan, K.; Wang, X.; Kim, J.; Khadra, M.; Fulham, M.; Feng, D. A propagation-DNN: Deep combination learning of multi-level features for MR prostate segmentation. Comput. Methods Programs Biomed. 2019, 170, 11–21. [Google Scholar] [CrossRef] [Green Version]
  133. Zhu, Y.; Wei, R.; Gao, G.; Ding, L.; Zhang, X.; Wang, X.; Zhang, J. Fully automatic segmentation on prostate MR images based on cascaded fully convolution network. J. Magn. Reson. Imaging 2019, 49, 1149–1156. [Google Scholar] [CrossRef]
  134. Alkadi, R.; Taher, F.; El-Baz, A.; Werghi, N. A Deep Learning-Based Approach for the Detection and Localization of Prostate Cancer in T2 Magnetic Resonance Images. J. Digit. Imaging 2019, 32, 793–807. [Google Scholar] [CrossRef]
  135. Ghavami, N.; Hu, Y.; Gibson, E.; Bonmati, E.; Emberton, M.; Moore, C.M.; Barratt, D.C. Automatic segmentation of prostate MRI using convolutional neural networks: Investigating the impact of network architecture on the accuracy of volume measurement and MRI-ultrasound registration. Med. Image Anal. 2019, 58, 101558. [Google Scholar] [CrossRef]
  136. Zhang, Y.; Wu, J.; Chen, W.; Chen, Y.; Tang, X. Prostate Segmentation Using Z-Net. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 11–14. [Google Scholar]
  137. Feng, Z.; Nie, D.; Wang, L.; Shen, D. Semi-supervised learning for pelvic MR image segmentation based on multi-task residual fully convolutional networks. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 885–888. [Google Scholar] [CrossRef]
  138. Han, C.; Zhang, J.; Hataya, R.; Nagano, Y.; Nakayama, H.; Rundo, L. Prostate zonal segmentation using deep learning. IEICE Tech. Rep. 2018, 117, 69–70. [Google Scholar]
  139. Brosch, T.; Peters, J.; Groth, A.; Stehle, T.; Weese, J. Deep Learning-Based Boundary Detection for Model-Based Segmentation with Application to MR Prostate Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; Springer: Cham, Switzerland, 2018; pp. 515–522. [Google Scholar]
  140. Kang, J.; Samarasinghe, G.; Senanayake, U.; Conjeti, S.; Sowmya, A. Deep Learning for Volumetric Segmentation in Spatio-Temporal Data: Application to Segmentation of Prostate in DCE-MRI. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 61–65. [Google Scholar]
  141. Drozdzal, M.; Chartrand, G.; Vorontsov, E.; Shakeri, M.; Di Jorio, L.; Tang, A.; Romero, A.; Bengio, Y.; Pal, C.; Kadoury, S. Learning normalized inputs for iterative estimation in medical image segmentation. Med. Image Anal. 2018, 44, 1–13. [Google Scholar] [CrossRef] [Green Version]
  142. To, M.N.N.; Vu, D.Q.; Turkbey, B.; Choyke, P.L.; Kwak, J.T. Deep dense multi-path neural network for prostate segmentation in magnetic resonance imaging. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1687–1696. [Google Scholar] [CrossRef] [PubMed]
  143. Karimi, D.; Samei, G.; Kesch, C.; Nir, G.; Salcudean, S. Prostate segmentation in MRI using a convolutional neural network architecture and training strategy based on statistical shape models. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1211–1219. [Google Scholar] [CrossRef]
  144. Zhu, Q.; Du, B.; Turkbey, B.; Choyke, P.L.; Yan, P. Deeply-supervised CNN for prostate segmentation. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 178–184. [Google Scholar]
  145. Zhu, Y.; Wang, L.; Liu, M.; Qian, C.; Yousuf, A.; Oto, A.; Shen, D. MRI-based prostate cancer detection with high-level representation and hierarchical classification. Med. Phys. 2017, 44, 1028–1039. [Google Scholar] [CrossRef] [PubMed]
  146. Cheng, R.; Roth, H.R.; Lay, N.; Lu, L.; Turkbey, B.; Gandler, W.; McCreedy, E.S.; Pohida, T.; Pinto, P.A.; Choyke, P.; et al. Automatic magnetic resonance prostate segmentation by deep learning with holistically nested networks. J. Med. Imaging 2017, 4, 041302. [Google Scholar] [CrossRef] [PubMed]
  147. Clark, T.; Zhang, J.; Baig, S.; Wong, A.; Haider, M.A.; Khalvati, F. Fully automated segmentation of prostate whole gland and transition zone in diffusion-weighted MRI using convolutional neural networks. J. Med. Imaging 2017, 4, 041307. [Google Scholar] [CrossRef] [PubMed]
  148. Yu, L.; Yang, X.; Chen, H.; Qin, J.; Heng, P. Volumetric ConvNets with Mixed Residual Connections for Automated Prostate Segmentation from 3D MR Images. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar]
  149. Guo, Y.; Gao, Y.; Shen, D. Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching. IEEE Trans. Med. Imaging 2016, 35, 1077–1089. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  150. Liao, S.; Gao, Y.; Oto, A.; Shen, D. Representation Learning: A Unified Deep Learning Framework for Automatic Prostate MR Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Nagoya, Japan, 22–26 September 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 254–261. [Google Scholar]
  151. Men, K.; Boimel, P.; Janopaul-Naylor, J.; Zhong, H.; Huang, M.; Geng, H.; Cheng, C.; Fan, Y.; Plastaras, J.P.; Ben-Josef, E.; et al. Cascaded atrous convolution and spatial pyramid pooling for more accurate tumor target segmentation for rectal cancer radiotherapy. Phys. Med. Biol. 2018, 63, 185016. [Google Scholar] [CrossRef]
  152. Men, K.; Dai, J.; Li, Y. Automatic segmentation of the clinical target volume and organs at risk in the planning CT for rectal cancer using deep dilated convolutional neural networks. Med. Phys. 2017, 44, 6377–6389. [Google Scholar] [CrossRef]
153. Zhao, X.; Xie, P.; Wang, M.; Li, W.; Pickhardt, P.J.; Xia, W.; Xiong, F.; Zhang, R.; Xie, Y.; Jian, J.; et al. Deep learning–based fully automated detection and segmentation of lymph nodes on multiparametric MRI for rectal cancer: A multicentre study. EBioMedicine 2020, 56, 102780. [Google Scholar] [CrossRef]
  154. Wang, M.; Xie, P.; Ran, Z.; Jian, J.; Zhang, R.; Xia, W.; Yu, T.; Ni, C.; Gu, J.; Gao, X.; et al. Full convolutional network based multiple side-output fusion architecture for the segmentation of rectal tumors in magnetic resonance images: A multi-vendor study. Med. Phys. 2019, 46, 2659–2668. [Google Scholar] [CrossRef] [PubMed]
  155. Wang, J.; Lu, J.; Qin, G.; Shen, L.; Sun, Y.; Ying, H.; Zhang, Z.; Hu, W. Technical Note: A deep learning-based autosegmentation of rectal tumors in MR images. Med. Phys. 2018, 45, 2560–2564. [Google Scholar] [CrossRef] [PubMed]
  156. Huang, Y.-J.; Dou, Q.; Wang, Z.-X.; Liu, L.-Z.; Wang, L.-S.; Chen, H.; Heng, P.-A.; Xu, R.-H. HL-FCN: Hybrid loss guided FCN for colorectal cancer segmentation. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 195–198. [Google Scholar]
  157. Trebeschi, S.; Van Griethuysen, J.J.M.; Lambregts, D.; Lahaye, M.J.; Parmar, C.; Bakers, F.C.H.; Peters, N.H.G.M.; Beets-Tan, R.G.H.; Aerts, H.J.W.L. Deep Learning for Fully-Automated Localization and Segmentation of Rectal Cancer on Multiparametric MR. Sci. Rep. 2017, 7, 5301. [Google Scholar] [CrossRef]
  158. McVeigh, P.Z.; Syed, A.M.; Milosevic, M.; Fyles, A.; Haider, M.A. Diffusion-weighted MRI in cervical cancer. Eur. Radiol. 2008, 18, 1058–1064. [Google Scholar] [CrossRef]
  159. Niaf, E.; Rouviere, O.; Mège-Lechevallier, F.; Bratan, F.; Lartizien, C. Computer-aided diagnosis of prostate cancer in the peripheral zone using multiparametric MRI. Phys. Med. Biol. 2012, 57, 3833–3851. [Google Scholar] [CrossRef] [PubMed]
  160. Toth, R.; Ribault, J.; Gentile, J.; Sperling, D.; Madabhushi, A. Simultaneous segmentation of prostatic zones using Active Appearance Models with multiple coupled levelsets. Comput. Vis. Image Underst. 2013, 117, 1051–1060. [Google Scholar] [CrossRef] [Green Version]
  161. Qiu, W.; Yuan, J.; Ukwatta, E.; Sun, Y.; Rajchl, M.; Fenster, A. Dual optimization based prostate zonal segmentation in 3D MR images. Med. Image Anal. 2014, 18, 660–673. [Google Scholar] [CrossRef]
  162. Makni, N.; Iancu, A.; Colot, O.; Puech, P.; Mordon, S.; Betrouni, N. Zonal segmentation of prostate using multispectral magnetic resonance images. Med. Phys. 2011, 38, 6093–6105. [Google Scholar] [CrossRef] [PubMed]
  163. Litjens, G.; Toth, R.; van de Ven, W.; Hoeks, C.; Kerkstra, S.; van Ginneken, B.; Vincent, G.; Guillard, G.; Birbeck, N.; Zhang, J.; et al. Evaluation of prostate segmentation algorithms for MRI: The PROMISE12 challenge. Med. Image Anal. 2014, 18, 359–373. [Google Scholar] [CrossRef] [PubMed] [Green Version]
164. Bloch, N.; Madabhushi, A.; Huisman, H.; Freymann, J.; Kirby, J.; Grauer, M.; Enquobahrie, A.; Jaffe, C.; Clarke, L.; Farahani, K. NCI-ISBI 2013 Challenge: Automated Segmentation of Prostate Structures. Cancer Imaging Arch. 2015, 370, 6. [Google Scholar]
  165. Litjens, G.; Debats, O.; Barentsz, J.; Karssemeijer, N.; Huisman, H. ProstateX Challenge Database. 2017. Available online: https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=23691656 (accessed on 8 August 2021).
  166. Yu, L.; Cheng, J.-Z.; Dou, Q.; Yang, X.; Chen, H.; Qin, J.; Heng, P.-A. Automatic 3D Cardiovascular MR Segmentation with Densely-Connected Volumetric ConvNets. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada, 11–13 September 2017; Springer: Cham, Switzerland, 2017; pp. 287–295. [Google Scholar]
  167. Lemaître, G.; Martí, R.; Freixenet, J.; Vilanova, J.C.; Walker, P.M.; Meriaudeau, F. Computer-Aided Detection and diagnosis for prostate cancer based on mono and multi-parametric MRI: A review. Comput. Biol. Med. 2015, 60, 8–31. [Google Scholar] [CrossRef] [Green Version]
  168. Saidu, C.; Csato, L. Medical Image Analysis with Semantic Segmentation and Active Learning. Studia Univ. Babeș-Bolyai Inform. 2019, 64, 26–38. [Google Scholar] [CrossRef]
169. The Brigham and Women’s Hospital (BWH). BWH Prostate MR Image Database. 2008. Available online: https://central.xnat.org/data/projects/NCIGT_PROSTATE (accessed on 8 August 2021).
  170. Pekar, V.; McNutt, T.R.; Kaus, M.R. Automated model-based organ delineation for radiotherapy planning in prostatic region. Int. J. Radiat. Oncol. 2004, 60, 973–980. [Google Scholar] [CrossRef]
  171. Pasquier, D.; Lacornerie, T.; Vermandel, M.; Rousseau, J.; Lartigau, E.; Betrouni, N. Automatic Segmentation of Pelvic Structures From Magnetic Resonance Images for Prostate Cancer Radiotherapy. Int. J. Radiat. Oncol. 2007, 68, 592–600. [Google Scholar] [CrossRef] [PubMed]
  172. Kaur, H.; Choi, H.; You, Y.N.; Rauch, G.M.; Jensen, C.T.; Hou, P.; Chang, G.J.; Skibber, J.M.; Ernst, R.D. MR Imaging for Preoperative Evaluation of Primary Rectal Cancer: Practical Considerations. Radiographics 2012, 32, 389–409. [Google Scholar] [CrossRef]
  173. Hernando-Requejo, O.; López, M.; Cubillo, A.; Rodriguez, A.; Ciervide, R.; Valero, J.; Sánchez, E.; Garcia-Aranda, M.; Rodriguez, J.; Potdevin, G.; et al. Complete pathological responses in locally advanced rectal cancer after preoperative IMRT and integrated-boost chemoradiation. Strahlenther. Onkol. 2014, 190, 515–520. [Google Scholar] [CrossRef] [PubMed]
  174. Schipaanboord, B.; Boukerroui, D.; Peressutti, D.; van Soest, J.; Lustberg, T.; Dekker, A.; van Elmpt, W.; Gooding, M.J. An Evaluation of Atlas Selection Methods for Atlas-Based Automatic Segmentation in Radiotherapy Treatment Planning. IEEE Trans. Med. Imaging 2019, 38, 2654–2664. [Google Scholar] [CrossRef] [Green Version]
  175. Fritscher, K.D.; Peroni, M.; Zaffino, P.; Spadea, M.F.; Schubert, R.; Sharp, G. Automatic segmentation of head and neck CT images for radiotherapy treatment planning using multiple atlases, statistical appearance models, and geodesic active contours. Med. Phys. 2014, 41, 051910. [Google Scholar] [CrossRef] [PubMed]
176. Losnegård, A.; Hysing, L.; Kvinnsland, Y.; Muren, L.; Munthe-Kaas, A.Z.; Hodneland, E.; Lundervold, A. Semi-Automatic Segmentation of the Large Intestine for Radiotherapy Planning Using the Fast-Marching Method. Radiother. Oncol. 2009, 92, S75. [Google Scholar] [CrossRef]
  177. Haas, B.; Coradi, T.; Scholz, M.; Kunz, P.; Huber, M.; Oppitz, U.; André, L.; Lengkeek, V.; Huyskens, D.; Van Esch, A.; et al. Automatic segmentation of thoracic and pelvic CT images for radiotherapy planning using implicit anatomic knowledge and organ-specific segmentation strategies. Phys. Med. Biol. 2008, 53, 1751–1771. [Google Scholar] [CrossRef]
  178. Gambacorta, M.; Valentini, C.; DiNapoli, N.; Mattiucci, G.; Pasini, D.; Barba, M.; Manfrida, S.; Boldrini, L.; Caria, N.; Valentini, V. Atlas-based Auto-segmentation Clinical Validation of Pelvic Volumes and Normal Tissue in Rectal Tumors. Int. J. Radiat. Oncol. 2012, 84, S347–S348. [Google Scholar] [CrossRef]
  179. Oakden-Rayner, L. Exploring Large-scale Public Medical Image Datasets. Acad. Radiol. 2020, 27, 106–112. [Google Scholar] [CrossRef] [Green Version]
  180. Cuocolo, R.; Stanzione, A.; Castaldo, A.; De Lucia, D.R.; Imbriaco, M. Quality control and whole-gland, zonal and lesion annotations for the PROSTATEx challenge public dataset. Eur. J. Radiol. 2021, 138, 109647. [Google Scholar] [CrossRef]
  181. Wang, H.; Zhu, Y.; Green, B.; Adam, H.; Yuille, A.; Chen, L.-C. Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 108–126. [Google Scholar]
  182. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 213–229. [Google Scholar]
  183. Ranjbarzadeh, R.; Kasgari, A.B.; Ghoushchi, S.J.; Anari, S.; Naseri, M.; Bendechache, M. Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images. Sci. Rep. 2021, 11, 10930. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Rapid rise in the number of publications on DL-based segmentation research in medical imaging between 2016 and 2020; almost half of these studies were cancer-related.
Figure 2. Illustration of (a) a convolutional neural network (CNN) with fully-connected final layers for classification tasks, (b) a fully-convolutional network (FCN) for image-to-image or image-to-mask translation, and (c) the U-Net architecture, with skip connections between the encoder and decoder for more efficient feature extraction/reconstruction than an FCN.
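To make the encoder-decoder idea in Figure 2c concrete, the following is a minimal, illustrative PyTorch sketch of a U-Net-style network with skip connections. It is a toy example rather than the implementation of any study reviewed here; the two-level depth, channel widths (32/64/128) and input/output channel counts are arbitrary choices for illustration.

```python
# Minimal 2D U-Net-style encoder-decoder sketch (illustrative only; not the
# architecture of any specific reviewed study). Depths and channel widths
# are arbitrary example values.
import torch
import torch.nn as nn


def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    def __init__(self, in_ch: int = 1, n_classes: int = 1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.bottleneck = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = double_conv(128, 64)   # 64 upsampled + 64 skipped channels
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = double_conv(64, 32)    # 32 upsampled + 32 skipped channels
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                   # encoder level 1 (full resolution)
        e2 = self.enc2(self.pool(e1))       # encoder level 2 (1/2 resolution)
        b = self.bottleneck(self.pool(e2))  # bottleneck (1/4 resolution)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip from e2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip from e1
        return self.head(d1)                # per-pixel segmentation logits


if __name__ == "__main__":
    model = TinyUNet(in_ch=1, n_classes=1)
    logits = model(torch.randn(1, 1, 128, 128))  # one 1-channel 128x128 image
    print(logits.shape)  # torch.Size([1, 1, 128, 128])
```

Concatenating encoder features into the decoder (torch.cat along the channel axis) is what distinguishes the U-Net in Figure 2c from the plain FCN in Figure 2b: the decoder recovers fine spatial detail that pooling would otherwise discard.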
Figure 3. Boxplot of the number of training patients used in segmentation applications for bladder (CT studies: 4, MRI studies: 2), cervical (CT: 2, MRI: 5), prostate (CT: 12, MRI: 40) and rectal (CT: 2, MRI: 6, CT/MRI: 1) cancers. The average number of training patients across the 74 reviewed studies was 165. Outliers were excluded from this figure for visualization purposes.
Table 1. Summary of previous publications using DL-based automatic segmentation separated by pelvic anatomical regions (Bladder: 6, Cervix: 7, Prostate: 52, Rectum: 9 studies). The DSC and IoU are shown, where reported, with the DSC metrics in bold (for studies with multiple test results, the metrics calculated on public/external databases are presented). For studies that reported neither DSC nor IoU, the metrics used by the authors are included. MRI acquisition modes (2D, 3D) were retrieved based on the information provided in each published article and/or supplementary documents.
| Image Modality (MR Acquisition Mode) | Deep Learning Strategy | DL Network Dimension | Number of Patients (Train/Test) | Segmentation Evaluation Metrics | Year | Reference |
|---|---|---|---|---|---|---|
| Bladder Cancer | | | | | | |
| CT | U-Net | 2D/3D | 81/92 | Bladder (IoU: 0.85/0.82) | 2019 | [88] |
| CT | CNN + FCN (CRF-RNN) | 3D | 100/24 | Bladder (DSC: 0.92) | 2018 | [94] |
| CT | CNN | 2D | 62, leave-one-out cross validation | Bladder Tumor (area under the ROC curve (AUC): 0.73) | 2016 | [87] |
| CT | CNN | 2D | 81/92 | Bladder (IoU: 0.76) | 2016 | [93] |
| T2W (2D), DW (2D) MRI | AE + modified residual network (BW-Net) | 2D | 144/25 | Bladder Wall (DSC: 0.85) | 2020 | [96] |
| T2W MRI (3D) | U-Net with progressive dilated convolutions (U-Net Progressive) | 2D | 40/15 | Bladder Tumor (DSC: 0.68), Outer Wall (DSC: 0.83), Inner Wall (DSC: 0.98) | 2018 | [95] |
| Cervical Cancer | | | | | | |
| CT | U-Net with context aggregation blocks (CabUNet) | 2D | 77/14 | Bladder (DSC: 0.90), Bone Marrow (DSC: 0.85), L Fem. Head (DSC: 0.90), R Fem. Head (DSC: 0.90), Rectum (DSC: 0.79), Small Intestine (DSC: 0.83), Spinal Cord (DSC: 0.82) | 2020 | [97] |
| CT | Dual path U-Net (DpnUNet) | 2.5D | 210, five-fold cross validation | CTV (DSC: 0.86), Bladder (DSC: 0.91), Bone Marrow (DSC: 0.85), L Fem. Head (DSC: 0.90), R Fem. Head (DSC: 0.90), Rectum (DSC: 0.82), Bowel Bag (DSC: 0.85), Spinal Cord (DSC: 0.82) | 2020 | [98] |
| CT | U-Net | 3D | 100/25 | CTV (DSC: 0.86), Bladder (DSC: 0.88), Rectum (DSC: 0.81), L Fem. Head (DSC: 0.88), R Fem. Head (DSC: 0.88), Small Intestine (DSC: 0.86) | 2020 | [99] |
| CT | U-Net with residual connection, dilated convolution and deep supervision (DSD-UNet) | 3D | 73/18 | High-risk CTV (DSC: 0.82, IoU: 0.72), Bladder (DSC: 0.86, IoU: 0.77), Rectum (DSC: 0.82, IoU: 0.71), Small Intestine (DSC: 0.80, IoU: 0.69), Sigmoid (DSC: 0.64, IoU: 0.52) | 2020 | [100] |
| CT | V-Net | 3D | 2464/140 (+30 external test patients) | Primary CTV (UteroCervix) (DSC: 0.85), Nodal CTV (DSC: 0.86), PAN CTV (DSC: 0.76), Bladder (DSC: 0.89), Rectum (DSC: 0.81), Spinal Cord (DSC: 0.90), L Femur (DSC: 0.94), R Femur (DSC: 0.93), L Kidney (DSC: 0.94), R Kidney (DSC: 0.95), Pelvic Bone (DSC: 0.93), Sacrum (DSC: 0.91), L4 Vertebral Body (DSC: 0.91), L5 Vertebral Body (DSC: 0.90) | 2020 | [101] |
| MRI (unspecified) | Mask R-CNN | 2D | 5 (646 images split 9:1 for training and testing) | GTV + Cervix (DSC: 0.84), Uterus (DSC: 0.92), Sigmoid (DSC: 0.89), Bladder (DSC: 0.90), Rectum (DSC: 0.89), Parametrium (DSC: 0.66), Vagina (DSC: 0.71), Mesorectum (DSC: 0.68), Femur (DSC: 0.81) | 2019 | [102] |
| DW MRI (2D) | U-Net | 2D | 144/25 | Cervical Tumor (DSC: 0.82) | 2019 | [17] |
| Prostate Cancer | | | | | | |
| CT | U-Net (external commercial software) | 2D | 328/20 | Prostate (DSC: 0.79), Bladder (DSC: 0.97), Rectum (DSC: 0.78), Fem. Head (DSC: 0.91), Seminal Vesicles (DSC: 0.64) | 2020 | [103] |
| CT | U-Net | 3D | 900/30 | Prostate (DSC: 0.82), Bladder (DSC: 0.93), Rectum (DSC: 0.84), L Fem. Head (DSC: 0.68), R Fem. Head (DSC: 0.69), Lymph Nodes (DSC: 0.80), Seminal Vesicles (DSC: 0.72) | 2020 | [104] |
| CT | High-resolution multi-scale encoder-decoder network (HMEDN) | 2D | 180/100 | Prostate (DSC: 0.88), Bladder (DSC: 0.94), Rectum (DSC: 0.87) | 2019 | [105] |
| CT/Synthetic T2W MRI | CT-to-MR synthesis + Deep Attention U-Net (DAUNet) | 3D | 112/28, five-fold cross validation | Prostate (DSC: 0.87), Bladder (DSC: 0.95), Rectum (DSC: 0.89) | 2019 | [106] |
| CT | Modified U-Net | 3D | 313, five-fold cross validation | Prostate (DSC: 0.89), Bladder (DSC: 0.94), Rectum (DSC: 0.89) | 2019 | [107] |
| CT | Deep Neural Network (DNN) | 3D | 771/140 | Prostate (DSC: 0.88) | 2019 | [108] |
| CT | Deeply-supervised attention-enabled boosted convolutional neural network (DAB-CNN) | 3D | 80/20 | Prostate (DSC: 0.90), Bladder (DSC: 0.93), Rectum (DSC: 0.83), Penile Bulb (DSC: 0.72) | 2019 | [109] |
| CT | Distinctive curve guided fully convolutional network (FCN) | 2D | 313, five-fold cross validation | Prostate (DSC: 0.89), Bladder (DSC: 0.94), Rectum (DSC: 0.89) | 2019 | [110] |
| CT | U-Net | 2D | 60/25 | Prostate (DSC: 0.88), Bladder (DSC: 0.95), Rectum (DSC: 0.92) | 2018 | [111] |
| CT | 2D U-Net + 3D U-Net with aggregated residual networks (ResNeXt) | 2D/3D | 108/28, four-fold cross validation | Prostate (DSC: 0.90), Bladder (DSC: 0.95), Rectum (DSC: 0.84), L Fem. Head (DSC: 0.96), R Fem. Head (DSC: 0.95) | 2018 | [112] |
| CT | CNN + multi-atlas fusion | 2D | 92, five-fold cross validation | Prostate (DSC: 0.86) | 2017 | [31] |
| CT | FCN (based on LeNet) | 2D | 22, two-fold cross validation | Prostate (DSC: 0.89) | 2017 | [113] |
| T2W MRI (2D) | Adversarial pyramid anisotropic convolutional deep neural network (APA-Net) | 3D | 110, three-fold cross validation | Whole Prostate Gland (DSC: 0.90) | 2020 | [114] |
| T2W MRI (2D/3D) | DeepLabV3+ | 2D | 40 | Prostate Central Gland (DSC: 0.81), Peripheral Zone (DSC: 0.70) | 2020 | [115] |
| T2W (2D), DW (2D) MRI | Conditional GAN (cGAN)/Cycle-consistent GAN (Cycle-GAN) | 2D | 40/50 | Whole Prostate Gland (DSC: 0.75) | 2020 | [116] |
| T2W (2D), DW (2D) MRI | Mask R-CNN | 2D | 54/16 (+12 external test patients) | Whole Prostate Gland (DSC: 0.86), Prostate Tumor (DSC: 0.56) | 2020 | [117] |
| T2W MRI (2D) | Boundary-weighted domain adaptive neural network (BOWDA-Net) | 3D | 40/146 | Whole Prostate Gland (DSC: 0.91), Prostate Base (DSC: 0.89), Prostate Apex (DSC: 0.89) | 2020 | [118] |
| T2W MRI (2D) | Graph convolutional network (GCN) | 2D | 140, five-fold cross validation | Whole Prostate Gland (DSC: 0.93) | 2020 | [119] |
| T2W MRI (2D) | Dense U-Net | 2D | 141/47, four-fold cross validation | Whole Prostate Gland (DSC: 0.92), Central Gland (DSC: 0.89), Peripheral Zone (DSC: 0.78) | 2020 | [120] |
| T2W MRI (2D) | U-Net/Pix2pix | 2D | 40, four-fold cross validation | Prostate Central Gland (DSC: 0.86–0.88), Peripheral Zone (DSC: 0.90–0.83) | 2020 | [121] |
| T1W (3D), T2W (unspecified) MRI | Multi-scale DeepMedic | 3D | 97/53, three-fold cross validation | Bladder (DSC: 0.96), Rectum (DSC: 0.88), L Femur (DSC: 0.97), R Femur (DSC: 0.97) | 2020 | [122] |
| T2W MRI (2D) | Cascaded dual attention network (CDA-Net) | 3D | 40/109 | Whole Prostate Gland (DSC: 0.92) | 2020 | [123] |
| T2W MRI (2D) | Encoder-decoder with dense dilated spatial pyramid pooling (DDSPP) | 2D | 150 | Whole Prostate Gland (DSC: 0.95) | 2019 | [124] |
| T2W (2D), DW (2D) MRI | Mask R-CNN | 2D | 36 (split 7:2:1 for training, validation and testing) | Whole Prostate Gland (IoU: 0.84), Prostate Tumor (IoU: 0.40), Central Gland (IoU: 0.78), Peripheral Zone (IoU: 0.51) | 2019 | [125] |
| T2W (2D), DW (2D) MRI | U-Net | 2D | 100/125 | Whole Prostate Gland (DSC: 0.84), Central Gland (DSC: 0.78), Peripheral Zone (DSC: 0.69) | 2019 | [126] |
| T2W MRI (2D) | FCN with feature pyramid attention | 2D | 250/63 (+46 external test patients) | Prostate Transition Zone (DSC: 0.79), Peripheral Zone (DSC: 0.74) | 2019 | [127] |
| T2W MRI (3D) | Spatially-varying stochastic residual adversarial network (STRAINet) | 3D | 50, five-fold cross validation | Whole Prostate Gland (DSC: 0.91), Bladder (DSC: 0.97), Rectum (DSC: 0.91) | 2019 | [128] |
| T2W MRI (2D) | U-Net with “combo loss” | 3D | 700/258 | Whole Prostate Gland (DSC: 0.91) | 2019 | [129] |
| T2W MRI (unspecified) | DeepLabV3+ | 2D | 40/50 | CTV (DSC: 0.83), Bladder (DSC: 0.93), Rectum (DSC: 0.82), Penile Bulb (DSC: 0.74), Urethra (DSC: 0.69), Rectal Spacer (DSC: 0.81) | 2019 | [130] |
| T2W MRI (2D) | V-Net + variational methods | 3D | 85 | Whole Prostate Gland (DSC: 0.64) | 2019 | [131] |
| T2W MRI (2D) | Propagation Deep Neural Network (P-DNN) | 2D | 50/30 | Whole Prostate Gland (DSC: 0.84) | 2019 | [132] |
| T2W (2D), DW (2D) MRI | Cascaded U-Net | 2D | 76/51 | Whole Prostate Gland (DSC: 0.92), Peripheral Zone (DSC: 0.79) | 2019 | [133] |
| T2W MRI (3D) | Multi-view CNN | 2D | 19, leave-one-out cross validation | Prostate Tumor (DSC: 0.92, IoU: 0.67), Prostate Central Gland (IoU: 0.65), Peripheral Zone (IoU: 0.59) | 2019 | [134] |
| T2W MRI (2D) | Investigative CNN study (U-Net, V-Net, HighRes3dNet, HolisticNet, Dense V-Net, Adapted U-Net) | 3D | 173/59 | Whole Prostate Gland (DSC: 0.87) | 2019 | [135] |
| T2W MRI (2D) | Z-Net | 2D | 45/30 | Whole Prostate Gland (DSC: 0.90) | 2019 | [136] |
| T2W MRI (3D) | FCN | 3D | 60/10 | Whole Prostate Gland (DSC: 0.89), Bladder (DSC: 0.95), Rectum (DSC: 0.88) | 2018 | [137] |
| T2W MRI (2D) | SegNet | 2D | 16/5 (+19 external test patients) | Whole Prostate Gland (DSC: 0.75) | 2018 | [138] |
| T2W MRI (2D) | CNN + boundary detection | 3D | 50, five-fold cross validation | Whole Prostate Gland (DSC: 0.90) | 2018 | [139] |
| Dynamic Contrast-Enhanced (DCE) MRI (3D) | U-Net + Long Short-Term Memory (LSTM) | 3D | (15/2), three-fold cross validation | Whole Prostate Gland (DSC: 0.86) | 2018 | [140] |
| T2W MRI (2D) | FCN | 2D | 50/30 | Whole Prostate Gland (DSC: 0.87) | 2018 | [141] |
| T2W MRI (2D) | CNN | 2D | 20 | Whole Prostate Gland (DSC: 0.85) | 2018 | [30] |
| T2W MRI (2D) | CNN (PSNet) | 3D | 112/28, five-fold cross validation | Whole Prostate Gland (DSC: 0.85) | 2018 | [29] |
| T2W (2D), DW (2D) MRI | Deep dense multi-path CNN | 3D | 100/50 (+30 external test patients) | Whole Prostate Gland (DSC: 0.95) | 2018 | [142] |
| T2W MRI (2D) | U-Net | 3D | 26 | Whole Prostate Gland (DSC: 0.88) | 2018 | [143] |
| T2W MRI (2D) | Deeply-supervised CNN | 2D | 77/4 | Whole Prostate Gland (DSC: 0.89) | 2017 | [144] |
| T2W (2D), DW (2D) MRI | Auto-Encoder | 2D | 21, leave-one-out cross validation | Prostate Tumor (section-based evaluation (SBE): 0.89, sensitivity: 91%, specificity: 88%) | 2017 | [145] |
| T2W MRI (2D) | Holistically-nested FCN | 2D | 250, five-fold cross validation | Whole Prostate Gland (DSC: 0.89, IoU: 0.81) | 2017 | [146] |
| DW MRI (2D) | Modified U-Net with inception blocks | 2D | 141, four-fold cross validation | Whole Prostate Gland (DSC: 0.93), Transition Zone (DSC: 0.88) | 2017 | [147] |
| T2W MRI (2D) | ConvNet with mixed residual connections | 3D | 50/30 | Whole Prostate Gland (DSC: 0.87) | 2017 | [148] |
| T2W MRI (2D) | Stacked Sparse AE (SSAE) + sparse patch matching | 2D | 66, two-fold cross validation | Whole Prostate Gland (DSC: 0.87) | 2016 | [149] |
| T2W MRI (2D) | V-Net | 3D | 50/30 | Whole Prostate Gland (DSC: 0.87) | 2016 | [79] |
| T2W MRI (unspecified) | Stacked independent subspace analysis (ISA) | 2D | 30, leave-one-out cross validation | Whole Prostate Gland (DSC: 0.86) | 2013 | [150] |
| Rectal Cancer | | | | | | |
| CT | DeepLabV3+ | 2D | 98/63 | CTV (DSC: 0.88), Bladder (DSC: 0.90), Small Intestine (DSC: 0.76), L Fem. Head (DSC: 0.93), R Fem. Head (DSC: 0.93) | 2020 | [32] |
| CT/T2W MRI (2D) | CNN with cascaded atrous convolution (CAC) and spatial pyramid pooling module (SPP) | 2D | 100/70, five-fold cross validation | Rectal Tumor (DSC: 0.78), CTV (DSC: 0.85) | 2018 | [151] |
| CT | Dilated CNN (transfer learning from VGG-16) | 2D | 218/60 | CTV (DSC: 0.87), Bladder (DSC: 0.93), L Fem. Head (DSC: 0.92), R Fem. Head (DSC: 0.92), Intestine (DSC: 0.65), Colon (DSC: 0.62) | 2017 | [152] |
| T2W (2D), DW (2D) MRI | Mask R-CNN | 2D | 293/31 (+50 external test patients) | Lymph Nodes (DSC: 0.81) | 2020 | [153] |
| T2W MRI (2D) | CNN (transfer learning from ResNet50) | 2D | 461/107 | Rectal Tumor (DSC: 0.82) | 2019 | [154] |
| T2W MRI (3D) | U-Net | 2D | 93, ten-fold cross validation | Rectal GTV (DSC: 0.74, IoU: 0.60) | 2018 | [155] |
| T2W MRI (2D) | FCN (transfer learning from VGG-16) | 2D | 410/102 | Rectal Tumor (DSC: 0.84) | 2018 | [28] |
| T2W MRI (2D) | Hybrid loss FCN (HL-FCN) | 3D | 64, four-fold cross validation | Rectal Tumor (DSC: 0.72) | 2018 | [156] |
| T2W (unspecified), DW (2D) MRI | CNN | 2D | 70/70 | Rectal Tumor (DSC: 0.69) | 2017 | [157] |
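As a point of reference for the values reported in Table 1, the DSC [83] and IoU are both overlap measures between a predicted mask A and a ground-truth mask B: DSC = 2|A∩B|/(|A|+|B|) and IoU = |A∩B|/|A∪B|, related by IoU = DSC/(2 − DSC). The following is a minimal NumPy sketch of how these metrics can be computed; the masks and function names are illustrative, not taken from any reviewed study.

```python
# Minimal sketch of the overlap metrics reported in Table 1, assuming binary
# NumPy masks of identical shape (prediction vs. ground truth).
import numpy as np


def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))


def iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """IoU (Jaccard) = |A ∩ B| / |A ∪ B|; equals DSC / (2 - DSC)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection / (union + eps))


if __name__ == "__main__":
    gt = np.zeros((64, 64), dtype=np.uint8)
    gt[16:48, 16:48] = 1          # ground-truth square
    pred = np.zeros_like(gt)
    pred[20:52, 20:52] = 1        # prediction shifted by 4 pixels
    print(f"DSC: {dice_coefficient(pred, gt):.3f}, IoU: {iou(pred, gt):.3f}")
    # Prints DSC: 0.766, IoU: 0.620 for this example.
```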
Table 2. Public datasets available for prostate cancer segmentation, along with the studies whose results were evaluated on these databases. T1W: T1-weighted; T2W: T2-weighted; DW: Diffusion-weighted; PDW: Proton density-weighted; DCE: Dynamic contrast-enhanced; MRSI: Magnetic resonance spectroscopic imaging.
| Dataset | Image Modality (MRI Acquisition Mode) | Number of Patients | Ground-Truth Contours | URL | Studies |
|---|---|---|---|---|---|
| PROMISE12 [163] | T2W MRI (2D) | 80 | Whole Prostate Gland | https://promise12.grand-challenge.org/ [Accessed 21 October 2021] | [29,79,114,116,118,119,123,124,128,130,131,132,133,136,141,142,143,147,148] |
| I2CVB [167] | T2W (2D/3D), DW (2D), DCE (3D), MRSI (3D) MRI | 40 | Whole Prostate Gland, Peripheral Zone, Central Gland, Prostate Tumor | https://i2cvb.github.io/ [Accessed 21 October 2021] | [115,125,134,138,140,168] |
| BWH [169] | T1W (2D/3D), T2W (2D) MRI | 230 | Whole Prostate Gland | https://prostatemrimagedatabase.com/ [Accessed 21 October 2021] | [118,131] |
| ASPS13 [164] | T1W (2D), T2W (2D), DCE (3D) MRI | 156 | Whole Prostate Gland, Peripheral Zone | https://wiki.cancerimagingarchive.net/display/Public/NCI-ISBI+2013+Challenge+-+Automated+Segmentation+of+Prostate+Structures [Accessed 21 October 2021] | [29,114,123,124] |
| PROSTATEx [165] | T2W (2D), DW (2D), PDW (3D), DCE (3D) MRI | 330 (malignant lesions: 76, benign lesions: 245) | Prostate Tumor | https://prostatex.grand-challenge.org/ [Accessed 21 October 2021] | [120,125,127,129] |

PROMISE12: MICCAI Grand Prostate MR Image Segmentation 2012; I2CVB: Initiative for Collaborative Computer Vision Benchmarking; BWH: The Brigham and Women’s Hospital Database; ASPS13: NCI-ISBI 2013 Challenge for Automatic Segmentation of Prostate Structures; PROSTATEx: SPIE-AAPM-NCI Prostate MR Classification Challenge.