Review

Systematic Review of Tumor Segmentation Strategies for Bone Metastases

by Iromi R. Paranavithana 1,2, David Stirling 1,*, Montserrat Ros 1 and Matthew Field 2,3,4

1 Faculty of Engineering and Information Sciences, School of Electrical, Computer and Telecommunications Engineering, University of Wollongong, Wollongong, NSW 2522, Australia
2 Ingham Institute for Applied Medical Research, Liverpool, NSW 2170, Australia
3 Southwestern Sydney Cancer Services, NSW Health, Sydney, NSW 2170, Australia
4 School of Clinical Medicine, Southwestern Sydney Clinical Campus, UNSW, Sydney, NSW 2170, Australia
* Author to whom correspondence should be addressed.
Cancers 2023, 15(6), 1750; https://doi.org/10.3390/cancers15061750
Submission received: 15 February 2023 / Revised: 9 March 2023 / Accepted: 10 March 2023 / Published: 14 March 2023
(This article belongs to the Section Systematic Review or Meta-Analysis in Cancer Research)

Simple Summary

With recent progress in radiation therapy, patients with bone metastases can be treated curatively, provided that metastatic lesions are precisely delineated. Tumor segmentation is a highly active area of research, but only a limited number of studies have addressed bone metastases. This review investigates methods for differentiating benign from malignant bone lesions and for characterizing malignant bone lesions, specifically in the context of bone metastases. While computer vision techniques have opened new opportunities for quantifying cancer growth with minimal expert supervision, fully automatic segmentation algorithms still require improvement. This is partly due to the limited contrast between tumors and surrounding tissue and the lack of a widely agreed upon “gold standard” for defining tumor boundaries. Additionally, many studies do not provide evidence that their proposed methods are suitable for use in clinical practice.

Abstract

Purpose: To investigate segmentation approaches for bone metastases in differentiating benign from malignant bone lesions and in characterizing malignant bone lesions. Method: The literature search was conducted in the Scopus, PubMed, IEEE, MedLine, and Web of Science electronic databases following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A total of 77 original articles, 24 review articles, and 1 comparison paper published between January 2010 and March 2022 were included in the review. Results: Of the 77 original articles, most used neural network-based approaches (58.44%) and CT-based imaging (50.65%). However, the review highlights the lack of a gold standard for tumor boundaries and the need for manual correction of segmentation output, which largely explains the absence of clinical translation studies. Moreover, only 19 studies (24.67%) specifically addressed the feasibility of their proposed methods for use in clinical practice. Conclusion: The development of tumor segmentation techniques that combine anatomical information and metabolic activities is encouraging, even though no segmentation method is yet optimal for all applications or able to compensate for all the difficulties inherent in data limitations.

1. Introduction

Bone is one of the most common metastatic sites for cancer, especially lung, breast, and prostate cancer [1]. This type of metastasis is often painful, with a high risk of mortality. The median survival of patients suffering from bone lesions metastasized from breast, prostate, and renal cancer ranges between 12 and 33 months, while survival is critically low for patients with primary lung cancer and bone metastasis, with one-year survival ranging from 9.5% to 12% [1]. The extent of bone metastasis is strongly linked with shorter survival [2]. Generally, these patients are treated with palliative chemotherapy and radiotherapy in clinical practice [3]. More recently, advances in image-guided radiotherapy techniques, such as stereotactic body radiotherapy (SBRT), have enabled the delivery of potentially ablative radiation doses while respecting healthy tissue constraints [4,5,6]. Furthermore, clinical trials, such as the SABR-COMET trial, have shown the benefits of SBRT for metastatic disease [7]. Effective treatment methods can improve overall survival and long-term progression-free survival [8]. Predictions of treatment response and feasibility can be improved by quantifying the number of metastatic lesions, their locations, and the impact of radiomic biomarkers [9].
In medical image analysis, various modalities, such as positron emission tomography (PET) [10,11], whole-body Magnetic Resonance Imaging (MRI) [12], and bone scintigraphy [13], are used to support diagnosis and clinical follow-up. PET imaging offers functional detail and is commonly used to evaluate cancer [14]. Computed tomography (CT) in hybrid nuclear medicine equipment offers many advantages, such as attenuation correction and visual correlation between functional and anatomical images. Recent literature has shown that segmentation based on both CT and PET can determine the volume of interest (VOI) based on the anatomical contour [15]. Segmentation involves identifying the sets of pixels or voxels that form the tissue of interest [16]. Several reviews have reported medical image segmentation methods, along with their strengths and weaknesses, and discussed the challenges and outcomes [14,17,18,19,20,21,22,23,24,25,26,27,28]. A literature review by Sahiner et al. noted that establishing clinical significance is as important as establishing statistical significance. Incorporating expert medical knowledge to optimize methods can provide benefits beyond adding extra layers to a Convolutional Neural Network (CNN) model and can help radiologists accept the use of models [29]. Similarly, Zhang and Sejdić noted that although machine learning has many applications that help radiologists, it still cannot substitute for the clinician’s role due to existing limitations. One limitation is that many studies in radiology are based on supervised learning, where algorithms learn specific patterns from radiologists’ decisions. These segmentation decisions are typically made by very few radiologists and are subject to varying degrees of inter-observer variability. Therefore, further investigation is needed to decide whether a machine can perform alone with 100% accuracy or at least match inter-observer variability [28]. In a review of deep learning segmentation for radiotherapy, Samarasinghe et al. observed that clinical sites mostly used the U-Net architecture and mostly CT datasets from in-house sources [30]. Further research contributions are needed to justify the use of algorithms in clinical decision-making, improve patient outcomes, and translate these methods into clinical practice. It is difficult to interpret how models map inputs to outputs due to the large number of parameters used. Acceptance of a model is improbable if medical experts cannot validate the approach and understand the logical bases of the method [19,28].
Faiella et al. [31] investigated the potential role of radiomics as a decision-support tool for predicting bone disease status, distinguishing benign from malignant bone lesions, and characterizing malignant lesions at the genetic level, considering only CT and MRI imaging. Theirs was the first study we found that reviewed articles on bone metastases from a radiomics perspective. However, the paper lacks a discussion of the segmentation techniques used in the radiomics approach. To our knowledge, no review to date has presented bone metastasis segmentation approaches together with tumor segmentation using PET, CT, or PET/CT in radiation therapy. The main objective of this review was to present an overview of the latest research on cancer segmentation and bone metastasis segmentation in radiology images in the context of radiation therapy planning and to analyze and compare it with state-of-the-art techniques in computer vision.

2. Methods

2.1. Literature Search

The systematic review followed the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 (PRISMA) [32]. The search was conducted in the Scopus, PubMed, IEEE, MedLine, and Web of Science electronic databases for publication dates between 2010 and 2022. In addition, we used a Google search to identify additional records, and the reference lists of the included articles were cross-checked for any further articles. All original studies published in English, with full text available, reporting bone metastasis segmentation or tumor segmentation for radiotherapy patients with oligometastatic disease were included, as were studies that contained segmentation as only part of their work. Studies that used CT, PET, and PET/CT were included. Studies that used bone scans were not excluded, because bone scans are primarily used to detect metastases, which appear as hotspots. Studies on medical implants, virtual clinical trials, image registration, MRI, and PET/MRI were excluded, as were journal issue awareness services. When several versions of the same article existed, the latest version was cited.
The databases were searched on 1 April 2022. The query was designed to include all studies that contained one or more words from four groups: the first group comprised words associated with bones (bone and bones), the second comprised words associated with metastatic cancer (metastasis, metastases, metastatic, cancer, cancers, tumor, tumors, tumour, tumours, oligometastatic disease), the third comprised words associated with radiotherapy (radiation oncology, radiation therapy, radiotherapy), and the fourth comprised the term segmentation.
The complete search query used in the Scopus database was therefore:
“ALL (“bone and bones” OR “bone” OR “bones”) AND ALL (metastasis OR metastases OR metastatic OR cancer OR cancers OR tumor OR tumors OR tumour OR tumours OR “oligometastatic disease”) AND ALL (“radiation oncology” OR “radiation therapy” OR radiotherapy) AND ALL (segmentation) AND PUBYEAR > 2009 AND PUBYEAR < 2023”.
An equivalent query with the same keywords was used in other databases. We used the following search query to identify the additional studies using Google search:
  • “bone metastasis segmentation”.
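To keep the four keyword groups aligned across databases, the boolean query can be assembled programmatically. The following is a minimal sketch (our own illustration; the keyword groups are taken from the text above, and quoting and field syntax differ between databases):

```python
# Illustrative sketch: assembling the four-group boolean query in Scopus syntax.
bone = ['"bone and bones"', '"bone"', '"bones"']
disease = ["metastasis", "metastases", "metastatic", "cancer", "cancers",
           "tumor", "tumors", "tumour", "tumours", '"oligometastatic disease"']
therapy = ['"radiation oncology"', '"radiation therapy"', "radiotherapy"]
task = ["segmentation"]

def all_group(terms):
    """Wrap one keyword group in Scopus ALL(...) syntax, OR-ing its terms."""
    return "ALL(" + " OR ".join(terms) + ")"

query = " AND ".join(all_group(g) for g in (bone, disease, therapy, task))
query += " AND PUBYEAR > 2009 AND PUBYEAR < 2023"
print(query)  # paste into the database's advanced search
```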
After excluding duplicate articles and assessing the remaining articles for eligibility based on their title and abstract, only relevant publications proceeded to full-text screening. The first author (I.R.P.) performed the screening, and the second author (M.F.) reviewed the screening.

2.2. Data Extraction

The outcomes of interest were the segmentation approaches used for cancerous tumors and bone metastases. Data were extracted with regard to the following (a sketch of a machine-readable extraction record is given after the list):
  • Enrollment period of the patients;
  • Study type: retrospective or prospective cohort study;
  • Study population (the number of scans or images was extracted when patient numbers were not provided);
  • Training/validation/testing cohorts;
  • Primary tumor and relevant location;
  • Imaging modality;
  • Methodology;
  • Outcome;
  • Evaluation Metrics;
  • Details of whether the study mentioned the suitability of the approaches for clinical use;
  • Country of the Authors.
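As an illustration of how these fields map onto a machine-readable record (field names and types are our own, not the review’s actual tabulation), one could use:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """One row of the data extraction; field names are illustrative only."""
    enrollment_period: Optional[str]
    study_type: str                   # "retrospective" or "prospective"
    population: Optional[int]         # patients, or scans/images if unavailable
    cohorts: Optional[str]            # training/validation/testing split
    primary_tumor: Optional[str]      # primary tumor and relevant location
    imaging_modality: str             # e.g., "CT", "PET", "PET/CT"
    methodology: str
    outcome: str
    evaluation_metrics: List[str]
    clinical_feasibility_stated: bool
    author_country: str
```

A record like this makes it straightforward to tabulate the extracted data, as in Supplementary Table S1.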

3. Results

A flow diagram of the literature selection process is presented in Figure 1. We conducted a comprehensive literature search, using both databases and Google searches, to identify relevant studies on bone metastasis segmentation. A total of 2513 articles were identified through the initial database search, with an additional 302 papers found through alternative sources. After removing duplicates, 2524 records were screened based on their titles and abstracts. Of these, 2367 records were excluded based on the inclusion/exclusion criteria, leaving 157 full-text articles for further inspection.
After a detailed assessment of the full-text articles, 55 were excluded due to incomplete information or not meeting the inclusion/exclusion criteria. The most common reasons for exclusion were articles related to image registration, medical implants, and MRI or PET/MR studies. Finally, we included 102 full-text articles in our systematic review, with 24 review articles and 1 comparison study article focusing on techniques and technologies used in medical imaging for image analysis and segmentation. These papers provided background information for our study, with the remaining 77 original studies being the focus of our analysis. The categorization of the included articles is summarized in Table 1.
Of these original studies, 18 (23.37%) included segmentation tasks on cancer metastases. We considered segmentation of metastases separately from other targets to gauge the effort devoted to metastases in recent years. The numbers of studies that performed segmentation of OARs/organs, tumors, and target volumes (alone or together with OARs) were 22, 27, and 4, respectively, yielding 53 studies. An increasing pace of publication in tumor segmentation was observable in recent years.
Figure 2a shows that most of the papers used CT (50.65% of the 77 original works), while Figure 2b shows that 58.44% of the papers included in our review were based on deep learning techniques, with the remaining papers using thresholding, classification, clustering, statistical, atlas-based, and region-based techniques. Figure 2c shows the number of papers over time, with a dramatic increase in the number of publications in 2020 and 2022.
We found a lack of consistency in performance evaluation metrics, making cross-evaluation of segmentation approaches difficult (Figure 2d). As shown in Figure 2e, 27.27% (21 articles) of papers had first authors from China, while 16.88% (13 articles) had first authors from the United States. Collaborative research across multiple countries is crucial for advancing scientific knowledge and developing effective solutions to global challenges. However, our analysis of the 77 original articles revealed that only a small proportion (18.18%) involved collaboration among authors from two [37,55,63,65,71,75,78,81,84,85,105] or three [70,91,92] different countries. This suggests that there is still a lack of international collaboration and data sharing in the field. It is important to encourage and facilitate such collaborations to foster the exchange of ideas, resources, and expertise, ultimately leading to more impactful research outcomes. Lung cancer was the most commonly studied primary cancer type, with 21 articles, followed by prostate cancer with 11 (Figure 2f); however, 15 articles did not report the primary cancer type. This review presents different methods and approaches to the tumor segmentation problem, but not all of these methods have been rigorously tested in real-world clinical settings or validated against accepted standards or benchmarks. As a result, many proposed methods did not present sufficient evidence to demonstrate their suitability for widespread clinical use. Of the 77 original articles, only 19 (24.67%) reported on the feasibility of using their methods in a clinical setting, and even these required further study on the matter. The data extracted from all original articles are included in Supplementary Table S1.

4. Discussion

Several review papers in the literature discussed deep learning, machine learning, and other techniques separately on PET, CT, PET/CT, or bone scintigraphy images [19,29]. Some focused on only one imaging type [14,18]. No individual article covered a combination of different techniques applied across multiple imaging modalities. Existing reviews of segmentation approaches have focused on the concept of radiomics and identified some promising avenues for the future, both in terms of applications and technical innovations. However, these studies did not address whether the existing methods were feasible for use in clinical practice. The following sections discuss the broad approaches used for segmentation.

4.1. Deep Learning

Most of the deep learning papers in this review used either a CNN or U-Net (a CNN variant) as their core strategy. CNNs were mostly used for applications such as identification, diagnosis, classification, and segmentation of bone metastases [13,24,37,47,52,95,99,107,108], identification of critical regions associated with toxicities after liver SBRT [92], tumor co-segmentation [74,78], radiation dose calculations [112], and OAR segmentation [86,106].
Smaller datasets in deep learning can result in overfitting, and high-quality patient data are crucial to reduce bias in clinical practice. However, privacy and ethical concerns complicate the handling of medical data, and labeled data for training deep learning algorithms are scarce, since manual labeling is expensive and requires physician expertise. Labeling is also prone to uncertainty when physicians assign multiple classes per lesion [56]. As a solution, Lin et al. [44] developed a single-photon emission computerized tomography (SPECT) image annotation system based on the openly available tool LabelMe released by MIT (http://labelme.csail.mit.edu/Release3.0/) [113] for manual labeling of SPECT images, which have low spatial resolution. Apiparakoon et al. [56] used a semi-supervised learning method, the Ladder Feature Pyramid Network (LFPN), which incorporates an autoencoder structure in the ladder network to self-train on unlabeled data. Even though LFPN alone achieves a slightly lower F1-score than self-training, models with self-training require twice the training time of the semi-supervised approach. Some studies have also suggested pretraining the model with unlabeled data from related datasets to overcome the lack of labeled data [45,114].
Augmentation is another approach to addressing data limitations. Apiparakoon et al. [56] augmented a dataset by changing the light, contrast, and brightness to ensure consistency with the physician’s process. Several other augmentation techniques have been employed, including rescaling [13,33,39,43,44,45,54,58,59,60,98,107], rotation [13,33,39,43,44,45,54,58,59,60,77,98,107], zooming [13,44,107], shifting intensity [58,77], reflecting horizontally [77], translating the image [39,43,77], cropping [59], applying elastic deformations [45], gamma augmentation [45], and flipping [13,44,54,59,107]. Da Cruz et al. [60] further applied probabilistic Gaussian blur and linear contrast filters to augment the dataset. Furthermore, Zhang et al. [77] used more advanced augmentation methods: Mixup, random erasing, CutMix, and the Mosaic method. Mixup [115] generates additional samples during training by convexly combining random pairs of images and their associated labels; the authors utilized Mixup to deal with significant memory loss and the network’s inadequate sensitivity to the symmetry of GANs. Random erasing was performed on the data prior to the backbone network to prevent overfitting; the erased portions and locations are random in each round of training, with the erased section treated as either a blocked or distorted portion and processed by filling its pixels with a fixed color or with the mean of the RGB channels of all pixels. CutMix [116] was used to cut and paste lesion areas onto other background areas to improve the learning of lesion features and of positive features within unbalanced samples. The Mosaic [117] method employs several images simultaneously and can enrich the backgrounds of the discovered objects.
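Of these, Mixup is the simplest to state precisely: a coefficient is drawn from a Beta(α, α) distribution and each image/label pair is convexly combined with a randomly chosen partner in the batch. A minimal PyTorch-style sketch (our own illustration, not the implementation used in [77]):

```python
import torch

def mixup_batch(x, y, alpha=0.4):
    """Mixup [115]: convexly combine random pairs of images and labels.

    x: image batch (N, C, H, W); y: one-hot labels (N, K).
    Minimal sketch; alpha controls how strongly pairs are blended.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))          # random pairing within the batch
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return x_mixed, y_mixed

# Usage during training: x_aug, y_aug = mixup_batch(images, one_hot_labels)
```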
Transfer learning is another strategy authors use to deal with limited data [77,84,108]. Most studies use datasets from the same source(s) for training and testing, so generalizability is not well studied. Feng et al. [90] discovered that a DCNN model trained on a public dataset performed poorly on their institution’s data due to differences in clinical practice. Retraining with local cases improved performance, with retraining from scratch being slightly more effective than transfer learning; collecting more training data was not observed to remedy the poor performance. In contrast, Protonotarios et al. [71] introduced a dynamic information fusion scheme by applying a few-shot learning (FSL) framework. The FSL approach built a user-centric model of re-training that constantly improves with end-user feedback: during deployment, the end user may assess the model’s outputs and, if erroneous, correct them.
Han, Oh, and Lee [41] proposed two CNN architectures trained on limited data: (1) a whole-body-based (WB) architecture and (2) a tandem architecture, named “global-local unified emphasis” (GLUE), which uses the whole-body bone scan together with local 256 × 512 patches, followed by a final fully connected deep neural network that integrates the global (i.e., whole-body) and local (i.e., patch) information. Compared with classical 2D CNN models, the GLUE model achieved higher performance on the limited data in this case [41].
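A tandem global-plus-local design of this kind can be sketched as two convolutional branches whose pooled features are concatenated and passed to a fully connected head. The following is a minimal sketch only; the layer widths are our own placeholders, not the architecture of [41]:

```python
import torch
import torch.nn as nn

class TandemGlobalLocal(nn.Module):
    """Minimal global+local tandem classifier in the spirit of GLUE [41].
    Layer widths are illustrative placeholders, not the published design."""
    def __init__(self, n_classes=2):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> (N, 32)
        self.global_branch = branch()   # whole-body bone scan
        self.local_branch = branch()    # local 256 x 512 patch
        self.head = nn.Sequential(      # fully connected fusion network
            nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, whole_body, patch):
        feats = torch.cat([self.global_branch(whole_body),
                           self.local_branch(patch)], dim=1)
        return self.head(feats)

# Usage: logits = TandemGlobalLocal()(scan_batch, patch_batch)
```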
A limitation of the model-based approaches is the potential for bias in small datasets, such as gender-biased data [47]. Insufficient data to train the models was an issue in most of the studies in this review [13,39,44,45,47,65,90,105,106,107]. Lu et al. [66] adopted a strategy of transforming the image segmentation problem into a pixel-wise classification problem: each pixel is regarded as an independent training sample during network training, increasing the sample size significantly. Furthermore, the authors improved the efficiency of network training and reduced overfitting by applying Adam stochastic optimization [118] and batch normalization [119]. In various tasks, network weights are usually initialized from models (such as VGG19) trained on the ImageNet dataset. Sartor et al. [106] indicated that more data, and more consistently annotated data, were needed for their model to achieve higher CNN overlap and enable future clinical implementation. Additionally, Lou et al. [111] reported that their study could not account for all biases due to population heterogeneity in their datasets (clinical stage, radiation dose, CT scanners, and motion management) and the limited size of the independent validation cohort. Song et al. [50] found that noisy CT images caused false positive classifications of bone metastasis and that some areas of the lesion could not be detected; the authors suggest that building a 3D voxel detector may eliminate these issues.
Two papers in this review focused on automatic segmentation for treatment response [45] and treatment planning [89] for metastatic lesions. Moreau et al. [45] compared two methods for bone lesion segmentation in metastatic breast cancer based on the nn-Unet [120] architecture: (1) using lesion annotations with PET and CT images as a 2-channel input; (2) using both the reference bone and lesion masks as ground truth. The use of bone masks improved precision and slightly improved the Dice score for bone lesion segmentation. Moreau et al. [45] also proposed two nn-Unet segmentation models to compute imaging biomarkers for treatment response from baseline and follow-up images. When manually segmenting or assessing treatment responses, experts usually look at both baseline and follow-up acquisitions to decide the patient’s response; therefore, two input channels, the baseline PET image and the lesion segmentation on the baseline PET, were added to the follow-up network. Four imaging biomarkers were computed from the manual and automatic segmentations, and these produced promising results for predicting treatment response. Improved results can be obtained using multimodal imaging such as PET/CT [36]. Arends et al. [89] showed that automatic vertebral body delineation using a CNN was of high quality, which can save time in a clinical radiotherapy workflow.
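Once the PET and CT volumes are co-registered and resampled onto a common grid, the 2-channel input described above reduces to stacking the arrays along a channel axis. A minimal sketch (array names and shapes are our own placeholders):

```python
import numpy as np

# pet and ct: co-registered volumes resampled to the same grid, shape (D, H, W).
pet = np.random.rand(64, 128, 128).astype(np.float32)  # placeholder PET volume
ct = np.random.rand(64, 128, 128).astype(np.float32)   # placeholder CT volume

# Stack along a leading channel axis -> (2, D, H, W), a common layout
# for multi-channel 3D network inputs such as the 2-channel setup in [45].
two_channel_input = np.stack([pet, ct], axis=0)
```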
Deep neural networks are based on complex, inter-connected hierarchical representations of the training data; however, interpreting these representations is quite demanding [107]. While interpretability needs to be enhanced, the research community should further investigate how to measure sensitivity and visualize features. Model transparency and interpretability are important to explain the model, understand the value, and ensure the robustness of the findings. For instance, Apiparakoon et al. [56] stated that they extracted global features from the core network, but the features were not mentioned in the study. This makes it difficult to detect what the model focuses on and to provide explanations of why the model makes its categorizations. The generalizability of these methods also requires further evaluation to embed them in clinical decision support systems.

4.2. Thresholding

Thresholding is a simple segmentation technique that converts a gray-level image into a binary image by defining all voxels greater (or lower) than a given value as foreground and the rest as background [14]. Various types of thresholding, including fixed, iterative, adaptive, and regional, are used for different applications, such as tumor segmentation, OAR segmentation, detection of increased-uptake regions in bone scintigraphy, quantification of bone metastasis, and detection of bone lesions. Thresholding-based segmentation of PET/CT images operates on the Hounsfield Unit (HU) and the Standardized Uptake Value (SUV). A CT image voxel is expressed in HU, on a scale ranging from −1000 to approximately 30,000 [121]; DICOM stores the pixel values of images in 12- to 16-bit formats. CT threshold segmentation can target high-density regions, such as bone [122]. In contrast, image segmentation on PET scans based on thresholding employs intensity probabilities derived from image histograms. The SUV is a normalized semiquantitative parameter derived from the intensity of PET images and DICOM metadata, including acquisition time and radiotracer dose, and it is then used for image segmentation [14].
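As a concrete illustration of the SUV derivation just described (a simplified sketch; clinical software reads these quantities from DICOM tags and follows vendor-specific decay-correction conventions):

```python
F18_HALF_LIFE_S = 109.77 * 60  # physical half-life of 18F, in seconds

def suv_body_weight(activity_bq_ml, injected_dose_bq, weight_kg, delay_s):
    """Body-weight SUV from a PET activity value or array (Bq/mL).

    Simplified sketch: decay-correct the injected dose to scan time, then
    normalize the voxel activity by dose per gram of body weight. In practice
    the inputs come from DICOM metadata (acquisition time, radiotracer dose).
    """
    dose_at_scan_bq = injected_dose_bq * 2.0 ** (-delay_s / F18_HALF_LIFE_S)
    return activity_bq_ml * (weight_kg * 1000.0) / dose_at_scan_bq

# A fixed-SUV segmentation is then a boolean mask, e.g. at the SUV
# threshold of 2.5 reported as optimal for lesion quantification in [40]:
# mask = suv_volume >= 2.5
```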
The thresholding-based papers reviewed in this article addressed the detection, segmentation, and quantification of bone, bone metastases, and bone lesions [40,68,70,82,96,100]. Detection involves localizing organs, landmarks, or lesions in medical images [19,70,100], whereas segmentation aims to obtain detailed boundaries of the structures [68,82,96,100]. Quantification of detected lesions or metastases focuses on extracting features for further analysis. For example, total bone metastasis was quantified using total bone metastasis volume, the percentage of affected bone tissue, SUVmean and SUVmax in the affected tissue, the Z-transformed deviation of the SUV in the affected tissue from the average SUV in nonaffected tissue, and the total metastasis count [40].
Some authors used hybrid methods by combining thresholding with other methods, such as flood filling algorithms [82] and graph cut algorithms [96]. Fränzle et al. [82] built a fully automated shape model positioning for bone segmentation in whole-body CT scans using fixed thresholding for skeleton segmentation and a flood filling algorithm for segmentation of the medullary cavities inside the skeleton. The proposed method provides all the information needed for the automatic selection and initialization of a statistical shape model for long bone segmentation. Nguyen et al. [96] suggested a framework for segmenting spinal marrow compartments from full-body joint PET/CT scans acquired after bone marrow transplantation. It included three main components: full body graph cut segmentation, spinal column vertebral body segmentation, and cancellous region extraction.
The main limitations of studies that use thresholding for PET imaging involve low resolution with high contrast, the large variability of pathologies, inherent noise, and high uncertainty in fuzzy object boundaries. There is no consensus on the selection of an SUV threshold [14]. Tsujimoto et al. [100] showed that improvements can be made in setting the threshold values, especially by analyzing the feasibility of other thresholding techniques and threshold derivation algorithms. Hammes et al. [40] found that the HU threshold had no significant influence, whereas an SUV threshold of 2.5 proved optimal for automated lesion quantification. Lesions with intense tracer uptake might lead to errors in estimates of the total affected bone volume, because the high-uptake area might exceed the true anatomic borders of the lesion, causing overestimation of the affected bone volume. Moussallem et al. [68] identified respiratory motion as the main difficulty limiting the segmentation of lung tumors in PET/CT images. Partial volume effects related to the resolution of the PET/CT scanner, together with motion, can cause inaccuracies for small lesions. To increase measurement accuracy, further studies should consider respiratory movement (using newer and more accurate PET/CT devices) and lesion size. Nguyen et al. [96] found a need for interpolation at the boundaries of the segmented marrow compartment to account for the physical size difference between voxels in the PET and CT modalities. Clinical practice is limited to a small number of manually delineated ROIs in 18F-fluoro-L-deoxythymidine SUV measurement; moreover, thresholding is not the best method for detecting the boundaries of these lesions. Therefore, Perk et al. [70] proposed a statistically optimized regional thresholding (SORT) method for bone lesion detection in 18F-NaF PET/CT imaging. Some patients appeared to have higher healthy-bone uptake levels; in such patients, the false positive rate may therefore be elevated.

4.3. Clustering/Classification

Classification is a supervised learning technique aimed at partitioning a feature space derived from an image using labels provided for training. Clustering methods group the feature space into regions or proposed classes without labels. These techniques generally do not incorporate spatial information unless it is included in the feature space derivation. Examples of classification and clustering methods used to segment tumors include Random Forest (RF) [2,87,103], Support Vector Machines (SVM) [51,55,63,103,109], fuzzy clustering [101], Decision Tree (DT) [63,67], K-nearest neighbors (KNN) [67], K-means [73], parallelepiped classification [38], and Fuzzy C-Mean (FCM) [69].
Two studies derived useful information regarding the classification process and the ground truth values. Chu et al. [2] developed an RF classifier to segment tumors on bone scans using intensity and context features, aiming to address areas prone to false positives, and found that context features played a critical role. Furthermore, their study performed well in areas where tumors and high-intensity non-tumors were in close proximity, which could be attributed to the restrictiveness of a rule-based approach compared to a learning-based approach. Markel et al. [67] addressed the challenge of determining the ground truth when validating an image segmentation method. They used the simultaneous truth and performance level estimation (STAPLE) algorithm to combine the GTVs into probabilistic maps for each patient. The results showed that all of the algorithms they tested performed better on the test data than on the training data, which is indicative of a more reliable ground truth. They also showed that the use of texture features in PET/CT images is a promising approach for target delineation in radiotherapy of the lung. Hinzpeter et al. [42] conducted a proof-of-concept study to investigate whether radiomics of CT image data enables the differentiation of bone metastases, using 68Ga-PSMA PET imaging as a reference standard. The trained gradient-boosted tree achieved an accuracy of 0.9 when applied to its original, non-augmented dataset.
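In the spirit of the intensity-plus-context voxel features discussed for [2], a per-voxel classifier can be prototyped with scikit-learn; the feature choices below (raw intensity plus local means at two scales) are our own illustration, not the features of the original study:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def voxel_features(image):
    """Per-voxel features: raw intensity plus simple context
    (local means at two neighborhood sizes). Illustrative only."""
    return np.stack([image,
                     uniform_filter(image, size=3),
                     uniform_filter(image, size=9)], axis=-1)

image = np.random.rand(64, 64).astype(np.float32)   # placeholder scan slice
labels = (image > 0.8).astype(int)                  # placeholder ground truth

X = voxel_features(image).reshape(-1, 3)            # one row per voxel
y = labels.ravel()

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred_mask = clf.predict(X).reshape(image.shape)     # per-voxel tumor mask
```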
Some authors used hybrid methods, such as combining SVM with wavelet transforms, Naïve Bayes, or DT [55,109]. AbuBaker and Ghadi proposed a novel algorithm for the detection and enhancement of cancerous nodules in CT images using SVM and the wavelet transform; using both wavelet and SVM features reduced the predicted false positive regions in the processed CT images. Hussain et al. [63] presented an automated lung cancer detection system based on multimodal features, such as texture, morphological, entropy-based, scale-invariant feature transform (SIFT), and elliptic Fourier descriptor (EFD) features, using machine learning techniques such as SVM, Naïve Bayes, and DT. Wiese et al. [51] detected sclerotic bone metastases in the spine using a watershed algorithm and SVM. Complexity due to heterogeneity, poor isolation, and additional lesions in single clinical cases was addressed by training the SVM on 3D features and imposing additional constraints (overlap and intensity) during the merge into three dimensions. The proposed model could increase sensitivity in the initial detection of sclerotic metastatic lesions in the spine and in the assessment of bone tumor burden in cases of known sclerotic bone metastasis.
The drawbacks of these studies can be found throughout the segmentation process. Markel et al. [67] identified that, in the preprocessing stage of segmenting lung cancer, a tumor may present a necrotic core with low uptake, which resulted in small cavities in the segmentation. As a solution, they introduced a fill procedure in the post-processing step, which worked well because a segmentation is a closed shape. Furthermore, they suggested incorporating 4D-PET images to better coincide with gated CT images and reduce motion blurring. Naqiuddin et al. [69] segmented CT images into bone, brain, and tumor regions using an FCM algorithm; they identified a sample size issue and gender bias in their dataset. Generally, fuzzy clustering techniques exclude spatial information when assigning memberships to individual data points, even though they perform well in categorizing heterogeneous data. This can be an issue, as medical images present a high degree of spatial correlation between tissues, and the technique is sensitive to noise. This issue was addressed by Slattery [101] using an additional membership function to include spatial information. In the model of Wiese et al. [51] for detecting sclerotic bone metastases, some lesions were missed due to weaknesses of the watershed algorithm, and the feature filter eliminated some true detections due to the intensity contrast of the lesion with the surrounding osseous material. After watershed segmentation, the authors implemented a merging routine as a solution to over-segmentation. Polan et al. [87] observed a limitation in segmenting tissues using RF: the Trainable Weka Segmentation (TWS) tool limits optimization of the algorithm, so the number of trees and the leaf size of the classifier ensemble were not optimized. Further, the creation of voxel features for training and classification in TWS was limited to the minimum, maximum, mean, and variance of the region of voxels.
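FCM itself is compact enough to sketch: the update rules alternate between membership-weighted cluster centers and memberships proportional to d^(−2/(m−1)), normalized over clusters. The following plain-NumPy sketch clusters a 1D feature vector (e.g., flattened intensities) and, as noted above, omits the spatial term that Slattery [101] adds through an extra membership function:

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Plain FCM on a 1D feature vector; minimal sketch without a spatial term."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                          # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)     # membership-weighted centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))        # u_ik proportional to d^(-2/(m-1))
        u /= u.sum(axis=0)                      # renormalize over clusters
    return centers, u

# Usage: centers, memberships = fuzzy_c_means(ct_slice.ravel().astype(float))
# hard_labels = memberships.argmax(axis=0).reshape(ct_slice.shape)
```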

4.4. Statistical Methods

Various studies have employed statistical methods, such as the Active Shape Model (ASM) [48], a Bayesian delineation framework [103], gradient-based segmentation [57], a geometrical shape model under a Bayesian framework [102], and a fuzzy Markov random field (MRF) model [62].
Rachmawati et al. [48] utilized an ASM to segment cancer metastases in bone scan images of Indonesian patients, which resulted in a shape estimate for each predefined region of the bone scan image. Sheen et al. [123] compared a fixed thresholding-based method with gradient-based edge detection in terms of the resulting radiomic signatures and prediction models; the results showed that gradient-based edge detection derived significant radiomic features for the model.
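The basic ingredient of gradient-based edge detection is a gradient-magnitude map, whose ridges mark candidate boundaries instead of a fixed intensity cut-off. A generic Sobel-based sketch (our own illustration, not the method compared in [123]):

```python
import numpy as np
from scipy import ndimage

def gradient_magnitude(image):
    """Sobel gradient magnitude of a 2D image; high values mark edges."""
    gx = ndimage.sobel(image, axis=0, mode="nearest")
    gy = ndimage.sobel(image, axis=1, mode="nearest")
    return np.hypot(gx, gy)

image = np.random.rand(128, 128).astype(np.float32)  # placeholder slice
edges = gradient_magnitude(image)
# Lesion boundaries are then traced along maxima of `edges` rather than
# by thresholding the raw intensities.
```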
Both Ninomiya et al. [103] and Guo et al. [62] used Bayesian models. Ninomiya et al. [103] used an anatomical features-based machine learning technique to develop a Bayesian delineation framework of Clinical Target Volumes (CTV) for prostate cancer. One of the drawbacks of the Bayesian approaches in this scenario is the localization of the CTV to place probabilistic atlases (PAs). This Bayesian framework did not work well when the CTVs were far from the average CTV position. The proposed framework, using anatomical-features-based machine learning (AF-ML), more accurately extracted the CTVs of prostate cancer. Additionally, Guo et al. [62] utilized a fuzzy MRF model to segment lung tumors on PET/CT images. Unlike the traditional fuzzy MRF model method, it utilizes a new joint posterior probabilistic model, which can effectively take advantage of both CT and PET image information for the identification and delineation of tumor volume. For lung tumors located near other tissues with similar intensities in PET and CT images, such as when they extend into the chest wall or the mediastinum, this method was able to achieve more effective tumor segmentation.
Geographical bias was an issue in some of the studies in this review. In Rachmawati et al. [48], training data from non-Indonesian patients improved generalizability; however, if the bone geometry in bone scan images varies too much across countries, it might degrade the accuracy of metastasis detection, because bone geometry is strongly influenced by ethnicity [48,124]. Similarly, the study by Zhou et al. [109], discussed in the deep learning section, demonstrated geographical bias in that it was limited to Asian patients.

4.5. Atlas-Based Approaches

Segmentation is often limited by low contrast between adjacent tissues, but prior knowledge can improve it. A widely used method is to incorporate prior knowledge from a reference image called an atlas, which provides an estimate of an object’s position, much as a map describes the components of a geographical area, and helps to distinguish adjacent objects of interest with similar features [16].
Some authors utilized multiple atlases [78,125], while others used a combination of different techniques with a specific atlas [84,91]. Hanaoka et al. [83] utilized multiple atlases registered to the unseen target volume by a novel landmark-guided diffeomorphic demons algorithm to segment the whole spine and pelvis in a CT image. One of the advantages of this algorithm is the diffeomorphism/invertibility of the deformation field: invertibility is required when both image(s) and landmark(s) must be warped, because the deformation field for warping images is not the same as the field for warping landmarks but its inverse. Furthermore, the multi-atlas-based approach used by Yusufaly et al. [97] may allow active bone marrow sparing in radiotherapy settings where PET/CT is unavailable. Although previous experience strongly suggests that active bone marrow sparing is causally related to a reduction in hematologic toxicity, more outcome data are required to conclusively verify the benefits of an atlas-based approach. Fritscher et al. [91] utilized an atlas-based segmentation approach in combination with label fusion to initialize a segmentation pipeline employing statistical appearance models and geodesic active contours. The proposed hybrid approach, Multi-Atlas-Based Segmentation (MABS), provided more accurate results within a clinically acceptable amount of time, even in the presence of noise and low image contrast. However, MABS lacks the ability to provide anatomically plausible segmentation results and partly shows less accurate results near boundaries. Another hybrid approach, by Ruiz-España et al. [88], was developed for the automatic segmentation of vertebrae from CT images by combining two different segmentation methods: the level-set method and a probabilistic atlas. However, the generalizability of this method to other structures, for which clear anatomical feature points are less reliable, needs to be investigated.
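The label fusion step in multi-atlas pipelines can be as simple as a per-voxel majority vote over the label maps warped from each atlas. A minimal NumPy sketch of that simplest variant (illustrative only; MABS [91] and STAPLE-style fusion weight the atlases rather than voting uniformly):

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Fuse integer label volumes, already warped to the target, by
    per-voxel majority vote. Minimal sketch of the simplest label fusion."""
    stacked = np.stack(atlas_labels, axis=0)        # (n_atlases, D, H, W)
    n_labels = int(stacked.max()) + 1
    votes = np.stack([(stacked == k).sum(axis=0)    # vote count per label
                      for k in range(n_labels)])
    return votes.argmax(axis=0)                     # winning label per voxel

# Usage: fused = majority_vote_fusion([warped_a, warped_b, warped_c])
```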
Several limitations were observed in the atlas-based approaches. The test dataset in the study of Hanaoka et al. [83] included only healthy spines or spines with osteoporosis; spines with scoliosis, lordosis, postsurgical changes, or bone metastasis were not included. Another problem is that spines with abnormal numbers of vertebrae were excluded from the dataset, even though such anatomical anomalies are quite common [78]. A typical problem for geodesic active contours (GAC) is leakage of the evolving contour into neighboring structures in the absence of strong boundaries, which can be avoided by combining GAC with InShape models. The downside of many model-based segmentation approaches is their susceptibility to local minima; this was overcome by using MABS as an initialization, restricting the final optimization to a local search space, with all quantitative tests carried out using leave-one-out cross-validation [91]. To improve the accuracy and efficiency of atlas-based auto-segmentation methods, further implementation of and investigation into artificial intelligence with deep learning algorithms are needed, as are further investigations into the feasibility of RT plans based on ABAS-generated contours for both CTVs and OARs [64].

4.6. Region-Based Approaches

Region-based methods can be categorized into region-growing or graph-based methods, which consider homogeneity when determining the object boundaries [14]. The main assumption in region growing for segmentation is that the region of interest has nearly constant or slowly varying intensity values to satisfy the homogeneity requirement. This method incorporates spatial information along with the intensity, which is an advantage over thresholding. However, different homogeneity criteria and initial seed locations can affect the segmentation results and require tuning [14].
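A minimal 2D region-growing sketch makes the homogeneity assumption concrete: starting from a seed, 4-connected neighbors are accepted while their intensity stays within a tolerance of the running region mean (a generic illustration; criteria and seed placement require tuning in practice, as noted above):

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=0.1):
    """Grow a region from `seed`, accepting 4-connected neighbors whose
    intensity lies within `tol` of the running region mean."""
    mask = np.zeros(image.shape, dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1          # running sum and region size
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(float(image[nr, nc]) - total / count) <= tol):
                mask[nr, nc] = True
                total += float(image[nr, nc])
                count += 1
                queue.append((nr, nc))
    return mask

# Usage: lesion_mask = region_grow(ct_slice, seed=(40, 52), tol=30.0)
```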
Region-growing methods, like thresholding, are sensitive to noise in the image and can lead to leakage. The method proposed by Yang et al. [76] addressed this issue with a lung tumor segmentation based on multi-scale template matching and region growing; however, the sample size limited the evaluation of the technique. Dong et al. [81] proposed a method combining mathematical morphology, based on a labeling algorithm, with graph cuts to segment vertebrae in 100 slice images of 10 patients with bone metastasis; the proposed method outperformed the conventional graph cut method. For lung segmentation, Elsayed et al. [61] applied a region-growing technique to isolate the human body, followed by thresholding and the Hessian method for vascular tree segmentation. This approach precisely extracted nodule features, and several classifiers and their combinations were applied to classify nodules as malignant or benign.
Meanwhile, graph cuts have the advantage of achieving fast and accurate segmentation of the target with little intervention from radiologists, as the method utilizes both boundary and regional information [81].

5. Conclusions

This paper analyzed the literature on tumor segmentation approaches, with a focus on bone metastases. We found that the development of segmentation techniques that combine anatomical information and metabolic activities (e.g., PET/CT) shows encouraging results. However, the lack of a gold standard for tumor boundaries is a major hindrance to the acceptance of fully automatic segmentation. Most algorithms need manual correction of the segmentation output, which largely explains the absence of clinical translation studies. AI-based methods may be better suited as an assistant for the clinician to overcome the repetitive and time-consuming task of identifying and segmenting lesions while providing a measurement of whole-body tumor burden.
To fully understand the methods and algorithms that can deliver proper treatment planning to individual patients, more comprehensive studies that limit data biases are required. When developing AI-based methods, it is crucial to use an appropriate method as a baseline. The nn-Unet framework is a state-of-the-art deep learning method capable of automatically setting hyper-parameters while considering factors such as input data features and memory consumption. Using simple thresholding as a baseline before applying more complex methods is also advisable, as it provides an understanding of the images’ features and the ability to conduct independent experiments.
Open-source software, such as 3D Slicer, can be used for initial visualization and segmentation tasks, as it is a reliable platform for medical image analysis, visualization, and clinical support. It is beneficial for the research community to recreate and build upon previous findings in similar research areas with different datasets. Although researchers need to make their code available for this to occur, many of the reviewed papers lack readily available code. It is recommended that the code in future studies be made readily accessible in open-source formats.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers15061750/s1, Table S1: Extracted data from the original research articles.

Author Contributions

Conceptualization, I.R.P., D.S., M.F. and M.R.; investigation, I.R.P.; data curation, I.R.P.; writing—original draft preparation, I.R.P.; writing—review and editing, D.S., M.F. and M.R.; supervision, D.S., M.F. and M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the “University Postgraduate Award” and “International Postgraduate Tuition Award” from the University of Wollongong, Australia and the “National Health and Medical Research Council program grant 2018–2022” APP1132471 and was partially funded by the NSW Government through the Cancer Institute NSW Early Career Researcher Fellowship: 2019/ECF004.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Svensson, E.; Christiansen, C.F.; Ulrichsen, S.P.; Rørth, M.R.; Sørensen, H.T. Survival after bone metastasis by primary cancer type: A Danish population-based cohort study. BMJ Open 2017, 7, e016022.
  2. Chu, G.; Lo, P.; Ramakrishna, B.; Kim, H.; Morris, D.; Goldin, J.; Brown, M. Bone Tumor Segmentation on Bone Scans Using Context Information and Random Forests; Springer International Publishing: Cham, Switzerland, 2014.
  3. Peeters, S.T.H.; Van Limbergen, E.J.; Hendriks, L.E.L.; De Ruysscher, D. Radiation for Oligometastatic Lung Cancer in the Era of Immunotherapy: What Do We (Need to) Know? Cancers 2021, 13, 2132.
  4. Zeng, K.L.; Tseng, C.L.; Soliman, H.; Weiss, Y.; Sahgal, A.; Myrehaug, S. Stereotactic body radiotherapy (SBRT) for oligometastatic spine metastases: An overview. Front. Oncol. 2019, 9, 337.
  5. Spencer, K.L.; van der Velden, J.M.; Wong, E.; Seravalli, E.; Sahgal, A.; Chow, E.; Verlaan, J.J.; Verkooijen, H.M.; van der Linden, Y.M. Systematic Review of the Role of Stereotactic Radiotherapy for Bone Metastases. J. Natl. Cancer Inst. 2019, 111, 1023–1032.
  6. Loi, M.; Nuyttens, J.J.; Desideri, I.; Greto, D.; Livi, L. Single-fraction radiotherapy (SFRT) for bone metastases: Patient selection and perspectives. Cancer Manag. Res. 2019, 11, 9397–9408.
  7. Palma, D.A.; Olson, R.; Harrow, S.; Gaede, S.; Louie, A.V.; Haasbeek, C.; Mulroy, L.; Lock, M.; Rodrigues, G.B.; Yaremko, B.P.; et al. Stereotactic Ablative Radiotherapy for the Comprehensive Treatment of Oligometastatic Cancers: Long-Term Results of the SABR-COMET Phase II Randomized Trial. J. Clin. Oncol. 2020, 38, 2830–2838.
  8. De Ruysscher, D.; Wanders, R.; van Baardwijk, A.; Dingemans, A.M.; Reymen, B.; Houben, R.; Bootsma, G.; Pitz, C.; van Eijsden, L.; Geraedts, W.; et al. Radical treatment of non-small-cell lung cancer patients with synchronous oligometastases: Long-term results of a prospective phase II trial (Nct01282450). J. Thorac. Oncol. 2012, 7, 1547–1555.
  9. Dercle, L.; Henry, T.; Carré, A.; Paragios, N.; Deutsch, E.; Robert, C. Reinventing radiation therapy with machine learning and imaging bio-markers (radiomics): State-of-the-art, challenges and perspectives. Methods 2020, 188, 44–60.
  10. Speirs, C.K.; Grigsby, P.W.; Huang, J.; Thorstad, W.L.; Parikh, P.J.; Robinson, C.G.; Bradley, J.D. PET-based radiation therapy planning. PET Clin. 2015, 10, 27–44.
  11. Lu, W.; Wang, J.; Zhang, H.H. Computerized PET/CT image analysis in the evaluation of tumour response to therapy. Br. J. Radiol. 2015, 88, 20140625.
  12. Vergalasova, I.; Cai, J. A modern review of the uncertainties in volumetric imaging of respiratory-induced target motion in lung radiotherapy. Med. Phys. 2020, 47, e988–e1008.
  13. Papandrianos, N.; Papageorgiou, E.; Anagnostis, A.; Papageorgiou, K. Bone metastasis classification using whole body images from prostate cancer patients based on convolutional neural networks application. PLoS ONE 2020, 15, e0237213.
  14. Foster, B.; Bagci, U.; Mansoor, A.; Xu, Z.; Mollura, D.J. A review on segmentation of positron emission tomography images. Comput. Biol. Med. 2014, 50, 76–96.
  15. Takahashi, M.E.S.; Mosci, C.; Souza, E.M.; Brunetto, S.Q.; de Souza, C.; Pericole, F.V.; Lorand-Metze, I.; Ramos, C.D. Computed tomography-based skeletal segmentation for quantitative PET metrics of bone involvement in multiple myeloma. Nucl. Med. Commun. 2020, 41, 377–382.
  16. Bach Cuadra, M.; Favre, J.; Omoumi, P. Quantification in Musculoskeletal Imaging Using Computational Analysis and Machine Learning: Segmentation and Radiomics. Semin. Musculoskelet. Radiol. 2020, 24, 50–64.
  17. Ambrosini, V.; Nicolini, S.; Caroli, P.; Nanni, C.; Massaro, A.; Marzola, M.C.; Rubello, D.; Fanti, S. PET/CT imaging in different types of lung cancer: An overview. Eur. J. Radiol. 2012, 81, 988–1001.
  18. Carvalho, L.E.; Sobieranski, A.C.; von Wangenheim, A. 3D Segmentation Algorithms for Computerized Tomographic Imaging: A Systematic Literature Review. J. Digit. Imaging 2018, 31, 799–850.
  19. Domingues, I.; Pereira, G.; Martins, P.; Duarte, H.; Santos, J.; Abreu, P.H. Using deep learning techniques in medical imaging: A systematic review of applications on CT and PET. Artif. Intell. Rev. 2020, 53, 4093–4160.
  20. Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P. Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. J. Digit. Imaging 2019, 32, 582–596.
  21. Mansoor, A.; Bagci, U.; Foster, B.; Xu, Z.; Papadakis, G.Z.; Folio, L.R.; Udupa, J.K.; Mollura, D.J. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends. Radiographics 2015, 35, 1056–1076.
  22. Punn, N.S.; Agarwal, S. Modality specific U-Net variants for biomedical image segmentation: A survey. Artif. Intell. Rev. 2022, 55, 5845–5889.
  23. Saba, T. Recent advancement in cancer detection using machine learning: Systematic survey of decades, comparisons and challenges. J. Infect. Public Health 2020, 13, 1274–1289.
  24. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning; Springer: New York, NY, USA, 2001.
  25. van Timmeren, J.E.; Cester, D.; Tanadini-Lang, S.; Alkadhi, H.; Baessler, B. Radiomics in medical imaging—“how-to” guide and critical reflection. Insights Into Imaging 2020, 11, 91.
  26. Wang, H.; Zhou, Z.; Li, Y.; Chen, Z.; Lu, P.; Wang, W.; Liu, W.; Yu, L. Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images. EJNMMI Res. 2017, 7, 11.
  27. Yousefirizi, F.; Pierre, D.; Amyar, A.; Ruan, S.; Saboury, B.; Rahmim, A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin. 2022, 17, 183–212.
  28. Zhang, Z.; Sejdić, E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput. Biol. Med. 2019, 108, 354–370.
  29. Sahiner, B.; Pezeshk, A.; Hadjiiski, L.M.; Wang, X.; Drukker, K.; Cha, K.H.; Summers, R.M.; Giger, M.L. Deep learning in medical imaging and radiation therapy. Med. Phys. 2019, 46, e1–e36.
  30. Samarasinghe, G.; Jameson, M.; Vinod, S.; Field, M.; Dowling, J.; Sowmya, A.; Holloway, L. Deep learning for segmentation in radiation therapy planning: A review. J. Med. Imaging Radiat. Oncol. 2021, 65, 578–595.
  31. Faiella, E.; Santucci, D.; Calabrese, A.; Russo, F.; Vadalà, G.; Zobel, B.B.; Soda, P.; Iannello, G.; de Felice, C.; Denaro, V. Artificial Intelligence in Bone Metastases: An MRI and CT Imaging Review. Int. J. Environ. Res. Public Health 2022, 19, 1880.
  32. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71.
  33. Wang, R.; Lei, T.; Cui, R.; Zhang, B.; Meng, H.; Nandi, A.K. Medical image segmentation using deep learning: A survey. IET Image Process. 2022, 16, 1243–1267.
  34. MacManus, M.; Everitt, S. Treatment Planning for Radiation Therapy. PET Clin. 2018, 13, 43–57.
  35. Yang, W.C.; Hsu, F.M.; Yang, P.C. Precision radiotherapy for non-small cell lung cancer. J. Biomed. Sci. 2020, 27, 82.
  36. Orcajo-Rincon, J.; Muñoz-Langa, J.; Sepúlveda-Sánchez, J.M.; Fernández-Pérez, G.C.; Martínez, M.; Noriega-Álvarez, E.; Sanz-Viedma, S.; Vilanova, J.C.; Luna, A. Review of imaging techniques for evaluating morphological and functional responses to the treatment of bone metastases in prostate and breast cancer. Clin. Transl. Oncol. 2022, 24, 1290–1310.
  37. Chmelik, J.; Jakubicek, R.; Walek, P.; Jan, J.; Ourednicek, P.; Lambert, L.; Amadori, E.; Gavelli, G. Deep convolutional neural network-based segmentation and classification of difficult to define metastatic spinal lesions in 3D CT data. Med. Image Anal. 2018, 49, 76–88.
  38. Elfarra, F.-G.; Calin, M.A.; Parasca, S.V. Computer-aided detection of bone metastasis in bone scintigraphy images using parallelepiped classification method. Ann. Nucl. Med. 2019, 33, 866–874.
  39. Guo, Y.; Lin, Q.; Zhao, S.; Li, T.; Cao, Y.; Man, Z.; Zeng, X. Automated detection of lung cancer-caused metastasis by classifying scintigraphic images using convolutional neural network with residual connection and hybrid attention mechanism. Insights Into Imaging 2022, 13, 24.
  40. Hammes, J.; Täger, P.; Drzezga, A. EBONI: A Tool for Automated Quantification of Bone Metastasis Load in PSMA PET/CT. J. Nucl. Med. 2018, 59, 1070–1075.
  41. Han, S.; Oh, J.S.; Lee, J.J. Diagnostic performance of deep learning models for detecting bone metastasis on whole-body bone scan in prostate cancer. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 585–595.
  42. Hinzpeter, R.; Baumann, L.; Guggenberger, R.; Huellner, M.; Alkadhi, H.; Baessler, B. Radiomics for detecting prostate cancer bone metastases invisible in CT: A proof-of-concept study. Eur. Radiol. 2022, 32, 1823–1832.
  43. Li, T.; Lin, Q.; Guo, Y.; Zhao, S.; Zeng, X.; Man, Z.; Cao, Y.; Hu, Y. Automated detection of skeletal metastasis of lung cancer with bone scans using convolutional nuclear network. Phys. Med. Biol. 2022, 67, 015004.
  44. Lin, Q.; Luo, M.; Gao, R.; Li, T.; Zhengxing, M.; Cao, Y.; Wang, H. Deep learning based automatic segmentation of metastasis hotspots in thorax bone SPECT images. PLoS ONE 2020, 15, e0243253.
  45. Moreau, N.; Rousseau, C.; Fourcade, C.; Santini, G.; Brennan, A.; Ferrer, L.; Lacombe, M.; Guillerminet, C.; Colombié, M.; Jézéquel, P.; et al. Automatic segmentation of metastatic breast cancer lesions on 18F-FDG PET/CT longitudinal acquisitions for treatment response assessment. Cancers 2022, 14, 101.
  46. Moreau, N.; Rousseau, C.; Fourcade, C.; Santini, G.; Ferrer, L.; Lacombe, M.; Guillerminet, C.; Campone, M.; Colombié, M.; Rubeaux, M.; et al. Deep learning approaches for bone and bone lesion segmentation on 18FDG PET/CT imaging in the context of metastatic breast cancer. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020.
  47. Papandrianos, N.; Papageorgiou, E.; Anagnostis, A. Development of Convolutional Neural Networks to identify bone metastasis for prostate cancer patients in bone scintigraphy. Ann. Nucl. Med. 2020, 34, 824–832.
  48. Rachmawati, E.; Sumarna, F.R.; Jondri; Kartamihardja, A.H.S.; Achmad, A.; Shintawati, R. Bone Scan Image Segmentation based on Active Shape Model for Cancer Metastasis Detection. In Proceedings of the 2020 8th International Conference on Information and Communication Technology (ICoICT), Yogyakarta, Indonesia, 24–26 June 2020.
  49. Sato, S.; Lu, H.; Kim, H.; Murakami, S.; Ueno, M.; Terasawa, T.; Aoki, T. Enhancement of Bone Metastasis from CT Images Based on Salient Region Feature Registration. In Proceedings of the 2018 18th International Conference on Control, Automation and Systems (ICCAS), PyeongChang, Republic of Korea, 17–20 October 2018.
  50. Song, Y.; Lu, H.; Kim, H.; Murakami, S.; Ueno, M.; Terasawa, T.; Aoki, T. Segmentation of Bone Metastasis in CT Images Based on Modified HED. In Proceedings of the 2019 19th International Conference on Control, Automation and Systems (ICCAS 2019), ICC Jeju, Jeju, Republic of Korea, 11–18 October 2019; pp. 812–815.
  51. Wiese, T.; Burns, J.; Jianhua, Y.; Summers, R.M. Computer-aided detection of sclerotic bone metastases in the spine using watershed algorithm and support vector machines. In Proceedings of the 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; pp. 152–155.
  52. Zhang, J.; Huang, M.; Deng, T.; Cao, Y.; Lin, Q. Bone metastasis segmentation based on Improved U-NET algorithm. J. Phys. Conf. Ser. 2021, 1848, 012027.
  53. Hsieh, T.-C.; Liao, C.-W.; Lai, Y.-C.; Law, K.-M.; Chan, P.-K.; Kao, C.-H. Detection of Bone Metastases on Bone Scans through Image Classification with Contrastive Learning. J. Pers. Med. 2021, 11, 1248.
  53. Hsieh, T.-C.; Liao, C.-W.; Lai, Y.-C.; Law, K.-M.; Chan, P.-K.; Kao, C.-H. Detection of Bone Metastases on Bone Scans through Image Classification with Contrastive Learning. J. Pers. Med. 2021, 11, 1248. [Google Scholar] [CrossRef]
  54. Liu, S.; Feng, M.; Qiao, T.; Cai, H.; Xu, K.; Yu, X.; Jiang, W.; Lv, Z.; Wang, Y.; Li, D. Deep Learning for the Automatic Diagnosis and Analysis of Bone Metastasis on Bone Scintigrams. Cancer Manag. Res. 2022, 14, 51–65. [Google Scholar] [CrossRef]
  55. AbuBaker, A.; Ghadi, Y. A novel CAD system to automatically detect cancerous lung nodules using wavelet transform and SVM. Int. J. Electr. Comput. Eng. 2020, 10, 4745–4751. [Google Scholar] [CrossRef]
56. Apiparakoon, T.; Rakratchatakul, N.; Chantadisai, M.; Vutrapongwatana, U.; Kingpetch, K.; Sirisalipoch, S.; Rakvongthai, Y.; Chaiwatanarat, T.; Chuangsuwanich, E. MaligNet: Semisupervised Learning for Bone Lesion Instance Segmentation Using Bone Scintigraphy. IEEE Access 2020, 8, 27047–27066.
57. Biswas, B.; Ghosh, S.K.; Ghosh, A. A novel CT image segmentation algorithm using PCNN and Sobolev gradient methods in GPU frameworks. Pattern Anal. Appl. 2020, 23, 837–854.
58. Borrelli, P.; Góngora, J.L.L.; Kaboteh, R.; Ulén, J.; Enqvist, O.; Trägårdh, E.; Edenbrandt, L. Freely available convolutional neural network-based quantification of PET/CT lesions is associated with survival in patients with lung cancer. EJNMMI Phys. 2022, 9, 6.
59. Chang, C.Y.; Buckless, C.; Yeh, K.J.; Torriani, M. Automated detection and segmentation of sclerotic spinal lesions on body CTs using a deep convolutional neural network. Skelet. Radiol. 2022, 51, 391–399.
60. da Cruz, L.B.; Júnior, D.A.D.; Diniz, J.O.B.; Silva, A.C.; de Almeida, J.D.S.; de Paiva, A.C.; Gattass, M. Kidney tumor segmentation from computed tomography images using DeepLabv3+ 2.5D model. Expert Syst. Appl. 2022, 192, 116270.
61. Elsayed, O.; Mahar, K.; Kholief, M.; Khater, H.A. Automatic detection of the pulmonary nodules from CT images. In Proceedings of the 2015 SAI Intelligent Systems Conference (IntelliSys), London, UK, 10–11 November 2015.
62. Guo, Y.; Feng, Y.; Sun, J.; Zhang, N.; Lin, W.; Sa, Y.; Wang, P. Automatic Lung Tumor Segmentation on PET/CT Images Using Fuzzy Markov Random Field Model. Comput. Math. Methods Med. 2014, 2014, 401201.
63. Hussain, L.; Rathore, S.; Abbasi, A.A.; Saeed, S. Automated Lung Cancer Detection Based on Multimodal Features Extracting Strategy Using Machine Learning Techniques; SPIE Medical Imaging: San Diego, CA, USA, 2019; Volume 10948.
64. Kim, N.; Chang, J.S.; Kim, Y.B.; Kim, J.S. Atlas-based auto-segmentation for postoperative radiotherapy planning in endometrial and cervical cancers. Radiat. Oncol. 2020, 15, 106.
65. Li, L.; Zhao, X.; Lu, W.; Tan, S. Deep learning for variational multimodality tumor segmentation in PET/CT. Neurocomputing 2020, 392, 277–295.
66. Lu, Y.; Lin, J.; Chen, S.; He, H.; Cai, Y. Automatic Tumor Segmentation by Means of Deep Convolutional U-Net with Pre-Trained Encoder in PET Images. IEEE Access 2020, 8, 113636–113648.
67. Markel, D.; Caldwell, C.; Alasti, H.; Soliman, H.; Ung, Y.; Lee, J.; Sun, A. Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in 18-FDG PET/CT. Int. J. Mol. Imaging 2013, 2013, 980769.
68. Moussallem, M.; Valette, P.J.; Traverse-Glehen, A.; Houzard, C.; Jegou, C.; Giammarile, F. New strategy for automatic tumor segmentation by adaptive thresholding on PET/CT images. J. Appl. Clin. Med. Phys. 2012, 13, 3875.
69. Naqiuddin, M.; Sofia, N.N.; Isa, I.S.; Sulaiman, S.N.; Karim, N.K.A.; Shuaib, I.L. Lesion demarcation of CT-scan images using image processing technique. In Proceedings of the 2018 8th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Penang, Malaysia, 23–25 November 2018.
70. Perk, T.; Chen, S.; Harmon, S.; Lin, C.; Bradshaw, T.; Perlman, S.; Liu, G.; Jeraj, R. A statistically optimized regional thresholding method (SORT) for bone lesion detection in 18F-NaF PET/CT imaging. Phys. Med. Biol. 2018, 63, 225018.
71. Protonotarios, N.E.; Katsamenis, I.; Sykiotis, S.; Dikaios, N.; Kastis, G.A.; Chatziioannou, S.N.; Metaxas, M.; Doulamis, N.; Doulamis, A. A few-shot U-Net deep learning model for lung cancer lesion segmentation via PET/CT imaging. Biomed. Phys. Eng. Express 2022, 8, 025019.
72. Rao, C.; Pai, S.; Hadzic, I.; Zhovannik, I.; Bontempi, D.; Dekker, A.; Teuwen, J.; Traverso, A. Oropharyngeal Tumour Segmentation Using Ensemble 3D PET-CT Fusion Networks for the HECKTOR Challenge; Springer International Publishing: Cham, Switzerland, 2021.
73. Sarker, P.; Shuvo, M.M.H.; Hossain, Z.; Hasan, S. Segmentation and classification of lung tumor from 3D CT image using K-means clustering algorithm. In Proceedings of the 2017 4th International Conference on Advances in Electrical Engineering (ICAEE), Dhaka, Bangladesh, 28–30 September 2017.
74. Tian, H.; Xiang, D.; Zhu, W.; Shi, F.; Chen, X. Fully convolutional network with sparse feature-maps composition for automatic lung tumor segmentation from PET images. SPIE Med. Imaging 2020, 11313, 1131310.
75. Xue, Z.; Li, P.; Zhang, L.; Lu, X.; Zhu, G.; Shen, P.; Shah, S.A.A.; Bennamoun, M. Multi-Modal Co-Learning for Liver Lesion Segmentation on PET-CT Images. IEEE Trans. Med. Imaging 2021, 40, 3531–3542.
76. Yang, B.; Xiang, D.; Yu, F.; Chen, X. Lung tumor segmentation based on the multi-scale template matching and region growing. SPIE Med. Imaging 2018, 10578, 105782Q.
77. Zhang, Y.; He, S.; Wa, S.; Zong, Z.; Lin, J.; Fan, D.; Fu, J.; Lv, C. Symmetry GAN Detection Network: An Automatic One-Stage High-Accuracy Detection Network for Various Types of Lesions on CT Images. Symmetry 2022, 14, 234.
78. Zhao, X.; Li, L.; Lu, W.; Tan, S. Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network. Phys. Med. Biol. 2019, 64, 015011.
79. Chen, J.; Li, Y.; Luna, L.P.; Chung, H.W.; Rowe, S.P.; Du, Y.; Solnes, L.B.; Frey, E.C. Learning fuzzy clustering for SPECT/CT segmentation via convolutional neural networks. Med. Phys. 2021, 48, 3860–3877.
80. Yousefirizi, F.; Rahmim, A. GAN-Based Bi-Modal Segmentation Using Mumford–Shah Loss: Application to Head and Neck Tumors in PET-CT Images; Springer International Publishing: Cham, Switzerland, 2021.
81. Dong, R.; Lu, H.; Kim, H.; Aoki, T.; Zhao, Y.; Zhao, Y. An Interactive Technique of Fast Vertebral Segmentation for Computed Tomography Images with Bone Metastasis. In Proceedings of the 2nd International Conference on Biomedical Signal and Image Processing, Kitakyushu, Japan, 23–25 August 2017.
82. Fränzle, A.; Sumkauskaite, M.; Hillengass, J.; Bäuerle, T.; Bendl, R. Fully automated shape model positioning for bone segmentation in whole-body CT scans. J. Phys. Conf. Ser. 2014, 489, 012029.
83. Hanaoka, S.; Masutani, Y.; Nemoto, M.; Nomura, Y.; Miki, S.; Yoshikawa, T.; Hayashi, N.; Ohtomo, K.; Shimizu, A. Landmark-guided diffeomorphic demons algorithm and its application to automatic segmentation of the whole spine and pelvis in CT images. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 413–430.
84. Hu, Q.; de F. Souza, L.F.; Holanda, G.B.; Alves, S.S.A.; dos S. Silva, F.H.; Han, T.; Rebouças Filho, P.P. An effective approach for CT lung segmentation using mask region-based convolutional neural networks. Artif. Intell. Med. 2020, 103, 101792.
85. Lindgren Belal, S.; Sadik, M.; Kaboteh, R.; Enqvist, O.; Ulén, J.; Poulsen, M.H.; Simonsen, J.; Høilund-Carlsen, P.F.; Edenbrandt, L.; Trägårdh, E. Deep learning for segmentation of 49 selected bones in CT scans: First step in automated PET/CT-based 3D quantification of skeletal metastases. Eur. J. Radiol. 2019, 113, 89–95.
86. Noguchi, S.; Nishio, M.; Yakami, M.; Nakagomi, K.; Togashi, K. Bone segmentation on whole-body CT using convolutional neural network with novel data augmentation techniques. Comput. Biol. Med. 2020, 121, 103767.
87. Polan, D.F.; Brady, S.L.; Kaufman, R.A. Tissue segmentation of computed tomography images using a Random Forest algorithm: A feasibility study. Phys. Med. Biol. 2016, 61, 6553–6569.
88. Ruiz-España, S.; Domingo, J.; Díaz-Parra, A.; Dura, E.; D’Ocón-Alcañiz, V.; Arana, E.; Moratal, D. Automatic segmentation of the spine by means of a probabilistic atlas with a special focus on ribs suppression. Med. Phys. 2017, 44, 4695–4707.
89. Arends, S.R.S.; Savenije, M.H.F.; Eppinga, W.S.C.; van der Velden, J.M.; van den Berg, C.A.T.; Verhoeff, J.J.C. Clinical utility of convolutional neural networks for treatment planning in radiotherapy for spinal metastases. Phys. Imaging Radiat. Oncol. 2022, 21, 42–47.
90. Feng, X.; Bernard, M.E.; Hunter, T.; Chen, Q. Improving accuracy and robustness of deep convolutional neural network based thoracic OAR segmentation. Phys. Med. Biol. 2020, 65, 07NT01.
91. Fritscher, K.D.; Peroni, M.; Zaffino, P.; Spadea, M.F.; Schubert, R.; Sharp, G. Automatic segmentation of head and neck CT images for radiotherapy treatment planning using multiple atlases, statistical appearance models, and geodesic active contours. Med. Phys. 2014, 41, 051910.
92. Ibragimov, B.; Toesca, D.A.S.; Chang, D.T.; Yuan, Y.; Koong, A.C.; Xing, L.; Vogelius, I.R. Deep learning for identification of critical regions associated with toxicities after liver stereotactic body radiation therapy. Med. Phys. 2020, 47, 3721–3731.
93. Lin, X.W.; Li, N.; Qi, Q. Organs-At-Risk Segmentation in Medical Imaging Based on the U-Net with Residual and Attention Mechanisms. In Proceedings of the 2021 IEEE 23rd International Conference on High Performance Computing & Communications (HPCC/DSS/SmartCity/DependSys), Haikou, China, 20–22 December 2021.
94. Liu, Z.K.; Liu, X.; Xiao, B.; Wang, S.B.; Miao, Z.; Sun, Y.L.; Zhang, F.Q. Segmentation of organs-at-risk in cervical cancer CT images with a convolutional neural network. Phys. Med. 2020, 69, 184–191.
95. Nemoto, T.; Futakami, N.; Yagi, M.; Kumabe, A.; Takeda, A.; Kunieda, E.; Shigematsu, N. Efficacy evaluation of 2D, 3D U-Net semantic segmentation and atlas-based segmentation of normal lungs excluding the trachea and main bronchi. J. Radiat. Res. 2020, 61, 257–264.
96. Nguyen, C.T.; Havlicek, J.P.; Chakrabarty, J.H.; Duong, Q.; Vesely, S.K. Towards automatic 3D bone marrow segmentation. In Proceedings of the 2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), Santa Fe, NM, USA, 6–8 March 2016.
97. Yusufaly, T.; Miller, A.; Medina-Palomo, A.; Williamson, C.W.; Nguyen, H.; Lowenstein, J.; Leath, C.A., III; Xiao, Y.; Moore, K.L.; Moxley, K.M.; et al. A Multi-atlas Approach for Active Bone Marrow Sparing Radiation Therapy: Implementation in the NRG-GY006 Trial. Int. J. Radiat. Oncol. Biol. Phys. 2020, 108, 1240–1247.
98. Xiong, X.; Smith, B.J.; Graves, S.A.; Sunderland, J.J.; Graham, M.M.; Gross, B.A.; Buatti, J.M.; Beichel, R.R. Quantification of uptake in pelvis F-18 FLT PET-CT images using a 3D localization and segmentation CNN. Med. Phys. 2022, 49, 1585–1598.
99. Nemoto, T.; Futakami, N.; Yagi, M.; Kunieda, E.; Akiba, T.; Takeda, A.; Shigematsu, N. Simple low-cost approaches to semantic segmentation in radiation therapy planning for prostate cancer using deep learning with non-contrast planning CT images. Phys. Med. 2020, 78, 93–100.
100. Tsujimoto, M.; Teramoto, A.; Ota, S.; Toyama, H.; Fujita, H. Automated segmentation and detection of increased uptake regions in bone scintigraphy using SPECT/CT images. Ann. Nucl. Med. 2018, 32, 182–190.
101. Slattery, A. Validating an image segmentation program devised for staging lymphoma. Australas. Phys. Eng. Sci. Med. 2017, 40, 799–809.
102. Martínez, F.; Romero, E.; Dréan, G.; Simon, A.; Haigron, P.; de Crevoisier, R.; Acosta, O. Segmentation of pelvic structures for planning CT using a geometrical shape model tuned by a multi-scale edge detector. Phys. Med. Biol. 2014, 59, 1471–1484.
103. Ninomiya, K.; Arimura, H.; Sasahara, M.; Hirose, T.; Ohga, S.; Umezu, Y.; Honda, H.; Sasaki, T. Bayesian delineation framework of clinical target volumes for prostate cancer radiotherapy using an anatomical-features-based machine learning technique. In Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling; Fei, B., Webster, R.J., Eds.; SPIE: Bellingham, WA, USA, 2018.
104. Men, K.; Dai, J.; Li, Y. Automatic segmentation of the clinical target volume and organs at risk in the planning CT for rectal cancer using deep dilated convolutional neural networks. Med. Phys. 2017, 44, 6377–6389.
105. Ding, Y.; Chen, Z.; Wang, Z.; Wang, X.; Hu, D.; Ma, P.; Ma, C.; Wei, W.; Li, X.; Xue, X.; et al. Three-dimensional deep neural network for automatic delineation of cervical cancer in planning computed tomography images. J. Appl. Clin. Med. Phys. 2022, 23, e13566.
106. Sartor, H.; Minarik, D.; Enqvist, O.; Ulén, J.; Wittrup, A.; Bjurberg, M.; Trägårdh, E. Auto-segmentations by convolutional neural network in cervical and anorectal cancer with clinical structure sets as the ground truth. Clin. Transl. Radiat. Oncol. 2020, 25, 37–45.
107. Papandrianos, N.; Papageorgiou, E.; Anagnostis, A.; Feleki, A. A deep-learning approach for diagnosis of metastatic breast cancer in bones from whole-body scans. Appl. Sci. 2020, 10, 997.
108. Pi, Y.; Zhao, Z.; Xiang, Y.; Li, Y.; Cai, H.; Yi, Z. Automated diagnosis of bone metastasis based on multi-view bone scans using attention-augmented deep neural networks. Med. Image Anal. 2020, 65, 101784.
109. Zhou, H.; Dong, D.; Chen, B.; Fang, M.; Cheng, Y.; Gan, Y.; Zhang, R.; Zhang, L.; Zang, Y.; Liu, Z.; et al. Diagnosis of Distant Metastasis of Lung Cancer: Based on Clinical and Radiomic Features. Transl. Oncol. 2018, 11, 31–36.
110. Zhang, J.; Ma, G.; Cheng, J.; Song, S.; Zhang, Y.; Shi, L.Q. Diagnostic classification of solitary pulmonary nodules using support vector machine model based on 2-[18F]fluoro-2-deoxy-D-glucose PET/computed tomography texture features. Nucl. Med. Commun. 2020, 41, 560–566.
111. Lou, B.; Doken, S.; Zhuang, T.; Wingerter, D.; Gidwani, M.; Mistry, N.; Ladic, L.; Kamen, A.; Abazeed, M.E. An image-based deep learning framework for individualising radiotherapy dose: A retrospective analysis of outcome prediction. Lancet Digit. Health 2019, 1, e136–e147.
112. Mao, X.; Pineau, J.; Keyes, R.; Enger, S.A. RapidBrachyDL: Rapid Radiation Dose Calculations in Brachytherapy Via Deep Learning. Int. J. Radiat. Oncol. Biol. Phys. 2020, 108, 802–812.
113. LabelMe Annotation Tool. 2022. Available online: http://labelme2.csail.mit.edu/Release3.0/ (accessed on 12 July 2022).
114. Alzubaidi, L.; Al-Amidie, M.; Al-Asadi, A.; Humaidi, A.J.; Al-Shamma, O.; Fadhel, M.A.; Zhang, J.; Santamaría, J.; Duan, Y. Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data. Cancers 2021, 13, 1590.
115. Zhang, H.; Cisse, M.; Dauphin, Y.N.; Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv 2017, arXiv:1710.09412.
116. Yun, S.; Han, D.; Oh, S.J.; Chun, S.; Choe, J.; Yoo, Y. CutMix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019.
117. Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO series in 2021. arXiv 2021, arXiv:2107.08430.
118. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
119. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015.
120. Isensee, F.; Jaeger, P.F.; Kohl, S.A.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211.
121. Mah, P.; Reeves, T.E.; McDavid, W.D. Deriving Hounsfield units using grey levels in cone beam computed tomography. Dentomaxillofac. Radiol. 2010, 39, 323–335.
122. Phan, A.-C.; Vo, V.-Q.; Phan, T.-C. A Hounsfield value-based approach for automatic recognition of brain haemorrhage. J. Inf. Telecommun. 2019, 3, 196–209.
123. Sheen, H.; Shin, H.-B.; Kim, J.Y. Comparison of radiomics prediction models for lung metastases according to four semiautomatic segmentation methods in soft-tissue sarcomas of the extremities. J. Korean Phys. Soc. 2022, 80, 247–256.
124. Horikoshi, H.; Kikuchi, A.; Onoguchi, M.; Sjöstrand, K.; Edenbrandt, L. Computer-aided diagnosis system for bone scintigrams from Japanese patients: Importance of training database. Ann. Nucl. Med. 2012, 26, 622–626.
125. Alarifi, A.; Alwadain, A. Computer-aided cancer classification system using a hybrid level-set image segmentation. Measurement 2019, 148, 106864.
Figure 1. Inclusion and exclusion of articles for the review.
Figure 2. Analysis of characteristics of included articles. (a) Distribution of articles according to the image modality; (b) Distribution of articles according to the method; (c) Distribution of articles over years; (d) Distribution of evaluation metrics; (e) Countries of first authors; (f) Distribution of cancer type.
Table 1. Papers Included in the Review.
Area of the Study | Purpose of the Study | Reference | No. of Papers
Reviews/Comparison of methods | Computerized PET/CT image analysis in the evaluation of tumors | [11] | 1
 | Machine learning techniques in medical imaging | [19,20,22,27–29,33] | 7
 | Segmentation methods for radiology images | [14,16,18,21,23] | 5
 | Radiation therapy treatments for metastases | [4,5,6] | 3
 | Radiation therapy and planning | [9,10,12,34,35] | 5
 | Metastases segmentation | [26] | 1
 | Imaging techniques | [17,36] | 2
 | Radiomics in medical imaging | [25] | 1
Segmentation | Metastases | [37–54] | 18
 | Tumor | [2,55–80] | 27
 | Organ(s)/Organs-at-Risk (OARs) | [81–102] | 22
 | Target volume/OARs + target volume | [103–106] | 4
Classification | Metastases | [13,107–109] | 4
 | Tumor | [110,111] | 2
Total | | | 102
