Search Results (7)

Search Parameters:
Keywords = nasopharyngeal carcinoma segmentation

16 pages, 907 KiB  
Systematic Review
Deep Learning for Nasopharyngeal Carcinoma Segmentation in Magnetic Resonance Imaging: A Systematic Review and Meta-Analysis
by Chih-Keng Wang, Ting-Wei Wang, Ya-Xuan Yang and Yu-Te Wu
Bioengineering 2024, 11(5), 504; https://doi.org/10.3390/bioengineering11050504 - 17 May 2024
Cited by 2 | Viewed by 2836
Abstract
Nasopharyngeal carcinoma is a significant health challenge that is particularly prevalent in Southeast Asia and North Africa. MRI is the preferred diagnostic tool for NPC due to its superior soft tissue contrast. The accurate segmentation of NPC in MRI is crucial for effective treatment planning and prognosis. We conducted a search across PubMed, Embase, and Web of Science from inception up to 20 March 2024, adhering to the PRISMA 2020 guidelines. Eligibility criteria focused on studies utilizing DL for NPC segmentation in adults via MRI. Data extraction and meta-analysis were conducted to evaluate the performance of DL models, primarily measured by Dice scores. We assessed methodological quality using the CLAIM and QUADAS-2 tools, and statistical analysis was performed using random effects models. The analysis incorporated 17 studies, demonstrating a pooled Dice score of 78% for DL models (95% confidence interval: 74% to 83%), indicating a moderate to high segmentation accuracy by DL models. Significant heterogeneity and publication bias were observed among the included studies. Our findings reveal that DL models, particularly convolutional neural networks, offer moderately accurate NPC segmentation in MRI. This advancement holds the potential for enhancing NPC management, necessitating further research toward integration into clinical practice.
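The pooled Dice score here comes from a random-effects meta-analysis. As a minimal sketch of the underlying computation, a DerSimonian-Laird estimate can be written in a few lines; the per-study Dice values and variances below are hypothetical stand-ins, not data from the reviewed studies:

```python
import math

def pooled_dice_random_effects(dice, var):
    """DerSimonian-Laird random-effects pooling of per-study Dice scores.

    dice: per-study Dice estimates; var: their within-study variances.
    Returns (pooled estimate, 95% CI lower bound, 95% CI upper bound).
    """
    k = len(dice)
    w = [1.0 / v for v in var]                                    # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, dice)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, dice))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                            # between-study variance
    w_star = [1.0 / (v + tau2) for v in var]                      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, dice)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical Dice scores and variances for five studies (illustration only)
dice = [0.72, 0.78, 0.81, 0.75, 0.84]
var = [0.001, 0.002, 0.0015, 0.001, 0.002]
est, lo, hi = pooled_dice_random_effects(dice, var)
print(f"pooled Dice = {est:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

The between-study variance tau² is what distinguishes the random-effects model from a fixed-effect pooling; when heterogeneity is significant, as reported in this review, tau² widens the confidence interval.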

14 pages, 1016 KiB  
Article
SECP-Net: SE-Connection Pyramid Network for Segmentation of Organs at Risk with Nasopharyngeal Carcinoma
by Zexi Huang, Xin Yang, Sijuan Huang and Lihua Guo
Bioengineering 2023, 10(10), 1119; https://doi.org/10.3390/bioengineering10101119 - 24 Sep 2023
Cited by 3 | Viewed by 1729
Abstract
Nasopharyngeal carcinoma (NPC) is a malignant tumor. The accurate, automatic segmentation of organs at risk (OARs) in computed tomography (CT) images is clinically significant. In recent years, deep learning models represented by U-Net have been widely applied to medical image segmentation tasks, which can help reduce doctors’ workload. In the OAR segmentation of NPC, the sizes of the OARs vary, and some of their volumes are small. Traditional deep neural networks underperform in segmentation due to insufficient use of global and multi-size information. Therefore, a new SE-Connection Pyramid Network (SECP-Net) is proposed. To extract global and multi-size information, SECP-Net designs an SE-connection module and a pyramid structure that improve segmentation performance, especially for small organs. SECP-Net also uses an auto-context cascaded structure to further refine the segmentation results. Comparative experiments were conducted between SECP-Net and other recent methods on a private dataset of head-and-neck CT images and a public liver dataset. Five-fold cross-validation was used to evaluate performance on two metrics, Dice and Jaccard similarity. The experimental results show that SECP-Net achieves state-of-the-art performance in these two challenging tasks.
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Imaging)
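The SE-connection module builds on squeeze-and-excitation channel reweighting: globally pool each channel, pass the descriptors through a small bottleneck, and gate each channel by a sigmoid weight. A minimal numpy sketch of that base operation, with randomly initialized weights standing in for learned ones (SECP-Net's actual module and pyramid structure are more involved):

```python
import numpy as np

def se_reweight(x, w1, w2):
    """Squeeze-and-excitation reweighting of one (C, H, W) feature map.

    Squeeze: global average pool per channel. Excitation: two small fully
    connected layers (ReLU then sigmoid) produce a per-channel gate in (0, 1).
    """
    squeeze = x.mean(axis=(1, 2))                 # (C,) channel descriptors
    hidden = np.maximum(0.0, w1 @ squeeze)        # (C//r,) ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # (C,) sigmoid gates
    return x * gate[:, None, None]                # rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))                # toy feature map: 8 channels
w1 = rng.standard_normal((2, 8)) * 0.1            # reduction ratio r = 4
w2 = rng.standard_normal((8, 2)) * 0.1
y = se_reweight(x, w1, w2)                        # same shape, channels rescaled
```

Because the gate lies in (0, 1), the module can only attenuate channels, letting the network learn which channels carry the global context useful for small-organ segmentation.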

14 pages, 2462 KiB  
Article
Lightweight Compound Scaling Network for Nasopharyngeal Carcinoma Segmentation from MR Images
by Yi Liu, Guanghui Han and Xiujian Liu
Sensors 2022, 22(15), 5875; https://doi.org/10.3390/s22155875 - 5 Aug 2022
Cited by 11 | Viewed by 2527
Abstract
Nasopharyngeal carcinoma (NPC) is a category of tumours with a high incidence in the head and neck. To treat nasopharyngeal cancer, doctors invariably need to perform focal segmentation. However, manual segmentation is time-consuming and laborious for doctors, and the existing automatic segmentation methods require large computing resources, making them unaffordable for some small and medium-sized hospitals. To enable small and medium-sized hospitals with limited computational resources to run the model smoothly, and to improve segmentation accuracy, we propose a new LW-UNet network. The network utilises lightweight modules to form the Compound Scaling Encoder and combines the benefits of UNet to make the model both lightweight and accurate. Our model achieves high accuracy, with a Dice coefficient of 0.813, 3.55 M parameters, and 7.51 G FLOPs, within 0.1 s (testing time on GPU), the best result compared with four other state-of-the-art models.
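The 0.813 reported here is a Dice coefficient, the standard overlap metric for segmentation: twice the intersection of predicted and ground-truth masks over the sum of their sizes. A minimal sketch on toy binary masks (illustrative arrays, not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; 1.0 is perfect overlap.

    eps avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1], [0, 0]])    # toy predicted mask
target = np.array([[1, 0], [1, 0]])  # toy ground-truth mask
score = dice_coefficient(pred, target)  # 1 shared pixel, 2 + 2 foreground pixels
```

With one overlapping pixel out of two foreground pixels in each mask, the score is 2·1/(2+2) = 0.5.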

15 pages, 7173 KiB  
Article
CAFS: An Attention-Based Co-Segmentation Semi-Supervised Method for Nasopharyngeal Carcinoma Segmentation
by Yitong Chen, Guanghui Han, Tianyu Lin and Xiujian Liu
Sensors 2022, 22(13), 5053; https://doi.org/10.3390/s22135053 - 5 Jul 2022
Cited by 1 | Viewed by 2929
Abstract
Accurate segmentation of nasopharyngeal carcinoma is essential to its treatment effect. However, existing deep learning-based segmentation methods face several challenges. First, the acquisition of labeled data is challenging. Second, nasopharyngeal carcinoma is similar to the surrounding tissues. Third, the shape of nasopharyngeal carcinoma is complex. These challenges make the segmentation of nasopharyngeal carcinoma difficult. This paper proposes a novel semi-supervised method named CAFS for automatic segmentation of nasopharyngeal carcinoma. CAFS addresses the above challenges through three mechanisms: a teacher–student cooperative segmentation mechanism, an attention mechanism, and a feedback mechanism. CAFS can use only a small amount of labeled nasopharyngeal carcinoma data to segment the cancer region accurately. The average DSC value of CAFS is 0.8723 on the nasopharyngeal carcinoma segmentation task. Moreover, CAFS outperformed the state-of-the-art nasopharyngeal carcinoma segmentation methods in the comparison experiment, achieving the highest values of DSC, Jaccard, and precision. In particular, the DSC value of CAFS is 7.42% higher than the highest DSC value among the state-of-the-art methods.
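Teacher–student co-training of this kind commonly maintains the teacher as an exponential moving average (EMA) of the student's weights, with the teacher's predictions on unlabeled scans serving as pseudo-labels. A toy sketch of the EMA update only, with plain lists standing in for network weights; this is a generic semi-supervised ingredient, not CAFS's specific mechanism, which also involves attention and feedback:

```python
def ema_update(teacher, student, alpha=0.99):
    """Move each teacher weight toward the student's: t <- a*t + (1 - a)*s.

    A large alpha makes the teacher a slowly changing, smoothed copy of the
    student, which stabilizes the pseudo-labels it produces.
    """
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]

teacher = [0.0, 0.0]     # toy "weights" of the teacher network
student = [1.0, -1.0]    # toy "weights" of the student network
for _ in range(3):       # a few training steps with a fixed student
    teacher = ema_update(teacher, student, alpha=0.9)
```

After three steps with alpha = 0.9, each teacher weight has moved a fraction 1 − 0.9³ = 0.271 of the way toward the student's value.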

11 pages, 3587 KiB  
Article
Machine Learning Based on MRI DWI Radiomics Features for Prognostic Prediction in Nasopharyngeal Carcinoma
by Qiyi Hu, Guojie Wang, Xiaoyi Song, Jingjing Wan, Man Li, Fan Zhang, Qingling Chen, Xiaoling Cao, Shaolin Li and Ying Wang
Cancers 2022, 14(13), 3201; https://doi.org/10.3390/cancers14133201 - 30 Jun 2022
Cited by 16 | Viewed by 2868
Abstract
Purpose: This study aimed to explore the predictive efficacy of radiomics analyses based on readout-segmented echo-planar diffusion-weighted imaging (RESOLVE-DWI) for prognosis evaluation in nasopharyngeal carcinoma, in order to provide further information for clinical decision making and intervention. Methods: A total of 154 patients with untreated NPC confirmed by pathological examination were enrolled, and pretreatment magnetic resonance imaging (MRI)—including diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) maps, T2-weighted imaging (T2WI), and contrast-enhanced T1-weighted imaging (CE-T1WI)—was collected. The Random Forest (RF) algorithm was used to select radiomics features and establish the machine-learning models. Five models were constructed: model 1 (DWI + ADC), model 2 (T2WI + CE-T1WI), model 3 (DWI + ADC + T2WI), model 4 (DWI + ADC + CE-T1WI), and model 5 (DWI + ADC + T2WI + CE-T1WI). The average area under the curve (AUC) on the validation set was determined in order to compare the predictive efficacy for prognosis evaluation. Results: After adjusting the parameters, RF machine-learning models based on the imaging features extracted from the different sequence combinations were obtained. The validation set of model 1 (DWI + ADC) yielded the highest average AUC of 0.80 (95% CI: 0.79–0.81). The average AUCs of the model 2, 3, 4, and 5 validation sets were 0.72 (95% CI: 0.71–0.74), 0.66 (95% CI: 0.64–0.68), 0.74 (95% CI: 0.73–0.75), and 0.75 (95% CI: 0.74–0.76), respectively. Conclusion: A radiomics model derived from the MRI DWI of patients with nasopharyngeal carcinoma was generated to evaluate the risk of recurrence and metastasis. The model based on MRI DWI can provide an alternative approach for survival estimation and can reveal more information for clinical decision making and intervention.
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
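The AUC used to rank the five models can be computed directly from predicted scores as the Mann-Whitney rank statistic: the probability that a randomly chosen positive case (here, a patient who recurs or metastasizes) scores higher than a randomly chosen negative one. A sketch on toy scores, not the study's data:

```python
def auc(scores, labels):
    """AUC as P(score of a positive > score of a negative), ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy recurrence-risk scores: higher score should mean higher risk (label 1)
scores = [0.9, 0.8, 0.3, 0.6, 0.2]
labels = [1, 1, 0, 1, 0]
value = auc(scores, labels)  # every positive outscores every negative here
```

With perfect separation the statistic is 1.0; an AUC of 0.80, as for model 1 above, means a positive case outscores a negative one 80% of the time.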

17 pages, 2738 KiB  
Article
DCNet: Densely Connected Deep Convolutional Encoder–Decoder Network for Nasopharyngeal Carcinoma Segmentation
by Yang Li, Guanghui Han and Xiujian Liu
Sensors 2021, 21(23), 7877; https://doi.org/10.3390/s21237877 - 26 Nov 2021
Cited by 12 | Viewed by 2821
Abstract
Nasopharyngeal carcinoma segmentation in magnetic resonance imaging (MRI) is vital to radiotherapy. Exact dose delivery hinges on an accurate delineation of the gross tumor volume (GTV). However, the large-scale variation in tumor volume is intractable, and current models mostly perform unsatisfactorily, producing indistinct, blurred boundaries when segmenting tiny tumor volumes. To address the problem, we propose a densely connected deep convolutional network consisting of an encoder network and a corresponding decoder network, which extracts high-level semantic features from different levels and concurrently uses low-level spatial features to obtain fine-grained segmentation masks. A skip-connection architecture is included and modified to propagate spatial information to the decoder network. Preliminary experiments were conducted on 30 patients. Experimental results show our model outperforms all baseline models, with improvements of 4.17%. An ablation study was performed, and the effectiveness of the novel loss function was validated.
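The skip-connection idea in such an encoder–decoder is simple: an encoder feature map is carried across and concatenated channel-wise with the upsampled decoder feature at the same spatial resolution, so low-level spatial detail reaches the decoder. A shape-only numpy sketch using nearest-neighbour upsampling (DCNet's modified skip connections differ in detail):

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def skip_connect(decoder_feat, encoder_feat):
    """Upsample the decoder feature and concatenate the encoder feature
    along the channel axis, as in U-Net-style skip connections."""
    up = upsample2x(decoder_feat)          # match the encoder's resolution
    return np.concatenate([up, encoder_feat], axis=0)

encoder_feat = np.zeros((16, 8, 8))        # low-level spatial features
decoder_feat = np.zeros((32, 4, 4))        # high-level semantic features
fused = skip_connect(decoder_feat, encoder_feat)  # (48, 8, 8)
```

The fused tensor carries both the decoder's semantic channels and the encoder's spatial channels, which is what lets the decoder recover fine-grained boundaries.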

15 pages, 608 KiB  
Review
Head–Neck Cancer Delineation
by Enrico Antonio Lo Faso, Orazio Gambino and Roberto Pirrone
Appl. Sci. 2021, 11(6), 2721; https://doi.org/10.3390/app11062721 - 18 Mar 2021
Cited by 4 | Viewed by 2687
Abstract
Head–Neck Cancer (HNC) has a relevant impact on the oncology patient population, and for this reason the present review is dedicated to this type of neoplastic disease. In particular, a collection of methods aimed at tumor delineation is presented, because this is a fundamental task for efficient radiotherapy. Such a segmentation task is often performed on uni-modal data (usually Positron Emission Tomography (PET)) even though multi-modal images are preferred (PET–Computerized Tomography (CT)/PET–Magnetic Resonance (MR)). Datasets can be private or freely provided by online repositories on the web. The adopted techniques belong either to well-known image processing/computer-vision algorithms or to the newest deep learning/artificial intelligence approaches. All these aspects are analyzed in the present review, and a comparison among the various approaches is performed. The authors conclude that, despite the encouraging results of computerized approaches, their performance still falls short of manual tumor delineation.
(This article belongs to the Special Issue Advanced Image Analysis and Processing for Biomedical Applications)