Abstract
Artificial intelligence (AI) is an interdisciplinary field that encompasses a wide range of computer science disciplines, including image recognition, machine learning, human-computer interaction, robotics and so on. Recently, AI, especially deep learning algorithms, has shown excellent performance in image recognition, automatically performing quantitative evaluation of complex medical image features to improve diagnostic accuracy and efficiency. AI is being applied ever more widely and deeply in medical diagnosis, treatment and prognosis. Nasopharyngeal carcinoma (NPC) occurs frequently in southern China and Southeast Asian countries and is the most common head and neck cancer in the region. Detecting and treating NPC early is crucial for a good prognosis. This paper describes the basic concepts of AI, including traditional machine learning and deep learning algorithms, and their clinical applications in detecting and assessing NPC lesions, facilitating treatment and predicting prognosis. The main limitations of current AI technologies are briefly described, including interpretability issues, privacy and security concerns and the need for large amounts of annotated data. Finally, we discuss the remaining challenges and the promising future of using AI to diagnose and treat NPC.
1. Introduction
Nasopharyngeal carcinoma (NPC), an epithelial carcinoma arising from the nasopharyngeal mucosa, is often observed at the pharyngeal recess []. Diagnosing NPC involves an endoscopy followed by an endoscopic biopsy of the suspected site [,]. Endoscopic biopsy may miss small cancers located submucosally or laterally to the pharyngeal crypt, which presents significant diagnostic challenges. Early diagnosis of NPC is difficult because of the late onset of symptoms and the concealed anatomical location. In most cases, NPC patients are diagnosed late, resulting in poor prognoses []. Owing to the swift advancement of imaging techniques and radiotherapy, local control rates have reached 95% in early NPC cases []. Although advanced radiotherapy techniques and chemotherapy strategies have improved NPC prognosis, advanced-stage patients still have dismal outcomes [,]. Thus, it is worth asking whether artificial intelligence (AI) can improve the diagnosis, therapy and prognosis prediction of NPC.
AI is a subdiscipline of computer science that seeks to understand the nature of intelligence and to create intelligent machines capable of human-like behaviors []. AI is utilized in many areas, including medicine, communication, transportation and finance, among others []. In medicine, AI is mainly used for disease diagnosis, treatment and prognosis prediction. Medical AI has two major branches: virtual and physical []. The virtual part of AI is composed of deep learning (DL) and machine learning (ML), which offer a potential way to construct robust computer-assisted approaches. The physical part of AI encompasses robots and medical devices []. Several recent studies have shown that AI can improve early diagnosis efficiency as well as the prognosis of NPC patients, through its application in diagnosis and treatment [,,].
There are some reviews on the application of AI in NPC [,]. However, AI techniques are advancing so fast that it is necessary to update these reviews frequently. In this review, we analyze and summarize the research progress and clinical application of AI technologies in the diagnosis, treatment and prognosis prediction of NPC. We provide a complete picture of the current status of AI in the main clinical areas. We also study the state of the clinical implementation of AI and the effort needed to make progress in this area. We hope that this information will be helpful to both clinicians and researchers interested in the utilization of AI in the clinical care of NPC.
2. AI and Its Technologies
In recent decades, medical imaging techniques such as ultrasound, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography-computed tomography (PET-CT) have played a key role in the early detection, diagnosis and treatment of diseases []. Recently, significant advances have been made in AI, which allows machines to automatically analyze and interpret complex data []. AI is frequently used in medical fields such as oncology, radiology and pathology, which require accurate analysis of large volumes of image data. Physicians usually detect, describe and monitor head and neck diseases by visually assessing head and neck medical images. This assessment is often based on experience and can be subjective. In contrast to qualitative reasoning, AI can make quantitative assessments by automatically recognizing imaging information []. AI, including traditional ML and DL, enables physicians to make more accurate and faster imaging diagnoses and greatly reduces workload.
Traditional ML algorithms are one family of AI approaches in medical imaging, and they rely heavily on pre-defined engineered features. These features are described by mathematical formulas (e.g., tumor texture) and can thus be quantified using computer programs. The features are entered into ML models to help physicians classify patients and make clinical decisions. Traditional ML includes a large number of established methods, such as k-nearest neighbors (KNN), support vector machines (SVM) and random forests (RF). These methods are widely used in radiology: image-processing techniques convert image data into feature vectors, which are then used to build predictive models that extract clinically relevant information from the same images. Such radiomics approaches have been evaluated in small retrospective studies attempting to predict tissue subtypes, response to specific treatments, prognosis and other information from medical images of tumors.
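As a minimal illustration of this workflow, the sketch below standardizes hand-engineered tumor features and trains an SVM classifier evaluated by AUC. The features, labels and classifier settings are placeholders chosen for illustration (scikit-learn assumed); this is not taken from any of the cited studies.

```python
# Illustrative radiomics-style pipeline: engineered features -> classical classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))     # 200 patients x 20 pre-computed engineered features
y = rng.integers(0, 2, size=200)   # binary label, e.g., benign vs. malignant (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Standardize features, then fit an SVM classifier with probability outputs.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.3f}")
```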
DL, as a subset of ML, is based on neural network structures inspired by the human brain. Traditional ML models require features to be defined and extracted from images, and their performance depends on the quality of those features. In contrast, DL algorithms do not require features to be defined in advance []; they learn features automatically and perform image classification and related tasks. This data-driven approach is more informative and practical. DL algorithms commonly used in medical image analysis and processing include the artificial neural network (ANN), deep neural network (DNN), convolutional neural network (CNN) and recurrent neural network (RNN). Currently, the CNN is the most popular type of DL architecture in the field of medical image analysis []. A CNN consists of multiple layers, usually including convolutional, pooling and fully connected layers: convolutional layers apply learned filters to local regions of the image, pooling layers aggregate the responses, and fully connected layers map the resulting high-level features to the output. The deep convolutional neural network (DCNN) uses more convolutional layers and a larger parameter space to fit large-scale datasets. U-net uses fully convolutional layers and data augmentation to obtain good accuracy with limited datasets. RNNs are particularly suited to processing sequential data. Different DL algorithms therefore have different characteristics and application scenarios.
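The following minimal PyTorch sketch illustrates the convolutional, pooling and fully connected structure described above. The layer sizes, channel counts and input dimensions are arbitrary assumptions, not an architecture from the cited literature.

```python
# Minimal CNN sketch (PyTorch): convolution -> pooling -> fully connected layers.
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A single-channel 64x64 placeholder "image" passed through the network.
logits = SimpleCNN()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```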
3. Screening of Studies
We performed a search using the following query: (“artificial intelligence” OR “machine learning” OR “deep learning”) AND (“nasopharyngeal carcinoma” OR “nasopharyngeal cancer”). Using this query, we searched Springer, Google Scholar, PubMed and Embase for research articles published in the 15 years up to March 2023. Because there are no consensus metrics or validation protocols for evaluating each model’s performance, we provide a holistic profile of the field rather than a meta-analysis. Accordingly, relatively loose inclusion and exclusion criteria were set (Table 1). A total of 76 studies were ultimately included after applying the inclusion and exclusion criteria.
Table 1.
Inclusion and exclusion criteria of the study.
Only studies applying AI techniques to NPC were selected. Table 1 shows the inclusion and exclusion criteria applied to papers in line with the purpose of our review.
4. Applications of AI to NPC
In the Lancet, a series of reviews entitled “Nasopharyngeal carcinoma” has been published every few years [,,,]. In recent years, medical AI has been gaining popularity in the research of NPC. Many researchers have devoted themselves to applying AI to NPC for tumor detection and for predicting prognosis and the efficacy of radiotherapy and chemotherapy (Figure 1).
Figure 1.
The application of AI in NPC diagnosis and treatment.
4.1. AI and NPC Diagnosis
The diagnosis of NPC is a prerequisite for appropriate treatment, which can be divided into qualitative and staging diagnoses. Currently, qualitative diagnosis of NPC is dominated by the collection of biopsy tissue during endoscopy for pathological examination. Staging diagnosis mainly depends on imaging examinations, such as CT, MRI and PET-CT.
The fiberoptic nasopharyngoscope allows magnified, direct visualization of suspicious lesions, and the surgeon can biopsy suspicious tissue with forceps passed through the scope. The biopsy tissue is then processed into paraffin sections for histological examination under the microscope, supplemented by electron microscopy or immunohistochemistry if necessary. CT scans a slice of the body of a certain thickness with an X-ray beam; the detector receives the X-rays passing through that slice, a converter turns them into digital signals and the computer reconstructs images from those signals. MRI is based on nuclear magnetic resonance: gradient magnetic fields are applied and the electromagnetic signals emitted as nuclei release energy are detected; because this energy decays differently in different structural environments, the signals can be used to map the internal structure of the body. PET-CT uses tracers to selectively reflect the metabolism of tissues and organs, capturing physiological, pathological, biochemical and metabolic changes of human tissues at the molecular level, while the CT images provide attenuation correction for the nuclear medicine images. The corrected nuclear medicine images can therefore be fully quantified, greatly improving diagnostic accuracy and combining the complementary information of functional and anatomical images.
Accurate tumor diagnosis is difficult owing to the complexity of tumor symptoms and individual differences. AI technologies can help clinicians reduce their workload and improve the interpretation of medical images, thereby improving diagnostic accuracy and efficiency.
4.1.1. AI Application in Nasopharyngoscopy
Nasopharyngoscopy allows direct observation of lesions on the nasopharyngeal wall, and physicians can analyze and screen lesion images to determine whether the lesions are associated with NPC. NPC diagnosis is currently made by visualizing suspicious tissue sites with white-light reflectance endoscopy and taking biopsies. In previous studies, researchers developed different AI models using nasopharyngeal endoscopic images to distinguish NPC from nasopharyngeal benign hyperplasia. These studies showed that the models’ detection of NPC was not significantly different from [] or even better than that of radiologists []. Mohammed et al. published three studies focusing on the detection of NPC using neural networks based on nasopharyngeal endoscopic images [,,]. In all three studies, different neural network models were used, and all achieved very good accuracy, sensitivity and specificity. Using 27,536 white-light imaging nasopharyngoscopy images, Li et al. developed a DL model for detecting NPC, reporting an accuracy of 88.7% and 88.0% on retrospective and prospective test sets, respectively [].
However, conventional white-light endoscopy tends to miss superficial mucosal lesions. To address this, Xu et al. designed and trained a Siamese DCNN that uses both white-light and narrow-band imaging endoscopic images to enhance classification performance in distinguishing NPC from non-carcinoma. They collected 4783 nasopharyngoscopy images for DL and validated the predictive power of the model on nasopharyngoscopy results. At the patient level, the overall accuracy and sensitivity of the model were 95.7% and 97.0%, respectively [].
Furthermore, distinguishing normal tissue from treated NPC is a clinical challenge. For this reason, researchers developed a DL-based platform for fiber-optic Raman diagnostics that utilizes a multi-layer, Raman-specific CNN. The optimized model distinguished NPC from control and post-treatment patients with 82.09% diagnostic accuracy. A closer look at the saliency map of the best model revealed specific Raman signatures associated with cancer-related biomolecular variations [].
4.1.2. AI Application in Pathological Biopsy
Pathological biopsy is required to diagnose NPC but remains challenging because most samples are poorly differentiated, non-keratinizing carcinomas with many admixed lymphocytes. Moreover, biopsy samples are assessed subjectively by pathologists, which is inefficient and can lead to inter-observer variability. AI techniques can automatically classify and diagnose biopsy samples, improving diagnostic accuracy and efficiency while reducing costs. Researchers trained and validated a DL model using 726 NPC biopsy specimens, reporting areas under the receiver operating characteristic curve (AUCs) of 0.9900 and 0.9848 at the patch and slide levels, respectively []. Other researchers have developed similar DL-based automated pathology diagnosis models; one such model achieved an AUC of 0.869 for NPC diagnosis on its validation dataset []. These outcomes indicate that DL algorithms can recognize NPC and help pathologists improve their efficiency and accuracy.
In conclusion, AI plays an important role in recognizing and processing images, and in tissue segmentation in NPC (Table 2). While some applications of AI have yet to be fully realized, its potential in assisting NPC diagnosis is unquestionable.
Table 2.
Summary of AI models for NPC diagnosis.
4.2. AI and NPC Therapy
Major treatments for NPC include radiotherapy, chemotherapy and other integrated approaches. The application of AI techniques in NPC treatment can help clinicians design more personalized and accurate treatment plans for patients. In NPC therapy, AI techniques are typically applied to predicting chemotherapy response and to improving the precision of the radiotherapy workflow.
4.2.1. AI Application in NPC Chemotherapy
Chemotherapy combined with radiotherapy has greatly improved the treatment of advanced NPC. Accurate pre-chemotherapy assessment can help NPC patients choose personalized treatment and improve their prognosis. In 2020, a research group developed a radiomics nomogram that integrates clinical data with radiomic features to predict the response and survival of NPC patients receiving induction chemotherapy (IC). Based on survival analysis, IC responders had a significant advantage over non-responders in terms of progression-free survival []. In a study by Yang et al., CT texture analysis was used as a basis for developing a DL model to identify responders and non-responders to NPC IC. They extracted DL features from pre-trained CNNs by transfer learning, and the best-performing model, which combined ResNet50 features with an SVM classifier, demonstrated an AUC of 0.811 []. These models could be used to predict the treatment response to IC in locally advanced NPC, and might be a practical tool in deciding treatment strategies.
A pre-trained network is a saved CNN that has previously been trained on a large dataset. If the original dataset is large and general enough, the spatial hierarchy of features learned by the pre-trained network can serve as an effective generic model of the visual world. Even when the new problem and task differ from the original task, the learned features remain transferable between problems, which is an important advantage of DL and makes it very effective for problems with small datasets.
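The sketch below illustrates this transfer-learning pattern in a generic form: an ImageNet-pre-trained ResNet50 (a recent torchvision assumed) is used as a fixed feature extractor, and a linear SVM is fitted on the extracted features. The images and labels are placeholders, and this is not the exact pipeline of Yang et al. or any other cited study.

```python
# Generic transfer-learning sketch: pre-trained CNN as a fixed feature extractor + SVM.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Load an ImageNet-pre-trained ResNet50 and drop its final classification layer.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

def extract_features(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224) tensor normalized as the backbone expects."""
    with torch.no_grad():
        return backbone(images)

# Placeholder data standing in for pre-processed image patches and labels.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,))

features = extract_features(images).numpy()   # (32, 2048) deep features
clf = SVC(kernel="linear").fit(features, labels.numpy())
print(clf.score(features, labels.numpy()))
```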
To assess the effectiveness of DL-based PET-CT radiomics for individualizing IC in advanced NPC, Peng et al. created radiomic signatures and nomograms. Based on the nomogram analysis, patients were divided into high-risk and low-risk groups, with high-risk patients benefiting from IC and low-risk patients not. Such a tool could become a novel and helpful aid for managing advanced NPC in the future [].
4.2.2. AI Application in NPC Radiotherapy
Radiotherapy is an indispensable treatment for NPC, in which tumor target segmentation and dose calculation are particularly critical. However, the overall radiotherapy planning process is always affected by image quality and the heavy workload of contouring tumor targets. Researchers have applied AI to radiotherapy planning to address these issues.
Image quality is fundamental to the whole of radiotherapy planning. However, high-quality CT images are often unavailable owing to hardware limitations and the need to limit patient radiation exposure during radiotherapy. AI can be used to enhance image quality. Tomotherapy uses megavoltage CT to verify the set-up and adapt radiotherapy, but its high noise and low contrast make the images inferior. In a study by Chen et al., synthetic kilovoltage CT was generated by using a DL approach. In the phantom study, synthetic kilovoltage CT showed significantly higher signal-to-noise ratio, image homogeneity and contrast ratio than megavoltage CT []. Li et al. used a DCNN to generate synthetic CT images based on cone-beam CT and applied the images to dose calculation for NPC []. Similarly, Wang et al. applied a DCNN to produce CT images based on T2-weighted MRI; compared with real CT, most of the soft tissue and bone areas could be accurately reconstructed in the synthetic CT []. Researchers developed an advanced DCNN architecture to generate synthetic CT images from MRI for intensity-modulated proton therapy treatment planning for NPC patients. The (3 mm/3%) gamma passing rates were above 97.32% for all synthetic CT images []. Through these methods, the image quality can be enhanced, which is conducive to tumor segmentation and dose calculation.
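As a generic illustration of how such synthetic-CT models are built, the sketch below defines a small encoder-decoder CNN (PyTorch assumed) trained with a voxel-wise L1 loss on placeholder image pairs. The cited studies use substantially deeper architectures and real paired data; nothing here reproduces their methods.

```python
# Minimal image-to-image CNN sketch: map an input slice (e.g., CBCT or MRI) to a CT-like image.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = EncoderDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # voxel-wise L1 loss against the reference CT

# Placeholder batch: 4 input slices and 4 corresponding reference CT slices.
inputs, targets = torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128)
pred = model(inputs)
loss = loss_fn(pred, targets)
loss.backward()
optimizer.step()
print(pred.shape, loss.item())
```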
In addition, single-modality images usually cannot provide enough information to accurately depict the tumor target region. Because multi-modality images provide complementary information, they enable better radiotherapy treatment plans. In 2011, one study constructed a method utilizing weighted CT-MRI registration images for NPC delineation, called “SNAKE” []. Ma et al. developed a multi-modal segmentation structure using CNNs, composed of a multi-modal CNN and a combined CNN, for automatic NPC segmentation of CT and MR images []. Chen et al. developed a novel multi-modal MRI fusion network to accurately segment NPC []. Zhao et al. presented a method for automatically segmenting NPC on dual-modality PET-CT images based on fully convolutional networks with auxiliary paths [].
In current clinical practice, targets and organs-at-risk (OARs) are normally delineated manually by clinicians on CT images, which is tedious and time-consuming. To address these issues, many automatic segmentation methods have been proposed. In one study, researchers proposed an adaptive thresholding technique based on self-organizing maps for semiautomated segmentation of NPC []. The team also developed region-growing techniques for segmenting CT images to identify NPC regions [,]. Bai et al. proposed the NPC-Seg DL algorithm for NPC segmentation using a localization-segmentation framework; evaluated online on the StructSeg-NPC dataset, it achieved an average Dice similarity coefficient (DSC) of 61.81% on the test dataset []. Daoud et al. proposed a DL-based CNN model with a two-stage segmentation strategy that determines the final NPC segmentation by integrating results obtained from coronal, axial and sagittal images; the DSCs of their system were 0.87, 0.85 and 0.91 for the axial, coronal and sagittal views, respectively []. Li et al. applied the U-net DL model to NPC segmentation; after training, the overall DSC for the primary tumor was 74.00% []. In addition, several researchers have developed improved U-net-based models to delineate the NPC target volume, achieving good DSCs (0.827–0.84) after training [,,]. Men et al. constructed an end-to-end deep deconvolutional neural network (DDNN) to segment the nasopharyngeal gross tumor volume and clinical target volume and compared its performance with VGG-16. The DSC values of the DDNN were 80.9% for the gross tumor volume and 82.6% for the clinical target volume, compared with 72.3% and 73.7%, respectively, for VGG-16 [].
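Since the DSC is the common evaluation metric in these segmentation studies, a minimal implementation is sketched below on toy binary masks: twice the overlap between the predicted and reference masks divided by their total size.

```python
# Minimal Dice similarity coefficient (DSC) implementation on binary masks.
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """pred and truth are binary masks of the same shape (0 = background, 1 = tumor)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: two overlapping square "tumor" masks on a 64x64 slice.
pred = np.zeros((64, 64)); pred[10:30, 10:30] = 1
truth = np.zeros((64, 64)); truth[15:35, 15:35] = 1
print(f"DSC = {dice_similarity_coefficient(pred, truth):.3f}")
```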
MRI images provide better soft tissue contrast than CT images, which facilitates accurate segmentation of the tumor target. Many studies have built algorithms for NPC segmentation on MRI images. NPC contours were determined from MRI images using a nearest-neighbor graph model and distance-regularized level set evolution [,]. Li et al. used a CNN to create an automatic NPC segmentation model based on enhanced MRI, and the trained model obtained a DSC of 0.89 []. Lin et al. built a 3D CNN architecture based on VoxResNet to automatically draw primary gross tumor volume contours. In this study, 1021 NPC cases were included and the trained model achieved a DSC of 0.79 []. Researchers developed a 3D CNN with long-range skip connections and multi-scale feature pyramids for NPC segmentation; the trained model achieved a DSC of 0.737 in testing []. Ye et al. successfully developed a fully automatic NPC segmentation method using a dense-connectivity-embedded U-net and dual-sequence MRI images, with an average DSC of 0.87 in seven external subjects with NPC []. Luo et al. proposed an augmentation-invariant strategy and combined it with a DL model; the final experimental results showed that this strategy outperformed the widely used nnU-net and can perform highly accurate gross tumor volume segmentation of NPC on MRI [].
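The following simplified sketch shows the characteristic U-net pattern (encoder, bottleneck, decoder and a skip connection) that the variants above build upon. The channel counts and depth here are illustrative assumptions only; the cited studies use deeper and more elaborate architectures.

```python
# Simplified two-level U-net sketch (PyTorch) with one skip connection.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch: int = 1, num_classes: int = 2):
        super().__init__()
        self.enc = conv_block(in_ch, 32)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)          # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)                         # skip-connection features
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([e, self.up(b)], dim=1))
        return self.head(d)                     # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])
```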
NPC is highly malignant and invasive, so it is difficult to distinguish the boundary between tumor and normal tissue against a complex MRI background. To address this problem, researchers developed a coarse-to-fine deep neural network: a segmentation module first predicts a coarse mask, and a boundary-rendering module then uses semantic information from different feature-mapping layers to refine the boundary of that mask. The dataset comprised 2000 MRI sections from 596 patients, and the model achieved a DSC of 0.703 [].
CNNs show promise for cancer segmentation on contrast-enhanced MRI, but some patients are unsuitable for contrast media. To address this issue, Wong et al. used U-net to delineate primary NPC on non-contrast-enhanced MRI and compared it with contrast-enhanced MRI. U-net performed similarly on fat-suppressed (FS) T2-weighted images (DSC = 0.71) and on contrast-enhanced T1-weighted images, suggesting that CNNs can depict NPC on FS T2-weighted images when contrast injection is to be avoided [].
Automated and precise segmentation of OARs can lead to more precise radiotherapy planning and reduce the risk of radiation-induced side effects. Researchers created a DL-based OAR detection and segmentation network, and the DSCs of OAR segmentation on CT images ranged from 0.689 to 0.934 []. Zhong et al. proposed a cascade network structure combining DL and a boosting algorithm for segmentation of OARs, including the parotid gland, thyroid gland and optic nerve, with corresponding DSCs of 0.92, 0.92 and 0.89, respectively []. Peng et al. designed OrganNet, an improved fully convolutional neural network for automatic segmentation of OARs, with an average DSC of 83.75% []. Zhao et al. designed an AU-net model based on 3D U-net to automatically segment the OARs of NPC and obtained a mean DSC value of 0.86 ± 0.02 [].
The determination of radiotherapy dose also plays an important role in radiotherapy planning. Researchers developed a gated recurrent unit (GRU)-based RNN model that uses dosimetric information to predict treatment plans for NPC, and proposed an improved method to further increase the precision of dose-volume histogram (DVH) prediction and its feasibility for small patient datasets []. The regenerated experimental plans (EPs) guided by the GRU-based RNN prediction model achieved good agreement with the clinical plans (CPs); the EPs spared dose to many OARs while still meeting acceptable criteria for the planning target volume (PTV) [,]. Yue et al. developed a DL method for dose prediction in NPC radiotherapy based on distance information and mask information; the predicted dose error and DVH error of the method were 7.51% and 11.6% lower, respectively, than those of the mask-based method []. Sun et al. developed a U-net-based DL network to predict patients’ dose distributions from their anatomical structure information; in 117 NPC cases, the voxel-based strategy achieved better organ sparing with slightly lower planning target volume coverage []. Jiao et al. developed a generalized regression neural network using geometric and dosimetric information to predict OAR DVHs. The results showed that the R2 value increased by ~6.7% and the mean absolute error decreased by ~46.7% after adding the dosimetric information to the DVH prediction []. Similarly, Chen et al. designed a CNN-based DL network to directly predict the DVHs of OARs; the differences between predicted and clinical D2% and D50 were within 2.32 Gy and 0.69 Gy, respectively [].
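To make the DVH concrete, the sketch below computes a cumulative DVH for one structure from a 3D dose grid and a binary structure mask. It is a simplified illustration on random placeholder data, not one of the prediction models described above; real treatment-planning systems perform this with full geometric detail.

```python
# Minimal cumulative dose-volume histogram (DVH) sketch from a dose grid and a mask.
import numpy as np

def cumulative_dvh(dose: np.ndarray, mask: np.ndarray, bin_width: float = 0.5):
    """Return dose bins (Gy) and the fraction of the structure receiving >= each dose."""
    structure_dose = dose[mask.astype(bool)]
    bins = np.arange(0.0, structure_dose.max() + bin_width, bin_width)
    volume_fraction = np.array([(structure_dose >= b).mean() for b in bins])
    return bins, volume_fraction

# Toy example: random dose grid and a cubic "organ-at-risk" mask (placeholders).
rng = np.random.default_rng(0)
dose = rng.uniform(0, 70, size=(50, 50, 50))        # dose in Gy
mask = np.zeros_like(dose, dtype=bool)
mask[10:20, 10:20, 10:20] = True

bins, vol = cumulative_dvh(dose, mask)
d50 = bins[np.searchsorted(-vol, -0.5)]             # dose received by >= 50% of the volume
print(f"D50 ~ {d50:.1f} Gy")
```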
Some patients with NPC develop complications after radiotherapy, which can affect quality of life and lifespan. However, early diagnosis of these complications is a challenge. AI can be applied to the early prediction of possible complications after NPC radiotherapy. Previous research used a random forest to construct a radiomics model for the early detection of radiation-induced temporal lobe injury (RTLI); the model can dynamically predict RTLI in advance, allowing early detection and preventive measures to limit its progression []. Similarly, Bin et al. extracted radiomic features from MRI and built an ML model; a nomogram integrating these features with clinical factors was used to predict RTLI within 5 years after radiotherapy in patients with T4/N0-3/M0 NPC, with a C-index of 0.82 in the validation cohort []. Ren et al. developed a prediction model based on an ML algorithm with dosiomic features. The model outperformed conventional dose-volume factors in predicting radiation-induced hypothyroidism in NPC patients receiving radiotherapy, enabling early identification and preventive measures; the dosiomics-based model achieved an optimal AUC of 0.7, compared with 0.61 for the dose-volume factor-based model []. To predict radiation-induced xerostomia, Chao et al. developed a clustering model that accounts for inhomogeneous dose distributions within the parotid gland. The team combined clustering models with ML techniques to provide a promising tool for predicting xerostomia in head-and-neck-cancer patients [].
4.2.3. AI Application in the Personalized and Precise Treatment of NPC
Personalized and precise cancer treatment has become a major topic in NPC. Patients with locally advanced NPC can choose concurrent chemoradiotherapy (CCRT) or IC plus CCRT as treatment options; however, the choice between them remains ambiguous. A DL-based NPC treatment decision model developed by researchers can predict the prognosis of patients with T3N1M0 NPC under different therapy regimens and recommend the optimal therapy accordingly. It is expected to be a potential tool to promote the individualized treatment of NPC []. The ability to discriminate between the different risks of NPC relapse and to tailor individual treatment has become increasingly important. An AI model designed by researchers can divide relapse patients into different risk groups, which has great potential for guiding personalized treatment []. Targeted therapy is also important in treating NPC patients. Researchers developed an SVM-based algorithm to predict the prognosis of locally advanced NPC. The algorithm integrated the expression levels of multiple tissue molecular biomarkers representing tumorigenesis-related signaling pathways together with serological biomarkers associated with EBV, and may guide future therapies targeting the related signaling pathways []. Moreover, the application of AI to clinical management should not be overlooked. Previous research developed an automatic ML scoring system based on MRI data that surpassed the American Joint Committee on Cancer (AJCC) [] TNM system in predicting the prognosis of NPC; the new scoring system can help improve counseling and personalized management of NPC patients and help them achieve better outcomes [].
With the arrival of the big data era, NPC therapy will become more personalized and precise (Table 3). The development of AI can not only effectively relieve clinicians’ workload, but also provide more accurate and humane medical services to patients.
Table 3.
Summary of AI applications for NPC treatment.
4.3. AI and NPC Prognosis Prediction
Although great progress has been made in NPC treatment, the long-term prognosis of NPC patients is still unsatisfactory. The traditional TNM/AJCC staging system fails to provide the expected prognostic effect and to predict patient progression. In contrast, AI can accurately predict cancer survival time and progression through processing data and analyzing important features.
MRI images and clinical data are frequently used by researchers to build predictive models for NPC prognosis. Zhong et al. established a radiomic nomogram to predict disease-free survival; in the test cohort, its C-index was 0.788 []. Researchers used SVMs to construct radiomic ML models to predict disease progression, and the models performed well [,]. Li et al. combined radiomics and ML to predict NPC recurrence after radiotherapy, compared several typical algorithms and found that an ANN achieved the best prediction accuracy of 0.812 []. Qiang et al. developed a prognosis model based on 3D DenseNet to predict disease-free survival of patients with non-metastatic NPC. A total of 1636 NPC patients were enrolled in the study. The model divided patients into low- and high-risk groups according to the cut-off value of the risk score, and the results showed that it could correctly differentiate the two groups of patients (hazard ratio = 0.62) []. Similarly, Du et al. developed a DCNN model to assess the risk of non-metastatic NPC patients; in the validation set for 3-year disease progression, the AUC of the model was 0.828 []. In addition, several researchers have constructed similar DL models for prognostic prediction and risk stratification of NPC, all of which performed well [,,]. For NPC patients, survival prediction is of utmost importance. Jing et al. developed an end-to-end multi-modality deep survival network (MDSN) to precisely predict the risk of tumor progression in NPC patients; compared with four popular traditional survival methods, the MDSN performed best, with a C-index of 0.651 []. Chen et al. used ML to develop a survival model based on tumor burden characteristics and clinical factors in 1643 patients; the C-indexes were 0.766 and 0.760 in the internal and external validation sets, respectively [].
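Because many of these prognostic models are compared by their C-index, a minimal implementation of the metric is sketched below on toy survival data: the fraction of comparable patient pairs in which the model assigns a higher risk to the patient who experiences the event earlier.

```python
# Minimal concordance-index (C-index) sketch on toy survival data.
import numpy as np

def concordance_index(times: np.ndarray, events: np.ndarray, risks: np.ndarray) -> float:
    """times: follow-up times; events: 1 if event observed, 0 if censored; risks: predicted risk scores."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # pairs are only comparable when the earlier time is an observed event
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times = np.array([5.0, 8.0, 12.0, 3.0])
events = np.array([1, 0, 1, 1])
risks = np.array([0.9, 0.3, 0.2, 0.8])
print(f"C-index = {concordance_index(times, events, risks):.2f}")  # 0.80 for this toy data
```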
PET-CT has particular advantages in sensitivity, specificity and accuracy for detecting NPC recurrence and distant metastases. Meng et al. proposed a model based on pretreatment PET-CT images that can both predict survival and segment advanced NPC. They adopted a hard-parameter-sharing segmentation backbone to extract regional attributes of the primary tumors and lessen the influence of irrelevant background data, and a cascaded survival network to capture prognostic information from the primary tumors and further exploit the tumor features acquired from the segmentation backbone []. Gu et al. developed an end-to-end multi-modal DL-based radiomics model to extract deep features from pre-processed PET-CT images and predict 5-year progression-free survival. The team also incorporated TNM staging into the model to further improve prognostic power. A total of 257 patients with advanced NPC were enrolled and divided into internal and external cohorts. The AUCs of the internal and external cohorts were 0.842 and 0.823, respectively [].
Pathological images can also be used to construct AI prognostic models. Researchers integrated MRI-based radiomic features, DCNN models based on pathology images and clinical features of NPC patients to construct a multi-scale nomogram for predicting failure-free survival; the C-indexes of the internal and external test cohorts were 0.828 and 0.834, respectively []. In a previous study, the software QuPath (version 0.1.3, Queen’s University) was used to extract pathological microscopic features of NPC patients, and the neural network DeepSurv was used to analyze these features (DSPMF). DSPMF proved to be a reliable prognostic tool and may guide treatment decisions for NPC patients [].
Other researchers have used RNA data to build AI prediction models. In NPC, some miRNAs have prognostic power. Chen et al. combined miRNA expression data from various profiling platforms and constructed a predictive model using a six-miRNA signature. According to the functional analysis, the six miRNAs are principally involved in oncogenic signaling pathways, virus infection pathways and B-cell expression []. As a metastatic and highly invasive cancer, NPC exhibits heterogeneous molecular profiles and clinical outcomes. Zhao et al. applied ML techniques to RNA-Seq data from NPC tumor biopsies and identified 13 genes significantly different between the recurrence/metastasis and non-recurrence/metastasis groups. From these genes, a 4-mRNA signature was derived that showed good predictive and prognostic value for NPC; moreover, the signature was related to the immune response as well as cell proliferation []. Zhang et al. used a deep network to predict the prognosis of NPC based on MRI and gene expression, achieving an AUC of 0.88 [].
AI makes it possible to predict outcomes based on diverse factors prior to treatment, which is beneficial for the whole diagnosis and treatment process (Table 4). In the near future, AI techniques will help doctors make rational and personalized medical decisions, including accurate diagnoses, personalized treatment and prognosis assessment for NPC patients.
Table 4.
Summary of AI models for NPC prognosis.
4.4. Current State-of-the-Art AI Algorithms for NPC Diagnosis and Treatment
AI models require large datasets for training and validation, and we have listed some sample images from various datasets in Figure 2.
Figure 2.
Sample images from various datasets: (a) endoscopic image (Mohammed et al., 2020 []); (b) whole slide image (Chuang et al., 2020 []); (c) CT image (Daoud et al., 2019 []); (d) MRI image (Guo et al., 2020 []); (e) PET image (Zhao et al., 2019 []); (f) CT-MR image (Wang et al., 2019 []); (g) CBCT-CT image (Li et al., 2019 []); (h) DVH image (Zhuang et al., 2021 []).
AI can help doctors analyze pathology reports, physical examination reports and other records. It can analyze and mine patients’ medical data through big data and data-mining technologies to automatically identify clinical variables and indicators. A large part of medical data comes from medical images, such as CT, MRI and PET-CT images, and AI can help diagnose and treat diseases by learning from large numbers of medical images. CNNs have excellent performance in image recognition and image segmentation. In studies on diagnosis [], treatment response prediction [] and prognosis prediction [] of NPC based on various images, researchers have obtained the best performance with improved models based on classical CNNs, usually using AUC and DSC as performance metrics. The fully convolutional network (FCN)-based U-net model also performs very well for image segmentation, showing excellent results in target segmentation [] and dose prediction [].
The distribution of studies based on the best performing algorithms is shown in Figure 3. Many studies have improved on the classical model to create new algorithmic models. Among the AI algorithms, DCNN and CNN perform very well. However, the research results are based on each study independently and are not directly comparable due to the use of different datasets and/or evaluation metrics.
Figure 3.
Artificial intelligence algorithms with the best performance in the papers included in our review.
4.5. Common Training and Testing Methodologies
The performance of AI algorithms is influenced by many factors. We evaluated dataset size, class balance, validation strategy and data processing strategy, all of which have a direct impact on training and testing performance. A summary is given in Table 5.
Table 5.
Summary of Training and Testing Methodologies.
Most of the research papers used datasets with fewer than 1000 cases. In addition, only one study addressed and discussed class balance. AI requires special strategies (e.g., data augmentation techniques) to manage limited and unbalanced data and reduce their impact on training and testing. Most studies use the hold-out validation set method or cross-validation for model validation. The validation set method is the simplest: it divides the entire dataset into a training set and a test set, uses only a portion of the data for model training and is suitable when the amount of data is relatively large. Cross-validation reuses the data by repeatedly splitting the sample into multiple different training and testing sets, and is common for small datasets; the procedure is repeated until each part has been used as test data at least once. However, cross-validation does not guarantee the quality of ML models, as potentially biased or unbalanced data leads to biased evaluations. Some papers failed to describe any validation strategy.
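The sketch below contrasts the two strategies on placeholder data using scikit-learn: a single hold-out split versus 5-fold cross-validation. The classifier and data are arbitrary stand-ins for illustration.

```python
# Hold-out validation vs. k-fold cross-validation on placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = rng.integers(0, 2, size=300)

# Hold-out ("validation set") method: one fixed train/test split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Hold-out accuracy:", clf.score(X_test, y_test))

# 5-fold cross-validation: every sample is used for testing exactly once.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print("Cross-validation accuracy:", scores.mean(), "+/-", scores.std())
```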
Health data contain many missing values. Many AI algorithms cannot handle missing values directly, so they must be dealt with during data pre-processing or the algorithms’ performance deteriorates. According to Table 5, excluding cases with incomplete data is the most common strategy. However, this strategy suffers from significant information loss and performs poorly when missing values make up a large proportion of the dataset. Some studies lack a data processing strategy and a detailed description of how missing values were managed. The reviewed AI solutions are trained and tested on private or restricted datasets, which either hold sensitive patient information or belong to medical institutions that cannot or do not wish to make their data publicly available. Dataset availability improves the reproducibility and transparency of research [,]. However, as all of the reviewed papers used private data, the availability of datasets for AI applications in NPC remains a concern.
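As a simple illustration of the two missing-data strategies, the sketch below compares dropping incomplete cases with median imputation on a toy table. The column names (e.g., "ebv_dna") are hypothetical clinical variables, not taken from any cited dataset.

```python
# Two missing-value strategies: drop incomplete cases vs. simple imputation.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

data = pd.DataFrame({
    "age": [54, 61, np.nan, 47],
    "ebv_dna": [1200, np.nan, 800, 150],   # hypothetical clinical variable
    "t_stage": [3, 2, 4, 1],
})

# Strategy 1: exclude cases with any missing value (loses whole rows).
complete_cases = data.dropna()

# Strategy 2: impute missing values, e.g., with the column median (keeps all rows).
imputed = pd.DataFrame(
    SimpleImputer(strategy="median").fit_transform(data),
    columns=data.columns,
)
print(len(complete_cases), "complete cases;", len(imputed), "cases after imputation")
```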
5. Current Challenges
Although there is rapid development of AI techniques in the clinical research of NPC, the application of AI remains immature []. Some challenges need to be addressed in order to translate these studies into clinically valuable applications.
As the survival period of NPC patients is prolonged, more and more patients are suffering from post-radiotherapy radiation brain injury, treatment failure and post-treatment recurrence and metastasis. These patients have complicated conditions and a poor prognosis, which makes treatment difficult. To tackle the above-mentioned problems, we need to find economical, efficient and clinically optimal treatment plans for NPC. Because AI has the advantage of objectively analyzing and processing large amounts of data, it is expected to contribute to precise treatment strategies, including early screening, precise staging, precise target imaging, optimal treatment of recurrent and metastatic NPC and the selection of combination treatment modalities. Prediction models constructed by AI algorithms require large volumes of high-quality clinical data to improve their accuracy, sensitivity and specificity, so standardized data annotation and multicenter data sources are needed. Researchers have developed improved algorithms to handle small samples, but at the cost of accuracy []. At present, AI algorithms for NPC are mostly trained on data from a single medical institution [], which may lead to overfitting and limit the models’ applicability to a wider range of scenarios. Therefore, external validation is necessary before widespread clinical adoption of AI applications.
In addition, AI predictions are described as a “black box” because the feature selection and weighting processes of AI algorithms are not transparent. In other words, interpretability is an important consideration when applying AI to NPC. At present, there are two main solutions to this problem: interpretable models and model-independent interpretation methods []. Both approaches increase computational complexity. Therefore, much work remains to be done to improve model interpretability.
Moreover, much of the research on the utilization of AI in NPC has been designed retrospectively. However, the encouraging results obtained in these studies need to be confirmed by further prospective and multicenter studies owing to possible selection bias in the retrospective study design.
Furthermore, privacy protection and data security are major challenges for AI. Building AI applications for NPC requires a large amount of clinical data from patients, which in turn requires privacy protection and data security. Currently, there are no suitable technical solutions that alleviate this problem while meeting the growing demands of data-driven science []. Establishing a secure and reliable multicenter data-sharing platform for NPC is one possible way forward.
A common shortcoming of current AI tools is their inability to handle multiple tasks: no integrated AI system has been developed to detect multiple abnormalities throughout the human body. Diagnosis and treatment therefore require multiple tools, whose synergistic combination is complicated. Leveraging AI solutions brings many benefits, but their deployment is difficult. Healthcare organizations need to bridge the skills gap by educating staff about AI systems and their professional capabilities and by building patient trust in AI.
6. Conclusions and Prospect
Literature reviews are broadly categorized as systematic and narrative. Systematic reviews are more rigorous in their methodology and less subject to bias than narrative reviews. However, the aim of this paper is to outline the dynamics of research advances in AI in the diagnosis and treatment of NPC and to present the challenges and future of the field. For this purpose, we have chosen to present a narrative review. To ensure quality, we clarified the inclusion and exclusion criteria, integrated and analyzed the studies, paid attention to the shortcomings of the reviewed literature and maintained an objective evaluation attitude, so as to give the reader a quick, objective and comprehensive overview of the state of research in this field.
AI has shown great potential for applications in various clinical aspects of NPC, with the explosive growth of clinical data and research progress in ML and DL. The applications of AI to NPC are as follows: (1) understanding cancer at the molecular level through DL; (2) supporting the diagnosis and prognosis of NPC based on images and pathological specimens; (3) promoting personalized, accurate diagnosis and treatment of NPC. As AI techniques continue to advance, AI will have a great impact on the clinical care of NPC. We believe that AI will be more closely combined with all aspects of medicine in the near future. We can rely on AI techniques to develop techniques less invasive than nasopharyngoscopy, with diagnostic accuracy close to that of pathological biopsy. We can build AI models based on clinical data to provide healthy people with early warning of NPC. AI will be closely integrated with radiotherapy to develop more personalized radiotherapy plans and conduct more effective whole-process efficacy evaluations. In the future, a large, cross-population database could be established to support AI-based prognosis prediction [], help researchers identify the most important prognostic factors and underpin future prospective prognostic intervention studies.
Funding
This research received no external funding.
Data Availability Statement
Data sharing is not applicable to this article.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Chen, Y.P.; Chan, A.T.C.; Le, Q.T.; Blanchard, P.; Sun, Y.; Ma, J. Nasopharyngeal carcinoma. Lancet 2019, 394, 64–80. [Google Scholar] [CrossRef] [PubMed]
- Bossi, P.; Chan, A.T.; Licitra, L.; Trama, A.; Orlandi, E.; Hui, E.P.; Halámková, J.; Mattheis, S.; Baujat, B.; Hardillo, J.; et al. Nasopharyngeal carcinoma: ESMO-EURACAN Clinical Practice Guidelines for diagnosis, treatment and follow-up. Ann. Oncol. 2021, 32, 452–465. [Google Scholar] [CrossRef] [PubMed]
- Tang, L.L.; Chen, Y.P.; Chen, C.B.; Chen, M.Y.; Chen, N.Y.; Chen, X.Z.; Du, X.J.; Fang, W.F.; Feng, M.; Gao, J.; et al. The Chinese Society of Clinical Oncology (CSCO) clinical guidelines for the diagnosis and treatment of nasopharyngeal carcinoma. Cancer Commun. 2021, 41, 1195–1227. [Google Scholar] [CrossRef]
- Liang, H.; Xiang, Y.Q.; Lv, X.; Xie, C.Q.; Cao, S.M.; Wang, L.; Qian, C.N.; Yang, J.; Ye, Y.F.; Gan, F.; et al. Survival impact of waiting time for radical radiotherapy in nasopharyngeal carcinoma: A large institution-based cohort study from an endemic area. Eur. J. Cancer 2017, 73, 48–60. [Google Scholar] [CrossRef] [PubMed]
- Lee, N.; Harris, J.; Garden, A.S.; Straube, W.; Glisson, B.; Xia, P.; Bosch, W.; Morrison, W.H.; Quivey, J.; Thorstad, W.; et al. Intensity-modulated radiation therapy with or without chemotherapy for nasopharyngeal carcinoma: Radiation therapy oncology group phase II trial 0225. J. Clin. Oncol. 2009, 27, 3684–3690. [Google Scholar] [CrossRef]
- Sun, X.; Su, S.; Chen, C.; Han, F.; Zhao, C.; Xiao, W.; Deng, X.; Huang, S.; Lin, C.; Lu, T. Long-term outcomes of intensity-modulated radiotherapy for 868 patients with nasopharyngeal carcinoma: An analysis of survival and treatment toxicities. Radiother. Oncol. 2014, 110, 398–403. [Google Scholar] [CrossRef]
- Yi, J.L.; Gao, L.; Huang, X.D.; Li, S.Y.; Luo, J.W.; Cai, W.M.; Xiao, J.P.; Xu, G.Z. Nasopharyngeal carcinoma treated by radical radiotherapy alone: Ten-year experience of a single institution. Int. J. Radiat. Oncol. Biol. Phys. 2006, 65, 161–168. [Google Scholar] [CrossRef]
- Shalev-Shwartz, S.; Ben-David, S. Understanding Machine Learning: From Theory to Algorithms; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
- Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef]
- Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metabolism 2017, 69s, S36–S40. [Google Scholar] [CrossRef]
- Chen, Z.H.; Lin, L.; Wu, C.F.; Li, C.F.; Xu, R.H.; Sun, Y. Artificial intelligence for assisting cancer diagnosis and treatment in the era of precision medicine. Cancer Commun. 2021, 41, 1100–1115. [Google Scholar] [CrossRef]
- Yang, C.; Jiang, Z.; Cheng, T.; Zhou, R.; Wang, G.; Jing, D.; Bo, L.; Huang, P.; Wang, J.; Zhang, D.; et al. Radiomics for Predicting Response of Neoadjuvant Chemotherapy in Nasopharyngeal Carcinoma: A Systematic Review and Meta-Analysis. Front. Oncol. 2022, 12, 893103. [Google Scholar] [CrossRef]
- Li, S.; Deng, Y.Q.; Zhu, Z.L.; Hua, H.L.; Tao, Z.Z. A Comprehensive Review on Radiomics and Deep Learning for Nasopharyngeal Carcinoma Imaging. Diagnostics 2021, 11, 1523. [Google Scholar] [CrossRef] [PubMed]
- Ng, W.T.; But, B.; Choi, H.C.W.; de Bree, R.; Lee, A.W.M.; Lee, V.H.F.; López, F.; Mäkitie, A.A.; Rodrigo, J.P.; Saba, N.F.; et al. Application of Artificial Intelligence for Nasopharyngeal Carcinoma Management—A Systematic Review. Cancer Manag. Res. 2022, 14, 339–366. [Google Scholar] [CrossRef] [PubMed]
- Brody, H. Medical imaging. Nature 2013, 502, S81. [Google Scholar] [CrossRef] [PubMed]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Ambinder, E.P. A history of the shift toward full computerization of medicine. J. Oncol. Pract. 2005, 1, 54–56. [Google Scholar] [CrossRef]
- Shen, D.; Wu, G.; Suk, H.I. Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef]
- Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef]
- Chua, M.L.K.; Wee, J.T.S.; Hui, E.P.; Chan, A.T.C. Nasopharyngeal carcinoma. Lancet 2016, 387, 1012–1024. [Google Scholar] [CrossRef]
- Vokes, E.E.; Liebowitz, D.N.; Weichselbaum, R.R. Nasopharyngeal carcinoma. Lancet 1997, 350, 1087–1091. [Google Scholar] [CrossRef]
- Wei, W.I.; Sham, J.S. Nasopharyngeal carcinoma. Lancet 2005, 365, 2041–2054. [Google Scholar] [CrossRef] [PubMed]
- Wong, L.M.; King, A.D.; Ai, Q.Y.H.; Lam, W.K.J.; Poon, D.M.C.; Ma, B.B.Y.; Chan, K.C.A.; Mo, F.K.F. Convolutional neural network for discriminating nasopharyngeal carcinoma and benign hyperplasia on MRI. Eur. Radiol. 2021, 31, 3856–3863. [Google Scholar] [CrossRef]
- Ke, L.; Deng, Y.; Xia, W.; Qiang, M.; Chen, X.; Liu, K.; Jing, B.; He, C.; Xie, C.; Guo, X.; et al. Development of a self-constrained 3D DenseNet model in automatic detection and segmentation of nasopharyngeal carcinoma using magnetic resonance images. Oral Oncol. 2020, 110, 104862. [Google Scholar] [CrossRef] [PubMed]
- Mohammed, M.A.; Abd Ghani, M.K.; Arunkumar, N.; Raed, H.; Mohamad, A.; Mohd, B. A real time computer aided object detection of Nasopharyngeal carcinoma using genetic algorithm and artificial neural network based on Haar feature fear. Future Gener. Comput. Syst. 2018, 89, 539–547. [Google Scholar] [CrossRef]
- Mohammed, M.A.; Abd Ghani, M.K.; Arunkumar, N.; Hamed, R.I.; Mostafa, S.A.; Abdullah, M.K.; Burhanuddin, M.A. Decision support system for Nasopharyngeal carcinoma discrimination from endoscopic images using artificial neural network. J. Supercomput. 2020, 76, 1086–1104. [Google Scholar] [CrossRef]
- Abd Ghani, M.K.; Mohammed, M.A.; Arunkumar, N.; Mostafa, S.; Ibrahim, D.A.; Abdullah, M.K.; Jaber, M.M.; Abdulhay, E.; Ramirez-Gonzalez, G.; Burhanuddin, M.A. Decision-level fusion scheme for Nasopharyngeal carcinoma identification using machine learning techniques. Neural Comput. Appl. 2020, 32, 625–638. [Google Scholar] [CrossRef]
- Li, C.; Jing, B.; Ke, L.; Li, B.; Xia, W.; He, C.; Qian, C.; Zhao, C.; Mai, H.; Chen, M.; et al. Development and validation of an endoscopic images-based deep learning model for detection with nasopharyngeal malignancies. Cancer Commun. 2018, 38, 59. [Google Scholar] [CrossRef]
- Xu, J.; Wang, J.; Bian, X.; Zhu, J.Q.; Tie, C.W.; Liu, X.; Zhou, Z.; Ni, X.G.; Qian, D. Deep Learning for nasopharyngeal Carcinoma Identification Using Both White Light and Narrow-Band Imaging Endoscopy. Laryngoscope 2022, 132, 999–1007. [Google Scholar] [CrossRef]
- Shu, C.; Yan, H.; Zheng, W.; Lin, K.; James, A.; Selvarajan, S.; Lim, C.M.; Huang, Z. Deep Learning-Guided Fiberoptic Raman Spectroscopy Enables Real-Time In Vivo Diagnosis and Assessment of Nasopharyngeal Carcinoma and Post-treatment Efficacy during Endoscopy. Anal. Chem. 2021, 93, 10898–10906. [Google Scholar] [CrossRef]
- Chuang, W.Y.; Chang, S.H.; Yu, W.H.; Yang, C.K.; Yeh, C.J.; Ueng, S.H.; Liu, Y.J.; Chen, T.D.; Chen, K.H.; Hsieh, Y.Y.; et al. Successful Identification of Nasopharyngeal Carcinoma in Nasopharyngeal Biopsies Using Deep Learning. Cancers 2020, 12, 507. [Google Scholar] [CrossRef]
- Diao, S.; Hou, J.; Yu, H.; Zhao, X.; Sun, Y.; Lambo, R.L.; Xie, Y.; Liu, L.; Qin, W.; Luo, W. Computer-Aided Pathologic Diagnosis of Nasopharyngeal Carcinoma Based on Deep Learning. Am. J. Pathol. 2020, 190, 1691–1700. [Google Scholar] [CrossRef] [PubMed]
- Zhao, L.; Gong, J.; Xi, Y.; Xu, M.; Li, C.; Kang, X.; Yin, Y.; Qin, W.; Yin, H.; Shi, M. MRI-based radiomics nomogram may predict the response to induction chemotherapy and survival in locally advanced nasopharyngeal carcinoma. Eur. Radiol. 2020, 30, 537–546. [Google Scholar] [CrossRef] [PubMed]
- Yang, Y.; Wang, M.; Qiu, K.; Wang, Y.; Ma, X. Computed tomography-based deep-learning prediction of induction chemotherapy treatment response in locally advanced nasopharyngeal carcinoma. Strahlenther. Onkol. 2022, 198, 183–193. [Google Scholar] [CrossRef] [PubMed]
- Peng, H.; Dong, D.; Fang, M.J.; Li, L.; Tang, L.L.; Chen, L.; Li, W.F.; Mao, Y.P.; Fan, W.; Liu, L.Z.; et al. Prognostic Value of Deep Learning PET/CT-Based Radiomics: Potential Role for Future Individual Induction Chemotherapy in Advanced Nasopharyngeal Carcinoma. Clin. Cancer Res. 2019, 25, 4271–4279. [Google Scholar] [CrossRef]
- Chen, X.; Yang, B.; Li, J.; Zhu, J.; Ma, X.; Chen, D.; Hu, Z.; Men, K.; Dai, J. A deep-learning method for generating synthetic kV-CT and improving tumor segmentation for helical tomotherapy of nasopharyngeal carcinoma. Phys. Med. Biol. 2021, 66, 224001. [Google Scholar] [CrossRef]
- Li, Y.; Zhu, J.; Liu, Z.; Teng, J.; Xie, Q.; Zhang, L.; Liu, X.; Shi, J.; Chen, L. A preliminary study of using a deep convolution neural network to generate synthesized CT images based on CBCT for adaptive radiotherapy of nasopharyngeal carcinoma. Phys. Med. Biol. 2019, 64, 145010. [Google Scholar] [CrossRef]
- Wang, Y.; Liu, C.; Zhang, X.; Deng, W. Synthetic CT Generation Based on T2 Weighted MRI of Nasopharyngeal Carcinoma (NPC) Using a Deep Convolutional Neural Network (DCNN). Front. Oncol. 2019, 9, 1333. [Google Scholar] [CrossRef]
- Chen, S.; Peng, Y.; Qin, A.; Liu, Y.; Zhao, C.; Deng, X.; Deraniyagala, R.; Stevens, C.; Ding, X. MR-based synthetic CT image for intensity-modulated proton treatment planning of nasopharyngeal carcinoma patients. Acta Oncol. 2022, 61, 1417–1424. [Google Scholar] [CrossRef]
- Fitton, I.; Cornelissen, S.A.; Duppen, J.C.; Steenbakkers, R.J.; Peeters, S.T.; Hoebers, F.J.; Kaanders, J.H.; Nowak, P.J.; Rasch, C.R.; van Herk, M. Semi-automatic delineation using weighted CT-MRI registered images for radiotherapy of nasopharyngeal cancer. Med. Phys. 2011, 38, 4662–4666. [Google Scholar] [CrossRef]
- Ma, Z.; Zhou, S.; Wu, X.; Zhang, H.; Yan, W.; Sun, S.; Zhou, J. Nasopharyngeal carcinoma segmentation based on enhanced convolutional neural networks using multi-modal metric learning. Phys. Med. Biol. 2019, 64, 025005. [Google Scholar] [CrossRef]
- Chen, H.; Qi, Y.; Yin, Y.; Li, T.; Liu, X.; Li, X.; Gong, G.; Wang, L. MMFNet: A multi-modality MRI fusion network for segmentation of nasopharyngeal carcinoma. Neurocomputing 2020, 394, 27–40. [Google Scholar] [CrossRef]
- Zhao, L.; Lu, Z.; Jiang, J.; Zhou, Y.; Wu, Y.; Feng, Q. Automatic Nasopharyngeal Carcinoma Segmentation Using Fully Convolutional Networks with Auxiliary Paths on Dual-Modality PET-CT Images. J. Digit. Imaging. 2019, 32, 462–470. [Google Scholar] [CrossRef] [PubMed]
- Chanapai, W.; Ritthipravat, P. Adaptive thresholding based on SOM technique for semi-automatic NPC image segmentation. In Proceedings of the 2009 International Conference on Machine Learning and Applications, IEEE, Miami, FL, USA, 13–15 December 2009; pp. 504–508. [Google Scholar]
- Tatanun, C.; Ritthipravat, P.; Bhongmakapat, T.; Tuntiyatorn, L. Automatic segmentation of nasopharyngeal carcinoma from CT images: Region growing based technique. In Proceedings of the 2010 2nd International Conference on Signal Processing Systems, IEEE, Dalian, China, 5–7 July 2010; Volume 2, pp. 537–541. [Google Scholar]
- Chanapai, W.; Bhongmakapat, T.; Tuntiyatorn, L.; Ritthipravat, P. Nasopharyngeal carcinoma segmentation using a region growing technique. Int. J. Comput. Assist. Radiol. Surg. 2012, 7, 413–422. [Google Scholar] [CrossRef] [PubMed]
- Bai, X.; Hu, Y.; Gong, G.; Yin, Y.; Xia, Y. A deep learning approach to segmentation of nasopharyngeal carcinoma using computed tomography. Biomed. Signal Process. Control 2021, 64, 102246. [Google Scholar] [CrossRef]
- Daoud, B.; Morooka, K.; Kurazume, R.; Leila, F.; Mnejja, W.; Daoud, J. 3D segmentation of nasopharyngeal carcinoma from CT images using cascade deep learning. Comput. Med. Imaging Graph. 2019, 77, 101644. [Google Scholar] [CrossRef] [PubMed]
- Li, S.; Xiao, J.; He, L.; Peng, X.; Yuan, X. The Tumor Target Segmentation of Nasopharyngeal Cancer in CT Images Based on Deep Learning Methods. Technol. Cancer Res. Treat. 2019, 18, 1533033819884561. [Google Scholar] [CrossRef]
- Xue, X.; Qin, N.; Hao, X.; Shi, J.; Wu, A.; An, H.; Zhang, H.; Wu, A.; Yang, Y. Sequential and Iterative Auto-Segmentation of High-Risk Clinical Target Volume for Radiotherapy of Nasopharyngeal Carcinoma in Planning CT Images. Front. Oncol. 2020, 10, 1134. [Google Scholar] [CrossRef]
- Jin, Z.; Li, X.; Shen, L.; Lang, J.; Li, J.; Wu, J.; Xu, P.; Duan, J. Automatic Primary Gross Tumor Volume Segmentation for Nasopharyngeal Carcinoma using ResSE-UNet. In Proceedings of the 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), Rochester, MN, USA, 28–30 July 2020; pp. 585–590. [Google Scholar]
- Wang, X.; Yang, G.; Zhang, Y.; Zhu, L.; Xue, X.; Zhang, B.; Cai, C.; Jin, H.; Zheng, J.; Wu, J.; et al. Automated delineation of nasopharynx gross tumor volume for nasopharyngeal carcinoma by plain CT combining contrast-enhanced CT using deep learning. J. Radiat. Res. Appl. Sci. 2020, 13, 568–577. [Google Scholar] [CrossRef]
- Men, K.; Chen, X.; Zhang, Y.; Zhang, T.; Dai, J.; Yi, J.; Li, Y. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images. Front. Oncol. 2017, 7, 315. [Google Scholar] [CrossRef]
- Huang, W.; Chan, K.L.; Zhou, J. Region-based nasopharyngeal carcinoma lesion segmentation from MRI using clustering- and classification-based methods with learning. J. Digit. Imaging 2013, 26, 472–482. [Google Scholar] [CrossRef]
- Kai-Wei, H.; Zhe-Yi, Z.; Qian, G.; Juan, Z.; Liu, C.; Ran, Y. Nasopharyngeal carcinoma segmentation via HMRF-EM with maximum entropy. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2968–2972. [Google Scholar] [CrossRef]
- Li, Q.; Xu, Y.; Chen, Z.; Liu, D.; Feng, S.T.; Law, M.; Ye, Y.; Huang, B. Tumor Segmentation in Contrast-Enhanced Magnetic Resonance Imaging for Nasopharyngeal Carcinoma: Deep Learning with Convolutional Neural Network. Biomed. Res. Int. 2018, 2018, 9128527. [Google Scholar] [CrossRef] [PubMed]
- Lin, L.; Dou, Q.; Jin, Y.M.; Zhou, G.Q.; Tang, Y.Q.; Chen, W.L.; Su, B.A.; Liu, F.; Tao, C.J.; Jiang, N.; et al. Deep Learning for Automated Contouring of Primary Tumor Volumes by MRI for Nasopharyngeal Carcinoma. Radiology 2019, 291, 677–686. [Google Scholar] [CrossRef]
- Guo, F.; Shi, C.; Li, X.; Wu, X.; Zhou, J.; Lv, J. Image segmentation of nasopharyngeal carcinoma using 3D CNN with long-range skip connection and multi-scale feature pyramid. Soft Comput. 2020, 24, 12671–12680. [Google Scholar] [CrossRef]
- Ye, Y.; Cai, Z.; Huang, B.; He, Y.; Zeng, P.; Zou, G.; Deng, W.; Chen, H.; Huang, B. Fully-Automated Segmentation of Nasopharyngeal Carcinoma on Dual-Sequence MRI Using Convolutional Neural Networks. Front. Oncol. 2020, 10, 166. [Google Scholar] [CrossRef] [PubMed]
- Luo, X.; Liao, W.; He, Y.; Tang, F.; Wu, M.; Shen, Y.; Huang, H.; Song, T.; Li, K.; Zhang, S.; et al. Deep learning-based accurate delineation of primary gross tumor volume of nasopharyngeal carcinoma on heterogeneous magnetic resonance imaging: A large-scale and multi-center study. Radiother. Oncol. 2023, 180, 109480. [Google Scholar] [CrossRef] [PubMed]
- Li, Y.; Peng, H.; Dan, T.; Hu, Y.; Tao, G.; Cai, H. Coarse-to-fine Nasopharyngeal Carcinoma Segmentation in MRI via Multi-stage Rendering. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Republic of Korea, 16–19 December 2020; pp. 623–628. [Google Scholar]
- Wong, L.M.; Ai, Q.Y.H.; Mo, F.K.F.; Poon, D.M.C.; King, A.D. Convolutional neural network in nasopharyngeal carcinoma: How good is automatic delineation for primary tumor on a non-contrast-enhanced fat-suppressed T2-weighted MRI? Jpn. J. Radiol. 2021, 39, 571–579. [Google Scholar] [CrossRef]
- Liang, S.; Tang, F.; Huang, X.; Yang, K.; Zhong, T.; Hu, R.; Liu, S.; Yuan, X.; Zhang, Y. Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning. Eur. Radiol. 2019, 29, 1961–1967. [Google Scholar] [CrossRef]
- Zhong, T.; Huang, X.; Tang, F.; Liang, S.; Deng, X.; Zhang, Y. Boosting-based Cascaded Convolutional Neural Networks for the Segmentation of CT Organs-at-risk in Nasopharyngeal Carcinoma. Med. Phys. 2019, 46, 5602–5611. [Google Scholar] [CrossRef]
- Peng, Y.; Liu, Y.; Shen, G.; Chen, Z.; Chen, M.; Miao, J.; Zhao, C.; Deng, J.; Qi, Z.; Deng, X. Improved accuracy of auto-segmentation of organs at risk in radiotherapy planning for nasopharyngeal carcinoma based on fully convolutional neural network deep learning. Oral Oncol. 2023, 136, 106261. [Google Scholar] [CrossRef]
- Zhao, W.; Zhang, D.; Mao, X. Application of Artificial Intelligence in Radiotherapy of Nasopharyngeal Carcinoma with Magnetic Resonance Imaging. J. Healthc. Eng. 2022, 2022, 4132989. [Google Scholar] [CrossRef]
- Zhuang, Y.; Xie, Y.; Wang, L.; Huang, S.; Chen, L.X.; Wang, Y. DVH Prediction for VMAT in NPC with GRU-RNN: An Improved Method by Considering Biological Effects. Biomed. Res. Int. 2021, 2021, 2043830. [Google Scholar] [CrossRef] [PubMed]
- Cao, W.; Zhuang, Y.; Chen, L.; Liu, X. Application of dose-volume histogram prediction in biologically related models for nasopharyngeal carcinomas treatment planning. Radiat. Oncol. 2020, 15, 216. [Google Scholar] [CrossRef] [PubMed]
- Zhuang, Y.; Han, J.; Chen, L.; Liu, X. Dose-volume histogram prediction in volumetric modulated arc therapy for nasopharyngeal carcinomas based on uniform-intensity radiation with equal angle intervals. Phys. Med. Biol. 2019, 64, 23NT03. [Google Scholar] [CrossRef] [PubMed]
- Yue, M.; Xue, X.; Wang, Z.; Lambo, R.L.; Zhao, W.; Xie, Y.; Cai, J.; Qin, W. Dose prediction via distance-guided deep learning: Initial development for nasopharyngeal carcinoma radiotherapy. Radiother. Oncol. 2022, 170, 198–204. [Google Scholar] [CrossRef] [PubMed]
- Sun, Z.; Xia, X.; Fan, J.; Zhao, J.; Zhang, K.; Wang, J.; Hu, W. A hybrid optimization strategy for deliverable intensity-modulated radiotherapy plan generation using deep learning-based dose prediction. Med. Phys. 2022, 49, 1344–1356. [Google Scholar] [CrossRef]
- Jiao, S.X.; Chen, L.X.; Zhu, J.H.; Wang, M.L.; Liu, X.W. Prediction of dose-volume histograms in nasopharyngeal cancer IMRT using geometric and dosimetric information. Phys. Med. Biol. 2019, 64, 23NT04. [Google Scholar] [CrossRef]
- Chen, X.; Men, K.; Zhu, J.; Yang, B.; Li, M.; Liu, Z.; Yan, X.; Yi, J.; Dai, J. DVHnet: A deep learning-based prediction of patient-specific dose volume histograms for radiotherapy planning. Med. Phys. 2021, 48, 2705–2713. [Google Scholar] [CrossRef]
- Zhang, B.; Lian, Z.; Zhong, L.; Zhang, X.; Dong, Y.; Chen, Q.; Zhang, L.; Mo, X.; Huang, W.; Yang, W.; et al. Machine-learning based MRI radiomics models for early detection of radiation-induced brain injury in nasopharyngeal carcinoma. BMC Cancer 2020, 20, 502. [Google Scholar] [CrossRef]
- Bin, X.; Zhu, C.; Tang, Y.; Li, R.; Ding, Q.; Xia, W.; Tang, Y.; Tang, X.; Yao, D.; Tang, A. Nomogram Based on Clinical and Radiomics Data for Predicting Radiation-induced Temporal Lobe Injury in Patients with Non-metastatic Stage T4 Nasopharyngeal Carcinoma. Clin. Oncol. (R. Coll. Radiol.) 2022, 34, e482–e492. [Google Scholar] [CrossRef]
- Ren, W.; Liang, B.; Sun, C.; Wu, R.; Men, K.; Xu, Y.; Han, F.; Yi, J.; Qu, Y.; Dai, J. Dosiomics-based prediction of radiation-induced hypothyroidism in nasopharyngeal carcinoma patients. Phys. Med. 2021, 89, 219–225. [Google Scholar] [CrossRef]
- Chao, M.; El Naqa, I.; Bakst, R.L.; Lo, Y.C.; Penagaricano, J.A. Cluster model incorporating heterogeneous dose distribution of partial parotid irradiation for radiotherapy induced xerostomia prediction with machine learning methods. Acta Oncol. 2022, 61, 842–848. [Google Scholar] [CrossRef] [PubMed]
- Zhong, L.; Dong, D.; Fang, X.; Zhang, F.; Zhang, N.; Zhang, L.; Fang, M.; Jiang, W.; Liang, S.; Li, C.; et al. A deep learning-based radiomic nomogram for prognosis and treatment decision in advanced nasopharyngeal carcinoma: A multicentre study. EBioMedicine 2021, 70, 103522. [Google Scholar] [CrossRef] [PubMed]
- Zhao, X.; Liang, Y.J.; Zhang, X.; Wen, D.X.; Fan, W.; Tang, L.Q.; Dong, D.; Tian, J.; Mai, H.Q. Deep learning signatures reveal multiscale intratumor heterogeneity associated with biological functions and survival in recurrent nasopharyngeal carcinoma. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 2972–2982. [Google Scholar] [CrossRef]
- Jiang, R.; You, R.; Pei, X.Q.; Zou, X.; Zhang, M.X.; Wang, T.M.; Sun, R.; Luo, D.H.; Huang, P.Y.; Chen, Q.Y.; et al. Development of a ten-signature classifier using a support vector machine integrated approach to subdivide the M1 stage into M1a and M1b stages of nasopharyngeal carcinoma with synchronous metastases to better predict patients’ survival. Oncotarget 2016, 7, 3645–3657. [Google Scholar] [CrossRef]
- Amin, M.B.; Greene, F.L.; Edge, S.B.; Compton, C.C.; Gershenwald, J.E.; Brookland, R.K.; Meyer, L.; Gress, D.M.; Byrd, D.R.; Winchester, D.P. The Eighth Edition AJCC Cancer Staging Manual: Continuing to build a bridge from a population-based to a more “personalized” approach to cancer staging. CA Cancer J. Clin. 2017, 67, 93–99. [Google Scholar] [CrossRef]
- Cui, C.; Wang, S.; Zhou, J.; Dong, A.; Xie, F.; Li, H.; Liu, L. Machine Learning Analysis of Image Data Based on Detailed MR Image Reports for Nasopharyngeal Carcinoma Prognosis. Biomed. Res. Int. 2020, 2020, 8068913. [Google Scholar] [CrossRef] [PubMed]
- Zhong, L.Z.; Fang, X.L.; Dong, D.; Peng, H.; Fang, M.J.; Huang, C.L.; He, B.X.; Lin, L.; Ma, J.; Tang, L.L.; et al. A deep learning MR-based radiomic nomogram may predict survival for nasopharyngeal carcinoma patients with stage T3N1M0. Radiother. Oncol. 2020, 151, 1–9. [Google Scholar] [CrossRef]
- Zhuo, E.H.; Zhang, W.J.; Li, H.J.; Zhang, G.Y.; Jing, B.Z.; Zhou, J.; Cui, C.Y.; Chen, M.Y.; Sun, Y.; Liu, L.Z.; et al. Radiomics on multi-modalities MR sequences can subtype patients with non-metastatic nasopharyngeal carcinoma (NPC) into distinct survival subgroups. Eur. Radiol. 2019, 29, 5590–5599. [Google Scholar] [CrossRef]
- Du, R.; Lee, V.H.; Yuan, H.; Lam, K.O.; Pang, H.H.; Chen, Y.; Lam, E.Y.; Khong, P.L.; Lee, A.W.; Kwong, D.L.; et al. Radiomics Model to Predict Early Progression of Nonmetastatic Nasopharyngeal Carcinoma after Intensity Modulation Radiation Therapy: A Multicenter Study. Radiol. Artif. Intell. 2019, 1, e180075. [Google Scholar] [CrossRef]
- Li, S.; Wang, K.; Hou, Z.; Yang, J.; Ren, W.; Gao, S.; Meng, F.; Wu, P.; Liu, B.; Liu, J.; et al. Use of Radiomics Combined with Machine Learning Method in the Recurrence Patterns After Intensity-Modulated Radiotherapy for Nasopharyngeal Carcinoma: A Preliminary Study. Front. Oncol. 2018, 8, 648. [Google Scholar] [CrossRef]
- Gonzalez, G.; Ash, S.Y.; Vegas-Sanchez-Ferrero, G.; Onieva Onieva, J.; Rahaghi, F.N.; Ross, J.C.; Diaz, A.; San Jose Estepar, R.; Washko, G.R.; for the COPDGene and ECLIPSE Investigators. Disease Staging and Prognosis in Smokers Using Deep Learning in Chest Computed Tomography. Am. J. Respir. Crit. Care Med. 2018, 197, 193–203. [Google Scholar] [CrossRef] [PubMed]
- Du, R.; Cao, P.; Han, L.; Ai, Q.; King, A.D.; Vardhanabhuti, V. Deep convolution neural network model for automatic risk assessment of patients with non-metastatic Nasopharyngeal carcinoma. arXiv 2019, arXiv:1907.11861. [Google Scholar]
- Qiang, M.; Li, C.; Sun, Y.; Sun, Y.; Ke, L.; Xie, C.; Zhang, T.; Zou, Y.; Qiu, W.; Gao, M.; et al. A Prognostic Predictive System Based on Deep Learning for Locoregionally Advanced Nasopharyngeal Carcinoma. J. Natl. Cancer Inst. 2021, 113, 606–615. [Google Scholar] [CrossRef] [PubMed]
- Zhang, L.; Wu, X.; Liu, J.; Zhang, B.; Mo, X.; Chen, Q.; Fang, J.; Wang, F.; Li, M.; Chen, Z.; et al. MRI-Based Deep-Learning Model for Distant Metastasis-Free Survival in Locoregionally Advanced Nasopharyngeal Carcinoma. J. Magn. Reson. Imaging 2021, 53, 167–178. [Google Scholar] [CrossRef] [PubMed]
- Jing, B.; Deng, Y.; Zhang, T.; Hou, D.; Li, B.; Qiang, M.; Liu, K.; Ke, L.; Li, T.; Sun, Y.; et al. Deep learning for risk prediction in patients with nasopharyngeal carcinoma using multi-parametric MRIs. Comput. Methods Programs Biomed. 2020, 197, 105684. [Google Scholar] [CrossRef] [PubMed]
- Chen, X.; Li, Y.; Li, X.; Cao, X.; Xiang, Y.; Xia, W.; Li, J.; Gao, M.; Sun, Y.; Liu, K.; et al. An interpretable machine learning prognostic system for locoregionally advanced nasopharyngeal carcinoma based on tumor burden features. Oral Oncol. 2021, 118, 105335. [Google Scholar] [CrossRef]
- Meng, M.; Gu, B.; Bi, L.; Song, S.; Feng, D.D.; Kim, J. DeepMTS: Deep Multi-task Learning for Survival Prediction in Patients with Advanced Nasopharyngeal Carcinoma Using Pretreatment PET/CT. IEEE J. Biomed. Health Inform. 2022, 26, 4497–4507. [Google Scholar] [CrossRef]
- Gu, B.; Meng, M.; Bi, L.; Kim, J.; Feng, D.D.; Song, S. Prediction of 5-year progression-free survival in advanced nasopharyngeal carcinoma with pretreatment PET/CT using multi-modality deep learning-based radiomics. Front. Oncol. 2022, 12, 899351. [Google Scholar] [CrossRef]
- Zhang, F.; Zhong, L.Z.; Zhao, X.; Dong, D.; Yao, J.J.; Wang, S.Y.; Liu, Y.; Zhu, D.; Wang, Y.; Wang, G.J.; et al. A deep-learning-based prognostic nomogram integrating microscopic digital pathology and macroscopic magnetic resonance images in nasopharyngeal carcinoma: A multi-cohort study. Ther. Adv. Med. Oncol. 2020, 12, 1758835920971416. [Google Scholar] [CrossRef]
- Liu, K.; Xia, W.; Qiang, M.; Chen, X.; Liu, J.; Guo, X.; Lv, X. Deep learning pathological microscopic features in endemic nasopharyngeal cancer: Prognostic value and protentional role for individual induction chemotherapy. Cancer Med. 2020, 9, 1298–1306. [Google Scholar] [CrossRef]
- Chen, Y.; Wang, Z.; Li, H.; Li, Y. Integrative Analysis Identified a 6-miRNA Prognostic Signature in Nasopharyngeal Carcinoma. Front. Cell Dev. Biol. 2021, 9, 661105. [Google Scholar] [CrossRef] [PubMed]
- Zhao, S.; Dong, X.; Ni, X.; Li, L.; Lu, X.; Zhang, K.; Gao, Y. Exploration of a Novel Prognostic Risk Signature and Its Effect on the Immune Response in Nasopharyngeal Carcinoma. Front. Oncol. 2021, 11, 709931. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Q.; Wu, G.; Yang, Q.; Dai, G.; Li, T.; Chen, P.; Li, J.; Huang, W. Survival rate prediction of nasopharyngeal carcinoma patients based on MRI and gene expression using a deep neural network. Cancer Sci. 2023, 114, 1596–1605. [Google Scholar] [CrossRef]
- Antoniadi, A.M.; Du, Y.; Guendouz, Y.; Wei, L.; Mazo, C.; Becker, B.A.; Mooney, C. Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review. Appl. Sci. 2021, 11, 5088. [Google Scholar] [CrossRef]
- Pawlik, M.; Hutter, T.; Kocher, D.; Mann, W.; Augsten, N. A Link is not Enough—Reproducibility of Data. Datenbank-Spektrum 2019, 19, 107–115. [Google Scholar] [CrossRef]
- Hamamoto, R.; Suvarna, K.; Yamada, M.; Kobayashi, K.; Shinkai, N.; Miyake, M.; Takahashi, M.; Jinnai, S.; Shimoyama, R.; Sakai, A.; et al. Application of Artificial Intelligence Technology in Oncology: Towards the Establishment of Precision Medicine. Cancers 2020, 12, 3532. [Google Scholar] [CrossRef] [PubMed]
- van de Wiel, M.A.; Neerincx, M.; Buffart, T.E.; Sie, D.; Verheul, H.M. ShrinkBayes: A versatile R-package for analysis of count-based sequencing data in complex study designs. BMC Bioinform. 2014, 15, 116. [Google Scholar] [CrossRef]
- Cheng, K.; Wang, N.; Shi, W.; Zhan, Y. Research Advances in the Interpretability of Deep Learning. J. Comput. Res. Dev. 2020, 57, 1208–1217. [Google Scholar]
- Lovis, C. Unlocking the Power of Artificial Intelligence and Big Data in Medicine. J. Med. Internet Res. 2019, 21, e16607. [Google Scholar] [CrossRef]
- Li, J.; Tian, Y.; Zhu, Y.; Zhou, T.; Li, J.; Ding, K.; Li, J. A multicenter random forest model for effective prognosis prediction in collaborative clinical research network. Artif. Intell. Med. 2020, 103, 101814. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).