Systematic Review

Risk Prediction Models for Oral Cancer: A Systematic Review

Department of Public Health and Primary Care, University of Cambridge, Cambridge CB2 0SR, UK
* Author to whom correspondence should be addressed.
Cancers 2024, 16(3), 617; https://doi.org/10.3390/cancers16030617
Submission received: 25 November 2023 / Revised: 24 January 2024 / Accepted: 26 January 2024 / Published: 31 January 2024
(This article belongs to the Section Systematic Review or Meta-Analysis in Cancer Research)

Simple Summary

Oral cancer is among the twenty most common cancers worldwide. Finding and treating this cancer early improves survival rates. Screening the whole population to check for oral cancer is unlikely to be an efficient use of resources; however, screening only individuals at higher risk has been shown to reduce oral cancer deaths and be cost-effective for healthcare services. Mathematical models have previously been developed to identify these high-risk groups; however, it is not known whether any of these would be suitable for use in clinical practice. In this study, we identified and compared previously published models. We found several that had potential, but only two had been tested outside the original study population. We suggest that future research should focus on (a) testing how well the models identify those at high risk within potential screening populations and (b) assessing how the models might be included within the healthcare systems.

Abstract

In the last 30 years, there has been an increasing incidence of oral cancer worldwide. Earlier detection of oral cancer has been shown to improve survival rates. However, given the relatively low prevalence of this disease, population-wide screening is likely to be inefficient. Risk prediction models could be used to target screening to those at highest risk or to select individuals for preventative interventions. This review (a) systematically identified published models that predict the development of oral cancer and are suitable for use in the general population and (b) described and compared the identified models, focusing on their development, including risk factors, performance and applicability to risk-stratified screening. A search was carried out in November 2022 in the Medline, Embase and Cochrane Library databases to identify primary research papers that report the development or validation of models predicting the risk of developing oral cancer (cancers of the oral cavity or oropharynx). The PROBAST tool was used to evaluate the risk of bias in the identified studies and the applicability of the models they describe. The search identified 11,222 articles, of which 14 studies (describing 23 models) satisfied the eligibility criteria of this review. The most commonly included risk factors were age (n = 20), alcohol consumption (n = 18) and smoking (n = 17). Six of the included models incorporated genetic information and three used biomarkers as predictors. Including information on human papillomavirus status was shown to improve model performance; however, this was only included in a small number of models. Most of the identified models (n = 13) showed good or excellent discrimination (AUROC > 0.7). Only fourteen models had been validated and only two of these validations were carried out in populations distinct from the model development population (external validation). Conclusions: Several risk prediction models have been identified that could be used to identify individuals at the highest risk of oral cancer within the context of screening programmes. However, external validation of these models in the target population is required, followed by an assessment of the feasibility of implementation within a risk-stratified screening programme for oral cancer.

1. Introduction

Oral cavity and lip cancer together accounted for the 16th highest rate of both cancer incidence and mortality globally in 2020, with 377,713 new cases and 177,757 deaths recorded that year [1]. Additionally, the prevalence of oral cancer varies between countries [2,3], with around two-thirds of cases occurring in lower- to middle-income countries [4]. Between 1990 and 2017, the global age-standardised incidence rate increased from 4.41 to 4.84 per 100,000 person-years, while the age-standardised rates of mortality (2.4 per 100,000 person-years) and disability-adjusted life years (64.0 per 100,000 person-years) remained unchanged [5]. The increasing incidence and stable mortality rates suggest an improvement in treatment strategies, potentially partially due to advances in surgical techniques [5,6]. Early detection with appropriate treatment has been shown to improve overall survival (a 90% five-year survival rate in those with oral potentially malignant disorders [OPMD] who had follow-up, compared with 56% in patients without known OPMD) as well as to lower the rate of recurrence [7,8,9,10,11,12]. Currently, most patients (over 60%) are diagnosed at later stages of the disease when it has spread [13], highlighting the difficulty of diagnosing this cancer at early stages, when it is often asymptomatic, especially in populations where routine dental check-ups are not part of standard healthcare [4,9,14].
The low disease prevalence, estimated to be between 0.12 and 4.12 per 1000 in lower- and middle-income countries, means that population-wide screening is unlikely to be an efficient method to improve early detection rates of oral cancer [15,16,17]. As yet, no country has implemented a systematic national population-based screening programme for oral cancer [15]. A recent analysis investigating whether oral cancer met the criteria for a national population screening programme in the United Kingdom [18] identified a number of gaps in the current evidence, including a lack of suitable methods to identify people with high-risk lesions. It has been suggested that screening targeted towards high-risk groups may effectively reduce mortality [14,15], and there is evidence that this approach may be more cost-effective [19]. Taiwan has operated a screening programme targeting high-risk populations since 2004 [20,21], in which individuals who smoke or chew betel quid are invited to screening [21].
A prediction model could be used to assess the risk of developing oral cancer for individuals in a population and identify those at the highest risk to be targeted for screening. The ideal model for use within a risk-stratified screening programme would use risk factors easily obtainable through routine clinical practice. Further, the model would have to be shown to perform well in the target population [22,23]. A recent rapid review [24] identified a number of models that were developed to predict the risk of oral cancer; however, this was not comprehensive and it is unclear if any are suitable for use within a screening programme. This review aims to (a) systematically identify published models that predict the development of oral cancer that are suitable for use in the general population and (b) describe and compare the identified models, considering their development, including risk factors, performance and their applicability within the context of risk-stratified screening.

2. Materials and Methods

We performed a systematic review following an a priori established study protocol (PROSPERO ID: CRD42022316516).

2.1. Search Strategy

We performed a literature search in Medline, Embase and the Cochrane Library, with no language restrictions, from the beginning of the database up to November 2022, to identify studies on the development and/or validation of risk prediction models for oral cancer. A combination of the following subject headings was used: ‘oral cancer’, ‘risk factor/risk assessment/risk’ and ‘prediction/model/score’ (for full search strategy, see Table S1). The reference lists of all included articles were also manually searched to identify other relevant studies.

2.2. Selection of Included Studies

We included studies that fulfilled the following criteria: (a) is a primary research paper in a peer-reviewed journal; (b) provides a measure of risk for the development of primary oral cancer that incorporates two or more risk factors acting at an individual level; (c) is applicable to adults in the general population; (d) reports a quantitative measure of model performance. We excluded studies (a) including only specific groups of the population, for example, long-term tobacco users or patients with premalignant oral lesions, and (b) reporting models that predict disease progression.
There is no consensus on the definition of the umbrella term “oral cancer” [25], either clinically or within research. In this review, we define oral cancer as all cancers affecting the oral cavity or oropharynx. This includes cancers of the lip, oral cavity (upper- and lower-lip mucosa, tongue, gingiva, floor of the mouth, hard palate, buccal mucosa, vestibule and retromolar area) and oropharynx (ICD-10: C00-06, C09, and C10), which all have similar biology, aetiology and several common risk factors, such as smoking and alcohol consumption [26,27]. In the case where the study did not specify the type of oral cancer, we categorised the outcome as oral cavity cancer (OCC). We did not include studies with an outcome of nonspecific forms of head and neck cancer or cancer of the upper aerodigestive tract; these may include (but are not limited to) cancers of the oral cavity.

Screening Process

One reviewer (A.E.) performed the search and deduplicated the identified articles. All retrieved articles were imported into EndNote 20 (Clarivate, London, UK), which was used for citation management and deduplication [28]. Three reviewers (A.E., Z.S.P., H.H.) independently screened 10% of all articles by title and abstract (including pilot screening) in Rayyan (Rayyan Systems, Cambridge, MA, USA) [29]; all disagreements were resolved through group discussion. The remaining 90% of articles were then screened by two reviewers (A.E., Z.S.P.).
The full text was examined if a decision to exclude could not be made based on the title and abstract alone. All full texts were assessed independently by two reviewers (A.E., Z.S.P.). Disagreements between the reviewers were resolved through discussion between the two reviewers or consultation with a third reviewer (H.H., J.A.U.-S.) when a consensus could not be reached.

2.3. Data Extraction

Data extraction of all included studies was carried out independently by two reviewers (A.E., Z.S.P.) using a standardised data extraction form (Table S2). Information about the model development (population and statistical methods), the published model (risk factors included) and the performance of the model in both development and validation (including discrimination and calibration) was extracted from all included studies. The studies were classified using the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) guidelines [30,31] and we assessed both their risk of bias and applicability to a risk-stratified screening programme using the Prediction model Risk Of Bias ASsessment Tool (PROBAST) across four domains (population, risk factors, outcomes, and analysis) [32,33]. PROBAST is a comprehensive tool, published in 2019 following several rounds of expert consultation, which provides a robust assessment of the risk of bias for each individual risk model and enables the identification of areas where the overall research quality is low.
In cases where multiple models are reported for the same population and outcome (for example, different combinations of risk factors are used within a stepwise selection process) only the model with the best performance was extracted. However, all models were extracted separately for studies that report more than one distinct model: for different subgroups of the population (for example, separate models for men and women), for different outcomes (e.g., oral cancer, oropharyngeal cancer [OPC]) or using different risk factors (for example, comparing performance of a model with and without a biomarker).

3. Results

3.1. Study Selection

After duplicates were removed, the literature search identified 11,222 articles, of which 10,676 were excluded by title and abstract screening. Of the remaining 546 articles, the full-text screening excluded 533 articles. The most common reasons for exclusion were not reporting quantitative performance measures (n = 375) or not predicting the risk of developing oral cancer for individuals (n = 73) (Table S3).
One paper was identified through citation searching and included at the full-text level. The full study selection process is shown in Figure 1. Overall, 14 studies, corresponding to 23 models, are included in the data synthesis [34,35,36,37,38,39,40,41,42,43,44,45,46,47].

3.2. Model Development and Validation

A summary of the 23 models is presented in detail in Table 1 (phenotypic-only models) and Table 2 (models including genetic risk factors).
The most common outcome was OCC (n = 18) [34,35,37,38,40,41,44,45,46,47], which includes all of the models incorporating genetic information (n = 6) [35,44,45,46,47]. The remaining models were developed for OPC (n = 4) [41,43] or the composite outcome OCC or OPC (n = 2) [42]. Two studies [37,41] developed separate models for men and women.
The majority of risk models were developed or validated in populations from China (n = 9) and the United States (n = 6) (Table S4). All models were developed in case-control studies, except for one [39] that was developed in a cohort study based on a cluster randomised controlled trial (RCT) [48]. Most models were developed in populations recruited from hospital settings (n = 12) [34,35,37,38,40,42,44,45,47]. In three studies (corresponding to seven models), the cases were hospital-based and the controls were recruited from either the general population (n = 2) [43] or a combination of hospital-based and community-based populations (n = 5) [36,41]. The studies by Cheung et al. [39] and Fritsche et al. [46] developed models using cases and controls recruited from the general population.
Only three models were developed to estimate the absolute risk of developing oral cancer [39,41,43]. Most models (n = 22) were developed using logistic regression, with one developed using survival analysis (Cox proportional hazards) [39].
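To illustrate how such a model produces an individual risk estimate, the following minimal sketch (in Python) applies a logistic regression equation to a small set of risk factor values; the risk factors and coefficients are hypothetical and are not taken from any of the included models.

```python
import math

# Hypothetical coefficients on the log-odds scale, for illustration only;
# each published model reports its own intercept and coefficients.
INTERCEPT = -6.0
COEFFICIENTS = {
    "age_per_10_years": 0.45,
    "current_smoker": 0.90,
    "alcohol_units_per_week": 0.02,
}

def predicted_risk(age_years, current_smoker, alcohol_units_per_week):
    """Convert risk factor values into a modelled probability of oral cancer."""
    linear_predictor = (
        INTERCEPT
        + COEFFICIENTS["age_per_10_years"] * (age_years / 10)
        + COEFFICIENTS["current_smoker"] * int(current_smoker)
        + COEFFICIENTS["alcohol_units_per_week"] * alcohol_units_per_week
    )
    return 1 / (1 + math.exp(-linear_predictor))  # inverse logit

print(f"Estimated risk: {predicted_risk(65, True, 20):.3f}")
```

Because case-control sampling fixes the ratio of cases to controls, the intercept of a model of this kind does not generally correspond to absolute risk in a screening population unless it is recalibrated to that population's baseline incidence.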
Fourteen of the models have been validated: seven using resampling within the development population [37,39,42,46], four using a random split-sample [41], one using a nonrandom split-sample [44] and two using separate validation data [43]; the latter two were the only models validated in a population external to the development population.

3.3. Risk of Bias in Studies

All models were assessed as having a high overall risk of bias (Figure S1), with the most common issues in the population and analysis domains. In the population domain, 21 out of 23 development studies and 12 out of 14 validation studies were assessed to have a high risk of bias. The most common reason for this was the use of case-control study designs for model development (n = 22). Studies using designs less prone to bias, for example, the cohort study by Cheung et al. [39], might have scored better; however, in this case, the authors did not report clear or objective eligibility criteria for the control participants. In the analysis domain, 16 out of 23 model development studies and 7 out of 14 model validation studies were assessed to have a high risk of bias. This was most commonly due to insufficient reporting of the performance measures of the models (n = 19) and/or the use of univariable analysis (n = 17) to select the included risk factors.
In this review, a low score for concerns about applicability indicates that the model is suitable for predicting the risk of an individual developing oral cancer in the general population. However, all of the models had a high or unclear overall score for concerns regarding applicability. Most concerns were identified in the population domain, where 13 models were given a high rating (due to the use of hospital-based populations) and seven models were given an unclear rating (due to the use of mixed hospital-based and general population).

3.4. Risk Factors

Across all the included studies, 55 non-genetic risk factors were considered for inclusion in a model, of which 48 were included in at least one model and 31 were included in two or more models (Figure 2 and Table 3).
Most of the models included at least one demographic or lifestyle risk factor (n = 21) and most included two or more (n = 18). The most common demographic and lifestyle risk factors were age (n = 20), alcohol consumption (n = 18), smoking (n = 17) and education level (n = 14). Sex was included in nine of the 17 models developed using mixed-sex cohorts. Nine models also included a risk factor for a family history of any cancer [36,37,40,41,42].
Seven models included clinical risk factors, most commonly markers of oral health or reported oral habits (n = 6) [36,37,39]. These include tooth loss, recurrent oral ulceration, regular dental visits, denture wearing, mouth rinsing habits and OCC screening status (Table 3). One model included human papillomavirus (HPV) status as a variable [43]. Three models included a blood-based biomarker (serum levels of arsenic, cerium or selenium) along with demographic and lifestyle risk factors [35,38,40]; none of these biomarkers are currently used within routine clinical practice.
Six models used genetic risk factors [35,44,45,46,47], all in combination with demographic and lifestyle risk factors, and one [35] additionally included a biomarker (selenium level). The genetic models included between two and 1,119,238 single nucleotide polymorphisms (SNPs) (Table S5). Four genetic models [35,44,45,47] used small numbers of SNPs (2–7 SNPs) previously shown to be associated with biological mechanisms for oral cancer development. For example, Chung et al. (2017) [44] developed genetic risk scores (GRS) for OCC using four SNPs previously shown to be associated with oral cancer in betel quid users. The two models developed by Fritsche et al. [46] derived polygenic risk scores (PRS) using SNPs that had been shown to be associated with oral cancer in genome-wide association studies (GWAS). All six of these genetic models used logistic regression to combine genetic and phenotypic risk factors.
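As an illustration of how the simpler genetic models combine a small number of SNPs with phenotypic risk factors, the sketch below calculates an unweighted allele-count GRS and enters it into a logistic model alongside age and betel quid chewing status; the SNP names, genotypes and coefficients are hypothetical and do not reproduce any of the published models.

```python
import math

# Hypothetical risk-allele counts (0, 1 or 2) at a small set of SNPs.
genotypes = {
    "rs_hypothetical_1": 1,
    "rs_hypothetical_2": 2,
    "rs_hypothetical_3": 0,
    "rs_hypothetical_4": 1,
}

# Unweighted GRS: the total number of risk alleles carried. (A weighted PRS
# would instead sum allele counts multiplied by GWAS-derived effect sizes.)
grs = sum(genotypes.values())

# Hypothetical logistic model combining the GRS with phenotypic risk factors.
INTERCEPT, B_GRS, B_AGE, B_BETEL = -7.0, 0.35, 0.04, 1.2

def risk(grs, age_years, betel_quid_chewer):
    lp = INTERCEPT + B_GRS * grs + B_AGE * age_years + B_BETEL * int(betel_quid_chewer)
    return 1 / (1 + math.exp(-lp))

print(f"GRS = {grs}; estimated risk = {risk(grs, 55, True):.3f}")
```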

3.5. Model Performance

The most widely used measure of discrimination is the area under the receiver-operating characteristic curve (AUROC). The receiver-operating characteristic curve plots sensitivity against 1-specificity across a range of cut-off points. An AUROC of 1 indicates that, in the cohort used to test the model, the model always assigns a higher score to any individual who goes on to develop oral cancer than it assigns to any individual who does not; an AUROC of 0.5 indicates that the model performs no better than chance. A measure of discrimination, such as the AUROC or C-statistic, was reported for 18 models (Figure 3, Table 4). Reported discrimination ranged from an AUROC of <0.55 to 0.95, with heterogeneity across the models and between those within each risk factor group (Figure 3). This suggests that the difference in performance was driven more by the variation in the study designs and study populations (Table S4) than by the risk factors themselves. The two models with the highest reported discrimination were developed by Tota et al. [43] and Chung et al. [45]; they included clinical and genetic risk factors, respectively (in addition to demographic and lifestyle risk factors), and had AUROCs of 0.95 in internal validation [43] and 0.91 in the development population [45]. Two studies reported models from the same population with incrementally increasing risk factors. In the two models by Tota et al. [43], including information on HPV status improved the performance in both internal (AUROC 0.86 to 0.95) and external validation (AUROC 0.81 to 0.87). Conversely, in the case-control study by He et al. [40], with a development population of 324 oral cancer patients and 650 disease-free controls, adding a biomarker (cerium level) to a model containing demographic and lifestyle risk factors did not meaningfully improve discrimination (AUROC 0.78 with the biomarker versus 0.77 without).
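The pairwise interpretation of the AUROC described at the start of this section can be computed directly, as in the short sketch below, which estimates the statistic as the proportion of case-control pairs in which the case receives the higher score (ties counted as one half); the scores shown are invented for illustration.

```python
from itertools import product

def auroc(case_scores, control_scores):
    """Probability that a randomly chosen case scores higher than a randomly
    chosen control (ties counted as 0.5): the pairwise definition of the AUROC."""
    pairs = list(product(case_scores, control_scores))
    better = sum(1.0 if case > control else 0.5 if case == control else 0.0
                 for case, control in pairs)
    return better / len(pairs)

# Invented risk scores from a hypothetical model.
cases = [0.42, 0.61, 0.35, 0.80]     # individuals who developed oral cancer
controls = [0.10, 0.22, 0.38, 0.05]  # individuals who did not
print(f"AUROC = {auroc(cases, controls):.2f}")  # 0.94 in this toy example
```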
An odds ratio (OR) between different risk groups was reported as a measure of performance for five of the six models that included genetic risk factors (Table 4). We note the performance of the model developed by Chung et al. (2017) [44] (a GRS calculated using four SNPs, adjusted for age and betel quid chewing status), with an OR of 3.11 (95% CI: 1.21–10.67) for individuals carrying four risk alleles. The PRS developed by Fritsche et al. [46] showed limited ability to identify people at the highest risk of developing oral cancer, with ORs for the top 1% of the population of 1.63 (95% CI: 0.81–3.26) and 1.69 (95% CI: 0.61–4.71) for the two models, each developed in a different population (Table S4). The highest OR per unit score was observed in the model by Chung et al. (2019) [45] for participants with a categorical GRS of 2 (two alleles associated with oral cancer; OR: 6.12 [95% CI: 1.66–22.49]) compared with those with the lowest GRS of 0 as the reference.
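For context, an OR of this kind is computed from a simple two-by-two table comparing a high-scoring group (here, the top 1% of a risk score) with the remainder of the cohort; the counts below are invented for illustration.

```python
# Invented counts of cases and controls inside and outside the top 1% of a risk score.
cases_top, controls_top = 8, 392        # top 1% of a cohort of 40,000
cases_rest, controls_rest = 300, 39300  # remaining 99%

odds_ratio = (cases_top / controls_top) / (cases_rest / controls_rest)
print(f"OR, top 1% vs. rest: {odds_ratio:.2f}")  # about 2.7 with these counts
```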
Measures of model accuracy were reported for five models [38,42,44,45], all of which were developed in hospital-based populations (Figure S2). The reported sensitivity and specificity ranged from 69.9% to 92.8% and from 60.3% to 91.4%, respectively. The models with the highest sensitivity and specificity were Rao 2016b [42] and Chen 2022 [38], respectively. Only one study, by Rao et al. [42], reported positive and negative predictive values (PPV and NPV), with PPVs of 77.3% and 51.4%, respectively, for the internally validated multivariable model (2016a) and risk score (2016b).
A measure of calibration was reported for 15 of the 23 risk models. Nine reported quantitative measures (four of these reported AIC values, which are not directly comparable to models developed in other populations) and six reported calibration graphically. These graphical measures were reported in studies by Chen et al. (2018) [37], corresponding to two development models, and by Lee et al. [41], corresponding to four separate-sex models for OCC and OPC, which showed good calibration in internal validation except for one model (men, OPC) that overestimated the risk in the higher deciles. Only one study, Tota et al. [43], reported model calibration in an external validation, with an O/E ratio of 1.08 (compared to 1.01 for the same model in internal validation).
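The two calibration summaries most often reported by the included studies, an overall observed/expected (O/E) ratio and observed risk compared with mean predicted risk within deciles, can be computed as in the sketch below; the predicted risks and outcomes are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
predicted = rng.uniform(0.0, 0.2, size=1000)  # simulated predicted risks
observed = rng.binomial(1, predicted)         # simulated outcomes drawn from those risks

# Calibration-in-the-large: total observed events divided by total expected events.
oe_ratio = observed.sum() / predicted.sum()
print(f"O/E ratio = {oe_ratio:.2f}")

# Calibration by decile of predicted risk, as shown in the calibration plots
# reported by the included studies.
decile_indices = np.argsort(predicted).reshape(10, -1)
for i, idx in enumerate(decile_indices, start=1):
    print(f"Decile {i}: mean predicted = {predicted[idx].mean():.3f}, "
          f"observed = {observed[idx].mean():.3f}")
```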

4. Discussion

4.1. Key Findings

To our knowledge, this is the first systematic review of risk prediction models for oral cancer. We identified multiple models that have been developed to predict the risk of individuals in the general population developing oral cancer.
The identified models use a wide range of risk factors, including clinical, genetic and blood-based biomarkers in addition to demographic and lifestyle risk factors. Although the reported discrimination of the models was wide-ranging (AUROCs 0.53–0.95), we identified nine models with an AUROC > 0.7 in a validation, including two models with an AUROC > 0.8 in external validation.
The performance of the models is consistent with that found for models predicting the risk of developing other cancers in earlier systematic reviews for kidney cancer [49], breast cancer [50] and colorectal cancer [51]. As for kidney cancer [52], there has been very limited development of PRS for oral cancer, with only two models identified in this review (both developed by Fritsche et al. [46]) including a PRS. However, the studies by Chung et al. [44,45], which incorporate small numbers of SNPs and examine the interaction between genetic variants and behaviours such as betel quid chewing, have shown promising results. The externally validated models, developed by Tota et al. [43], show very promising results, especially when combining HPV status with demographic and lifestyle risk factors.
However, the heterogeneity of the study populations, model development methods and risk factors considered in the studies included in the review, and the general lack of external validations, make direct comparisons between the models or risk factors challenging.

4.2. Model Generalisability

Only two of the identified models (both developed by Tota et al.) have been externally validated [43]. Additionally, we did not identify any studies that modelled the expected impact of the models in a clinical scenario (for example, the proportion of cases a model would be able to identify within a risk-stratified screening programme). We note that, unlike in similar reviews in other cancers, most of the models identified in this review were developed in populations drawn from lower- and middle-income countries. This may reflect the relatively low prevalence of oral cancer in higher-income countries. Although the highest age-standardised incidence of oral cancer is seen in South Asian countries [5,53], some of the models we identified (eight out of 23) were developed and validated in populations from the USA and the UK, where incidence is lower. For example, the models by Tota et al. [43], which were developed and externally validated in North American populations, would require validation in the target screening populations before implementation in South Asian countries. Similarly, the PRS developed by Fritsche et al. [46] were restricted to development cohorts of European ancestry; their performance (with reported discrimination that was already poor) is therefore likely to be worse in other populations. Those models that were developed in lower-income countries with a high incidence of oral cancer (for example, the models developed by He et al. [40] and Rao et al. [42] in China and India, respectively) used small, hospital-based populations, and their generalisability to wider populations has not been tested.

4.3. Availability of Risk Factors

A key consideration for clinical use is the availability of the risk factors required to compute the models [23]. Demographic and simple lifestyle risk factors, such as age, sex or smoking status, may be available from clinical records. However, models including genetic information or biomarkers require additional resources to collect and process samples. As the incidence of oral cancer is generally highest in middle- and lower-income countries [5], a strong case would need to be made for the use of these additional resources.
The models developed by Lee et al. [41] and Tota et al. [43], which use demographic and simple lifestyle variables, such as smoking status, and perform well in validations, may be relatively easy to implement within clinical settings. However, the collection of data on lifestyle risk factors in electronic health records is typically incomplete and varies significantly between countries and healthcare systems [54], including the documentation of tobacco use [55].
Models that use more detailed information about lifestyle behaviours, such as diet or family history, would require additional data collection (for example, a questionnaire within a routine consultation). This could be a barrier to implementation, given the costs and resources required. In the studies that have been identified in this review, there is no evidence that models containing more lifestyle variables (e.g., Chen 2022 [38], which includes diet) are more predictive of an oral cancer diagnosis than those with a small number of simpler variables of this type (e.g., Tota 2019a [43], which includes smoking and alcohol consumption).
Several clinical risk factors, including indicators of oral health, such as recurrent oral ulceration and regular dental visits, and HPV status (only for OPC), have been shown to be highly associated with oral cancer in previous studies [56]. We identified seven models [36,37,39,42,43] that include clinical risk factors, four of which would require the involvement of a clinician (and for HPV status, a blood test). For the reasons described above, comparison between these models is challenging. However, Tota et al. [43] show that the addition of HPV status improves model performance (from AUROC of 0.81 [2019a] to 0.87 [2019b]), indicating that testing for HPV may be a good use of limited resources when attempting to identify those at highest risk of oral cancer.

4.4. Recommendations

Currently, none of the models identified in this review can be recommended for use within a targeted screening programme. Although we identified several promising models, only one external validation (of two models developed by Tota et al. [43]) was found, and this study was assessed to be at high risk of bias. Future research should focus on determining the performance of these models in the populations where they are intended to be used (e.g., using population cohorts recruited in countries with high oral cancer incidence where screening programmes are being considered) and assessing the feasibility of implementation within routine clinical practice.
We have shown that there is no evidence that many of the risk factors specific to oral cancer (e.g., indicators of oral health) improve model performance. Therefore, it may be reasonable to assess the identified models that used only demographic and lifestyle risk factors in cohorts that were not specifically recruited for oral cancer research. However, one key risk factor in the literature [56] that was shown to improve model performance in this review [43] is HPV status, which may not be routinely available in nonspecific cohorts.

4.5. Strengths and Limitations

The systematic search of multiple electronic databases ensured good coverage of the existing literature for oral cancer risk prediction. However, our search was last updated in November 2022, so any articles published after this date are not included in this review. We further note the overlap with the recently published rapid review of risk prediction models for head and neck cancer [24]; some of the models (n = 11) were identified in both reviews. However, we identified 12 additional models, including all six of the models containing genetic risk factors, that were not included in the rapid review (Table S6).
We limited our review to English-language papers; previous research has shown that this has minimal impact on the results of systematic review-based meta-analyses [57]. Nevertheless, the identified literature is more likely to overrepresent research carried out in English-speaking countries [58,59]. Ten non-English-language articles were excluded in this review (Figure 1, Table S3); nearly all were conducted in countries with a high incidence of oral cancer.
The use of the PROBAST tool to assess the risk of bias enables the comparison of the identified studies across a range of different characteristics (including population and outcome), permitting not only an assessment of the quality of individual studies but also the identification of areas of weakness in the field as a whole.
The well-established inconsistency in the definition of oral cancer in the literature [25] was also seen in this study and makes it challenging to draw direct comparisons between models that were developed for slightly different outcomes. For example, Fritsche et al. [46] included lip, oral cavity and pharynx as a single outcome for one of their models and only tongue cancer for the other. However, the aetiology, epidemiology and presentation of most oral cancers are similar [26,60], although there is a particularly strong association between OPC and HPV [61], and we did not identify any differences in performance between, for example, models developed for oral cavity cancer and those developed for oropharyngeal cancer. Uncertainty about outcome definition (and other areas of heterogeneity) could be resolved by carrying out a head-to-head validation of all identified models in a single cohort for the same outcome.

5. Conclusions

This review has identified 23 risk prediction models for oral cancer with reported performance measures. Although two models by Tota et al. [43] were found to perform well (AUROC > 0.8) in external validation, all the identified studies had a high risk of bias and none of the models could be used in targeted screening programmes without further validation. Given the heterogeneity of the studies, we identify a need for high-quality external validations that compare the performance of existing models, alongside public health modelling to assess their potential application to risk-stratified screening; we suggest prioritising this work over further model development. There is currently no evidence that would justify the additional resources needed to include biomarkers or genetics in an assessment of oral cancer risk, although we note that there is limited research in these areas at present and that this may change in the future.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/cancers16030617/s1, Figure S1: Risk of bias of the included studies; Table S1: Search strategy; Figure S2: Reported accuracy of the included models; Table S2: Extraction form; Table S3: List of excluded articles; Table S4: Summary of development and validation studies; Table S5: List of identified SNPs; Table S6: Identified models that were not included in the rapid review publication.

Author Contributions

Conceptualization, H.H., J.A.U.-S. and A.E.; methodology, H.H., J.A.U.-S. and A.E.; formal analysis, H.H., J.A.U.-S., A.E. and Z.S.P.; resources, A.E. and Z.S.P.; data curation, H.H., J.A.U.-S., A.E. and Z.S.P.; writing—original draft preparation, H.H., J.A.U.-S., A.E. and Z.S.P.; writing—review and editing, H.H., J.A.U.-S., A.E. and Z.S.P.; visualisation, A.E.; supervision, H.H. and J.A.U.-S.; project administration, A.E.; funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

H.H. was funded by an International Alliance for Cancer Early Detection Project Award (ACEDFR3_0620I135PR007) and is now supported by CRUK International Alliance for Cancer Early Detection (ACED) Pathway Award (EDDAPA-2022/100001). J.A.U.-S. is funded by an NIHR Advanced Fellowship (NIHR300861). The views expressed are those of the author(s) and not necessarily those of CRUK, the NIHR or the Department of Health and Social Care.

Data Availability Statement

Data about the systematic review are available upon request from the corresponding author. Details of the individual models are available in the original studies.

Acknowledgments

We would like to thank Isla Kuhn for her help in developing the search strategy.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA A Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef]
  2. Chapple, I.L.C.; Papapanou, P.N. Risk Assessment in Oral Health: A Concise Guide for Clinical Application; Springer Nature: Berlin/Heidelberg, Germany, 2020; ISBN 978-3-030-38647-4. [Google Scholar]
  3. García-Martín, J.M.; Varela-Centelles, P.; González, M.; Seoane-Romero, J.M.; Seoane, J.; García-Pola, M.J. Epidemiology of Oral Cancer. In Oral Cancer Detection: Novel Strategies and Clinical Impact; Panta, P., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 81–93. ISBN 978-3-319-61255-3. [Google Scholar]
  4. Sankaranarayanan, R.; Ramadas, K.; Amarasinghe, H.; Subramanian, S.; Johnson, N. Oral Cancer: Prevention, Early Detection, and Treatment. In Cancer: Disease Control Priorities, Third Edition (Volume 3); Gelband, H., Jha, P., Sankaranarayanan, R., Horton, S., Eds.; The International Bank for Reconstruction and Development/The World Bank: Washington, DC, USA, 2015; ISBN 978-1-4648-0349-9. [Google Scholar]
  5. Ren, Z.-H.; Hu, C.-Y.; He, H.-R.; Li, Y.-J.; Lyu, J. Global and Regional Burdens of Oral Cancer from 1990 to 2017: Results from the Global Burden of Disease Study. Cancer Commun. 2020, 40, 81–92. [Google Scholar] [CrossRef]
  6. van Dijk, B.A.C.; Brands, M.T.; Geurts, S.M.E.; Merkx, M.A.W.; Roodenburg, J.L.N. Trends in Oral Cavity Cancer Incidence, Mortality, Survival and Treatment in the Netherlands. Int. J. Cancer 2016, 139, 574–583. [Google Scholar] [CrossRef] [PubMed]
  7. Thavarool, S.B.; Muttath, G.; Nayanar, S.; Duraisamy, K.; Bhat, P.; Shringarpure, K.; Nayak, P.; Tripathy, J.P.; Thaddeus, A.; Philip, S.; et al. Improved Survival among Oral Cancer Patients: Findings from a Retrospective Study at a Tertiary Care Cancer Centre in Rural Kerala, India. World J. Surg. Oncol. 2019, 17, 15. [Google Scholar] [CrossRef] [PubMed]
  8. Jäwert, F.; Nyman, J.; Olsson, E.; Adok, C.; Helmersson, M.; Öhman, J. Regular Clinical Follow-up of Oral Potentially Malignant Disorders Results in Improved Survival for Patients Who Develop Oral Cancer. Oral Oncol. 2021, 121, 105469. [Google Scholar] [CrossRef] [PubMed]
  9. Nagao, T.; Warnakulasuriya, S. Screening for Oral Cancer: Future Prospects, Research and Policy Development for Asia. Oral Oncol. 2020, 105, 104632. [Google Scholar] [CrossRef]
  10. Crossman, T.; Warburton, F.; Richards, M.A.; Smith, H.; Ramirez, A.; Forbes, L.J.L. Role of General Practice in the Diagnosis of Oral Cancer. Br. J. Oral Maxillofac. Surg. 2016, 54, 208–212. [Google Scholar] [CrossRef] [PubMed]
  11. Sankaranarayanan, R.; Ramadas, K.; Thara, S.; Muwonge, R.; Thomas, G.; Anju, G.; Mathew, B. Long Term Effect of Visual Screening on Oral Cancer Incidence and Mortality in a Randomized Trial in Kerala, India. Oral Oncol. 2013, 49, 314–321. [Google Scholar] [CrossRef] [PubMed]
  12. Borggreven, P.A.; Aaronson, N.K.; Verdonck-de Leeuw, I.M.; Muller, M.J.; Heiligers, M.L.C.H.; de Bree, R.; Langendijk, J.A.; Leemans, C.R. Quality of Life after Surgical Treatment for Oral and Oropharyngeal Cancer: A Prospective Longitudinal Assessment of Patients Reconstructed by a Microvascular Flap. Oral Oncol. 2007, 43, 1034–1042. [Google Scholar] [CrossRef]
  13. Surveillance Research Program, National Cancer Institute. Cancer Stat Facts: Oral Cavity and Pharynx Cancer. Available online: https://seer.cancer.gov/statfacts/html/oralcav.html (accessed on 19 September 2023).
  14. Warnakulasuriya, S.; Kerr, A.R. Oral Cancer Screening: Past, Present, and Future. J. Dent. Res. 2021, 100, 1313–1320. [Google Scholar] [CrossRef]
  15. Speight, P.M.; Epstein, J.; Kujan, O.; Lingen, M.W.; Nagao, T.; Ranganathan, K.; Vargas, P. Screening for Oral Cancer—A Perspective from the Global Oral Cancer Forum. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2017, 123, 680–687. [Google Scholar] [CrossRef]
  16. Brocklehurst, P.R.; Speight, P.M. Screening for Mouth Cancer: The Pros and Cons of a National Programme. Br. Dent. J. 2018, 225, 815–819. [Google Scholar] [CrossRef]
  17. Shrestha, A.D.; Vedsted, P.; Kallestrup, P.; Neupane, D. Prevalence and Incidence of Oral Cancer in Low- and Middle-Income Countries: A Scoping Review. Eur. J. Cancer Care 2020, 29, e13207. [Google Scholar] [CrossRef]
  18. UK National Screening Committee Criteria for Appraising the Viability, Effectiveness and Appropriateness of a Screening Programme. Available online: https://www.gov.uk/government/publications/evidence-review-criteria-national-screening-programmes/criteria-for-appraising-the-viability-effectiveness-and-appropriateness-of-a-screening-programme (accessed on 9 February 2022).
  19. D’Cruz, A.K.; Vaish, R. Risk-Based Oral Cancer Screening—Lessons to Be Learnt. Nat. Rev. Clin. Oncol. 2021, 18, 471–472. [Google Scholar] [CrossRef]
  20. Hung, L.-C.; Kung, P.-T.; Lung, C.-H.; Tsai, M.-H.; Liu, S.-A.; Chiu, L.-T.; Huang, K.-H.; Tsai, W.-C. Assessment of the Risk of Oral Cancer Incidence in A High-Risk Population and Establishment of A Predictive Model for Oral Cancer Incidence Using A Population-Based Cohort in Taiwan. Int. J. Environ. Res. Public Health 2020, 17, 665. [Google Scholar] [CrossRef] [PubMed]
  21. Chuang, S.-L.; Su, W.W.-Y.; Chen, S.L.-S.; Yen, A.M.-F.; Wang, C.-P.; Fann, J.C.-Y.; Chiu, S.Y.-H.; Lee, Y.-C.; Chiu, H.-M.; Chang, D.-C.; et al. Population-Based Screening Program for Reducing Oral Cancer Mortality in 2,334,299 Taiwanese Cigarette Smokers and/or Betel Quid Chewers. Cancer 2017, 123, 1597–1609. [Google Scholar] [CrossRef] [PubMed]
  22. Park, Y. Predicting Cancer Risk: Practical Considerations in Developing and Validating a Cancer Risk Prediction Model. Curr. Epidemiol. Rep. 2015, 2, 197–204. [Google Scholar] [CrossRef]
  23. Colditz, G.A.; Wei, E.K. Risk Prediction Models: Applications in Cancer Prevention. Curr. Epidemiol. Rep. 2015, 2, 245–250. [Google Scholar] [CrossRef]
  24. Smith, C.D.L.; McMahon, A.D.; Ross, A.; Inman, G.J.; Conway, D.I. Risk Prediction Models for Head and Neck Cancer: A Rapid Review. Laryngoscope Investig. Otolaryngol. 2022, 7, 1893–1908. [Google Scholar] [CrossRef] [PubMed]
  25. Tapia, J.L.; Goldberg, L.J. The Challenges of Defining Oral Cancer: Analysis of an Ontological Approach. Head Neck Pathol. 2011, 5, 376–384. [Google Scholar] [CrossRef] [PubMed]
  26. Ariyawardana, A.; Johnson, N.W. Trends of Lip, Oral Cavity and Oropharyngeal Cancers in Australia 1982–2008: Overall Good News but with Rising Rates in the Oropharynx. BMC Cancer 2013, 13, 333. [Google Scholar] [CrossRef]
  27. ICD-10 Version: 2010. Available online: https://icd.who.int/browse10/2010/en#/ (accessed on 3 July 2022).
  28. Bramer, W.M.; Giustini, D.; de Jonge, G.B.; Holland, L.; Bekhuis, T. De-Duplication of Database Search Results for Systematic Reviews in EndNote. J. Med. Libr. Assoc. 2016, 104, 240–243. [Google Scholar] [CrossRef] [PubMed]
  29. Harrison, H.; Griffin, S.J.; Kuhn, I.; Usher-Smith, J.A. Software Tools to Support Title and Abstract Screening for Systematic Reviews in Healthcare: An Evaluation. BMC Med. Res. Methodol. 2020, 20, 7. [Google Scholar] [CrossRef] [PubMed]
  30. Collins, G.S.; Reitsma, J.B.; Altman, D.G.; Moons, K.G. Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD): The TRIPOD Statement. BMC Med. 2015, 13, 1. [Google Scholar] [CrossRef]
  31. Collins, G.S.; Reitsma, J.B.; Altman, D.G.; Moons, K.G.M. Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD): The TRIPOD Statement. Ann. Intern. Med. 2015, 162, 55–63. [Google Scholar] [CrossRef] [PubMed]
  32. Moons, K.G.M.; Wolff, R.F.; Riley, R.D.; Whiting, P.F.; Westwood, M.; Collins, G.S.; Reitsma, J.B.; Kleijnen, J.; Mallett, S. PROBAST: A Tool to Assess Risk of Bias and Applicability of Prediction Model Studies: Explanation and Elaboration. Ann. Intern. Med. 2019, 170, W1. [Google Scholar] [CrossRef] [PubMed]
  33. Wolff, R.F.; Moons, K.G.M.; Riley, R.D.; Whiting, P.F.; Westwood, M.; Collins, G.S.; Reitsma, J.B.; Kleijnen, J.; Mallett, S. PROBAST: A Tool to Assess the Risk of Bias and Applicability of Prediction Model Studies. Ann. Intern. Med. 2019, 170, 51–58. [Google Scholar] [CrossRef] [PubMed]
  34. Antunes, J.L.F.; Toporcov, T.N.; Biazevic, M.G.; Boing, A.F.; Scully, C.; Petti, S. Joint and Independent Effects of Alcohol Drinking and Tobacco Smoking on Oral Cancer: A Large Case-Control Study. PLoS ONE 2013, 8, e68132. [Google Scholar] [CrossRef]
  35. Bao, X.; Yan, L.; Lin, J.; Chen, Q.; Chen, L.; Zhuang, Z.; Wang, R.; Hong, Y.; Qian, J.; Wang, J.; et al. Selenoprotein Genetic Variants May Modify the Association between Serum Selenium and Oral Cancer Risk. Oral Dis. 2020, 26, 1141–1148. [Google Scholar] [CrossRef]
  36. Chen, F.; Yan, L.; Lin, L.; Liu, F.; Qiu, Y.; Wang, J.; Wu, J.; Liu, F.; Huang, J.; Cai, L.; et al. Dietary Score and the Risk of Oral Cancer: A Case-Control Study in Southeast China. Oncotarget 2017, 8, 34610–34616. [Google Scholar] [CrossRef]
  37. Chen, F.; Lin, L.; Yan, L.; Liu, F.; Qiu, Y.; Wang, J.; Hu, Z.; Wu, J.; Bao, X.; Lin, L.; et al. Nomograms and Risk Scores for Predicting the Risk of Oral Cancer in Different Sexes: A Large-Scale Case-Control Study. J. Cancer 2018, 9, 2543–2548. [Google Scholar] [CrossRef]
  38. Chen, Q.; Qiu, Y.; Chen, L.; Lin, J.; Yan, L.J.; Bao, X.D.; Lin, L.S.; Pan, L.Z.; Shi, B.; Zheng, X.Y.; et al. Association between Serum Arsenic and Oral Cancer Risk: A Case-Control Study in Southeast China. Community Dent. Oral Epidemiol. 2022, 50, 83–90. [Google Scholar] [CrossRef]
  39. Cheung, L.C.; Ramadas, K.; Muwonge, R.; Katki, H.A.; Thomas, G.; Graubard, B.I.; Basu, P.; Sankaranarayanan, R.; Somanathan, T.; Chaturvedi, A.K. Risk-Based Selection of Individuals for Oral Cancer Screening. J. Clin. Oncol. 2021, 39, 663–674. [Google Scholar] [CrossRef] [PubMed]
  40. He, B.; Wang, J.; Lin, J.; Chen, J.; Zhuang, Z.; Hong, Y.; Yan, L.; Lin, L.; Shi, B.; Qiu, Y.; et al. Association Between Rare Earth Element Cerium and the Risk of Oral Cancer: A Case-Control Study in Southeast China. Front. Public Health 2021, 9, 647120. [Google Scholar] [CrossRef] [PubMed]
  41. Lee, Y.C.A.; Al-Temimi, M.; Ying, J.; Muscat, J.; Olshan, A.F.; Zevallos, J.P.; Winn, D.M.; Li, G.; Sturgis, E.M.; Morgenstern, H.; et al. Risk Prediction Models for Head and Neck Cancer in the US Population from the INHANCE Consortium. Am. J. Epidemiol. 2020, 189, 330–342. [Google Scholar] [CrossRef]
  42. Krishna Rao, S.; Mejia, G.C.; Logan, R.M.; Kulkarni, M.; Kamath, V.; Fernandes, D.J.; Ray, S.; Roberts-Thomson, K. A Screening Model for Oral Cancer Using Risk Scores: Development and Validation. Community Dent. Oral Epidemiol. 2016, 44, 76–84. [Google Scholar] [CrossRef]
  43. Tota, J.E.; Gillison, M.L.; Katki, H.A.; Kahle, L.; Pickard, R.K.; Xiao, W.; Jiang, B.; Graubard, B.I.; Chaturvedi, A.K. Development and Validation of an Individualized Risk Prediction Model for Oropharynx Cancer in the US Population. Cancer 2019, 125, 4407–4416. [Google Scholar] [CrossRef] [PubMed]
  44. Chung, C.-M.; Lee, C.-H.; Chen, M.-K.; Lee, K.-W.; Lan, C.-C.E.; Kwan, A.-L.; Tsai, M.-H.; Ko, Y.-C. Combined Genetic Biomarkers and Betel Quid Chewing for Identifying High-Risk Group for Oral Cancer Occurrence. Cancer Prev. Res. 2017, 10, 355–362. [Google Scholar] [CrossRef] [PubMed]
  45. Chung, C.M.; Hung, C.C.; Lee, C.H.; Lee, C.P.; Lee, K.W.; Chen, M.K.; Yeh, K.T.; Ko, Y.C. Variants in FAT1 and COL9A1 Genes in Male Population with or without Substance Use to Assess the Risk Factors for Oral Malignancy. PLoS ONE 2019, 14, e0210901. [Google Scholar] [CrossRef] [PubMed]
  46. Fritsche, L.G.; Patil, S.; Beesley, L.J.; VandeHaar, P.; Salvatore, M.; Ma, Y.; Peng, R.B.; Taliun, D.; Zhou, X.; Mukherjee, B. Cancer PRSweb: An Online Repository with Polygenic Risk Scores for Major Cancer Traits and Their Evaluation in Two Independent Biobanks. Am. J. Hum. Genet. 2020, 107, 815–836. [Google Scholar] [CrossRef] [PubMed]
  47. Miao, L.; Wang, L.; Zhu, L.; Du, J.; Zhu, X.; Niu, Y.; Wang, R.; Hu, Z.; Chen, N.; Shen, H.; et al. Association of microRNA Polymorphisms with the Risk of Head and Neck Squamous Cell Carcinoma in a Chinese Population: A Case-Control Study. Chin. J. Cancer 2016, 35, 77. [Google Scholar] [CrossRef] [PubMed]
  48. Sankaranarayanan, R.; Ramadas, K.; Thomas, G.; Muwonge, R.; Thara, S.; Mathew, B.; Rajan, B. Effect of Screening on Oral Cancer Mortality in Kerala, India: A Cluster-Randomised Controlled Trial. Lancet 2005, 365, 1927–1933. [Google Scholar] [CrossRef] [PubMed]
  49. Harrison, H.; Thompson, R.E.; Lin, Z.; Rossi, S.H.; Stewart, G.D.; Griffin, S.J.; Usher-Smith, J.A. Risk Prediction Models for Kidney Cancer: A Systematic Review. Eur. Urol. Focus 2020, 7, 1380–1390. [Google Scholar] [CrossRef] [PubMed]
  50. Kim, G.; Bahl, M. Assessing Risk of Breast Cancer: A Review of Risk Prediction Models. J. Breast Imaging 2021, 3, 144–155. [Google Scholar] [CrossRef] [PubMed]
  51. Usher-Smith, J.A.; Walter, F.M.; Emery, J.D.; Win, A.K.; Griffin, S.J. Risk Prediction Models for Colorectal Cancer: A Systematic Review. Cancer Prev. Res. 2016, 9, 13–26. [Google Scholar] [CrossRef] [PubMed]
  52. Harrison, H.; Li, N.; Saunders, C.L.; Rossi, S.H.; Dennis, J.; Griffin, S.J.; Stewart, G.D.; Usher-Smith, J.A. The Current State of Genetic Risk Models for the Development of Kidney Cancer: A Review and Validation. BJU Int. 2022, 130, 550–561. [Google Scholar] [CrossRef] [PubMed]
  53. Zhang, S.-Z.; Xie, L.; Shang, Z.-J. Burden of Oral Cancer on the 10 Most Populous Countries from 1990 to 2019: Estimates from the Global Burden of Disease Study 2019. Int. J. Environ. Res. Public Health 2022, 19, 875. [Google Scholar] [CrossRef]
  54. Chen, M.; Tan, X.; Padman, R. Social Determinants of Health in Electronic Health Records and Their Impact on Analysis and Risk Prediction: A Systematic Review. J. Am. Med. Inform. Assoc. 2020, 27, 1764–1773. [Google Scholar] [CrossRef]
  55. Chen, E.S.; Carter, E.W.; Sarkar, I.N.; Winden, T.J.; Melton, G.B. Examining the Use, Contents, and Quality of Free-Text Tobacco Use Documentation in the Electronic Health Record. AMIA Annu. Symp. Proc. 2014, 2014, 366–374. [Google Scholar]
  56. Conway, D.I.; Purkayastha, M.; Chestnutt, I.G. The Changing Epidemiology of Oral Cancer: Definitions, Trends, and Risk Factors. Br. Dent. J. 2018, 225, 867–873. [Google Scholar] [CrossRef]
  57. Morrison, A.; Polisena, J.; Husereau, D.; Moulton, K.; Clark, M.; Fiander, M.; Mierzwinski-Urban, M.; Clifford, T.; Hutton, B.; Rabb, D. The Effect of English-Language Restriction on Systematic Review-Based Meta-Analyses: A Systematic Review of Empirical Studies. Int. J. Technol. Assess. Health Care 2012, 28, 138–144. [Google Scholar] [CrossRef] [PubMed]
  58. Boutron, I.; Page, M.J.; Higgins, J.P.T.; Altman, D.G.; Lundh, A.; Hróbjartsson, A. Chapter 7: Considering Bias and Conflicts of Interest among the Included Studies. In Cochrane Handbook for Systematic Reviews of Interventions; Cochrane: Spokane, WA, USA, 2021. [Google Scholar]
  59. Dobrescu, A.; Nussbaumer-Streit, B.; Klerings, I.; Wagner, G.; Persad, E.; Sommer, I.; Herkner, H.; Gartlehner, G. Restricting Evidence Syntheses of Interventions to English-Language Publications Is a Viable Methodological Shortcut for Most Medical Topics: A Systematic Review. J. Clin. Epidemiol. 2021, 137, 209–217. [Google Scholar] [CrossRef] [PubMed]
  60. Warnakulasuriya, S.; Cain, N. Screening for Oral Cancer: Contributing to the Debate. J. Investig. Clin. Dent. 2011, 2, 2–9. [Google Scholar] [CrossRef]
  61. Chaturvedi, A.K. Epidemiology and Clinical Aspects of HPV in Head and Neck Cancers. Head Neck Pathol. 2012, 6 (Suppl. S1), S16–S24. [Google Scholar] [CrossRef]
Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram.
Figure 2. Considered and included risk factors in each model [34,35,36,37,38,39,40,41,42,43,44,45,46,47].
Figure 3. The reported area under the receiver operating characteristic curve (AUROC) values for the included models [34,35,36,37,38,39,40,41,42,43,44,45,46,47,48].
Table 1. Summary of risk prediction models (excluding models incorporating genetic variants).
First Author, Year | Country | Outcome a | Age | Sex | Alcohol Consumption | Smoking | Clinical Risk Factors | Biomarkers | Other Lifestyle Risk Factors | Study Type | Study Setting | Tripod Level b | Reported Performance Measures | Overall Risk of Bias (Development) | Overall Risk of Bias (Validation)
Antunes, 2013 [34] BrazilOCC, OPCXXXX CCHospital-based1aR2HighNot applicable
Bao, 2020a [35]ChinaOCCXXXX XXCCHospital-based1aAICHighNot applicable
Chen, 2017 [36]ChinaOCC XXX XCCHospital-based cases and mixed controls1aAUROCHighNot applicable
Chen, 2018a [37]ChinaOCC (male)X XXX XCCHospital-based1bAUROC and calibration plotHighHigh
Chen, 2018b [37]ChinaOCC (female)X X XCCHospital-based1bAUROC and calibration plotHighHigh
Chen, 2022 [38] ChinaOCCXXXX XXCCHospital-based1aSens, SpecHighNot applicable
Cheung, 2021 [39]IndiaOCCXXXXX XCohort study based on a cluster RCTGeneral population1bAUROC, O/E ratioHighHigh
He, 2021a [40]ChinaOCCXXXX XCCHospital-based1aAUROC, AICHighNot applicable
He, 2021b [40]ChinaOCCXXXX XXCCHospital-based1aAUROC, AICHighNot applicable
Lee, 2020a [41]United StatesOCC (male)X XX XCCHospital-based cases and mixed controls2aAUROC without CIHighHigh
Lee, 2020b [41]United StatesOCC (female)X XX CCHospital-based cases and mixed controls2aAUROC without CIHighHigh
Lee, 2020c [41]United StatesOPC (male)X XX CCHospital-based cases and mixed controls2aAUROC without CIHighHigh
Lee, 2020d [41]United StatesOPC (female)X XX CCHospital-based cases and mixed controls2aAUROC without CIHighHigh
Rao, 2016a [42]IndiaOCC, OPCX XXX XCCHospital-based1bAUROC, Sens, Spec, PPV, NPVHighHigh
Rao, 2016b [42]IndiaOCC, OPCX XXX XCCHospital-based1bAUROC, Sens, Spec, PPV, NPVHighHigh
Tota, 2019a [43] United StatesOPCXXXX XCCHospital-based cases and population-based controls3AUROC, O/E ratio in IV and EVHighHigh
Tota, 2019b [43]United StatesOPCXXXXX XCCHospital-based cases and population-based controls3AUROC, O/E ratio in IV and EVHighHigh
Abbreviations: AIC, Akaike Information Criterion; AUROC, area under the receiver operating characteristic curve; CC, case-control; CI, confidence interval; EV, external validation; IV, internal validation; NPV, negative predictive value; OCC, oral cavity cancer; O/E, observed/expected; OPC, oropharyngeal cancer; PPV, positive predictive value; RCT, randomised controlled trial; Sens, sensitivity; Spec, specificity. a Each prediction model is for either a single or combined outcome. b Classification of prediction model according to the TRIPOD guidelines [30,31]: 1a, development only; 1b, development and validation using resampling; 2a, random split-sample development and validation; 3, development and validation using separate data.
Table 2. Summary of risk prediction models incorporating genetic variants.
First Author, Year | Country | Outcome a | Genetic Factors | Non-Genetic Risk Factors | Study Type | Study Setting | Tripod Level b | Reported Performance Measures | Overall Risk of Bias (Development) | Overall Risk of Bias (Validation)
Bao, 2020b [35] | China | OCC | 7 SNP c-constructed GRS | Selenium level | CC | Hospital-based | 1a | AIC, OR | High | Not applicable
Chung, 2017 [44] | Taiwan | OCC | 4 SNPs d | Age and betel quid chewing | CC | Hospital-based | 2b | AUROC, Sens, Spec, OR | High | High
Chung, 2019 [45] | Taiwan | OCC | 2 SNPs e | Age, betel quid chewing and alcohol consumption | CC | Hospital-based | 1a | AUROC | High | High
Fritsche, 2020a [46] | United Kingdom (UK Biobank) | OCC | 1,119,238 SNPs | EHR-derived phenotypes | CC | General population | 1b | AUROC, R2, Brier score, OR | High | High
Fritsche, 2020b [46] | Finland (FinnGen) | OCC f | 931,954 SNPs | EHR-derived phenotypes | CC | General population | 1b | AUROC, R2, Brier score, OR | High | High
Miao, 2016 [47] | China | OCC | 3 SNPs | Age | CC | Hospital-based | 1a | Balance accuracy | High | Not applicable
Abbreviations: AIC, Akaike Information Criterion; AUROC, area under the receiver operating characteristic curve; CC, case-control; EHR, electronic health record; GRS, genetic risk score; OCC, oral cavity cancer; OR, odds ratio; Sens, sensitivity; SNP, single nucleotide polymorphism; Spec, specificity. a Each prediction model is for either a single or combined outcome. b Classification of prediction model according to the TRIPOD guidelines [30,31]: 1a, development only; 1b, development and validation using resampling; 2b, nonrandom split-sample development and validation. c Included SNPs: rs1800668, rs3746165, rs7310505, rs4964287, rs9605030, rs3788317, rs13054371. d Included SNPs: rs2070833, rs550675, rs139994842, rs2822641. e Included SNPs: rs550675, rs28647489. f Tongue cancer.
Table 3. List of the risk factors in the included models.
Risk Factor, Category | Considered | Included | Risk Factor, Category | Considered | Included
Demographic and lifestyle   Beans and/or soy products32
 Personal characteristics   Tea consumption22
  Age2021  Spicy foods22
  Education level1414  Poultry/domestic meat31
  Sex139  Milk and dairy products31
  Marital status77  Pickled food31
  Ethnicity96  Processed meat11
  BMI75  Tea concentration20
  Area of residence55  Tea types20
  Occupation *33  Tea temperature20
  Lifetime number of sexual partners *22Tobacco/BQ chewing55
  Age of first intercourse *21  Status of tobacco/BQ chewing44
  Parental education level *20  Duration of tobacco/BQ chewing11
  Socioeconomic condition *20  Intensity of tobacco/BQ chewing11
 Alcohol consumption2118  Past user of tobacco/BQ chewing11
  Alcohol consumption status1210Clinical risk factors
  Intensity of alcohol consumption99 Oral health or oral habits66
  Parental alcohol consumption status20  Teeth loss22
  Duration of alcohol consumption11  Recurrent oral ulceration33
 Smoking2117  Regular dental visit22
  Smoking status/history1512  Denture wearing31
  Intensity of tobacco/cigarette smoking98  Frequency of tooth-brushing20
  Duration of tobacco/cigarette smoking55  Mouth rinsing habit22
  Passive smoking21  Oral cancer screening status11
 Family history97  HPV status11
  Family history of head and neck cancer64Genetic factors66
  Family history of any cancer55  Genetic risk score33
 Diet88Biomarkers33
  Vegetables (leafy and/or other)66  Arsenic level11
  Fish65  Selenium level11
  Seafood65  Cerium level11
  Fruits65Other risk factors43
  Eggs53  EHR-derived phenotype22
  Red meat53  Cooking oil fume exposure21
Abbreviations: BMI, body mass index; BQ, betel quid; HPV, human papillomavirus. * Five personal characteristics risk factors with the least occurrence were categorised as ‘Others’ in Figure 3.
Table 4. Summary details of performance measure.
First Author, Year | Development: Discrimination | Calibration | Accuracy | Other Measures | Validation a: Discrimination | Calibration | Accuracy | Other Measures
Models without genetic variants
Antunes, 2013 [34] Pseudo-R2 = 0.186
Bao, 2020a [35] AIC: 542.846
Chen, 2017 [36]AUROC: 0.682 (95% CI: 0.662–0.702)
Chen, 2018a [37]C-index: 0.768 (95% CI: 0.723–0.813)Calibration plot: shows good calibration
Chen, 2018b [37]C-index: 0.700 (95% CI: 0.635–0.765)Calibration plot: shows good calibration
Chen, 2022 [38] Sens: 69.9%
Spec: 91.4%
Cheung, 2021 [39] C-index: 0.84 (95% CI: 0.77–0.90)O/E ratio: 1.08 (95% CI: 0.81–1.44)
He, 2021a [40]AUROC: 0.77 (95% CI: 0.74–0.80)AIC: 1040.50
He, 2021b [40]AUROC: 0.78 (95% CI: 0.75–0.81)AIC: 1033.82
Lee, 2020a [41]AUROC: 0.798 AUROC: 0.752Calibration plot by decile: shows good calibration
Lee, 2020b [41]AUROC: 0.774 AUROC: 0.718Calibration plot by decile: shows good calibration
Lee, 2020c [41]AUROC: 0.701 AUROC: 0.643Calibration plot by decile: shows overestimation in the three highest deciles
Lee, 2020d [41]AUROC: 0.777 AUROC: 0.745Calibration plot by decile: shows good calibration
Rao, 2016a [42]AUROC: 0.870 Sens: 74.6%
Spec: 84.6%
PPV: 76.7%
NPV: 83.0%
AUROC: 0.869 Sens: 74.4%
Spec: 85.1%
PPV: 77.3%
NPV: 83.0%
Rao, 2016b [42]AUROC: 0.866 Sens: 92.8%
Spec: 60.3%
PPV: 60.7%
NPV: 92.7%
AUROC: 0.865 Sens: 96.6%
Spec: 39.3%
PPV: 51.4%
NPV: 94.6%
Tota, 2019a [43] Internal validation:
AUROC: 0.86 (95% CI: 0.84–0.89)
External validation:
AUROC: 0.81 (95% CI: 0.77–0.86)
Internal validation:
Overall O/E ratio: 1.01 (95% CI: 0.70–1.32)
External validation:
O/E ratio: 1.08 (95% CI: 0.77–1.39)
Tota, 2019b [43] Internal validation:
AUROC: 0.95 (95% CI: 0.92–0.97)
External validation:
AUROC: 0.87 (95% CI: 0.84–0.90)
Internal validation:
O/E ratio: 1.01
(95% CI: 0.70–1.32)
External validation:
O/E ratio: 1.08
(95% CI: 0.77–1.39)
Models incorporating genetic variants
Bao, 2020b [35] AIC: 504.162 Adjusted OR for GRS: 0: Reference
1: 1.908 (95% CI: 1.086–3.352)
2: 1.940 (95% CI: 1.055–3.567)
Chung, 2017 [44]AUROC: 0.89 (95% CI: 0.86–0.91) Sens: 86.7%
Spec: 86%
Adjusted OR for GRS:
0: Reference
1: 0.96 (95% CI: 0.60–1.54)
2: 1.29 (95% CI: 0.79–2.10)
3: 1.31 (95% CI: 0.60–2.85)
4: 3.11 (95% CI: 1.21–10.67)
AUROC: 0.88 (95% CI: 0.84–0.91) Sens: 86.3%
Spec: 86.5%
Chung, 2019 [45]AUROC: 0.91 Sens: 85.7%
Spec: 85.7%
Adjusted OR for GRS:
0: Reference
1: 1.68 (95% CI: 1.01–2.81)
2: 6.12 (95% CI: 1.66–22.49)
Fritsche, 2020a [46] OR Top 1% vs. other: 1.63 (95% CI: 0.812–3.26)
R2: 0.00207
AUROC: 0.528 (95% CI: 0.502–0.552)Brier score: 0.0829
Fritsche, 2020b [46] OR Top 1% vs. other: 1.69 (95% CI: 0.61–4.71)
R2: 0.00325
AUROC: 0.538 (95% CI: 0.501–0.575)Brier score: 0.0827
Miao, 2016 [47] Training balance accuracy: 0.8221
Testing balance accuracy: 0.5491
Abbreviations: AIC, Akaike Information Criterion; AUROC, area under the receiver operating characteristic curve; O/E, observed/expected; OR, odds ratio; Sens, sensitivity; Spec, specificity. ᵃ All reported validation measures are internal validation except for Tota et al. [43], which reported internal and external validation.
