Review

Machine and Deep Learning for the Diagnosis, Prognosis, and Treatment of Cervical Cancer: A Scoping Review

by Blanca Vazquez 1,*,†, Mariano Rojas-García 2, Jocelyn Isabel Rodríguez-Esquivel 3, Janeth Marquez-Acosta 4, Carlos E. Aranda-Flores 5, Lucely del Carmen Cetina-Pérez 6, Susana Soto-López 7, Jesús A. Estévez-García 8, Margarita Bahena-Román 3, Vicente Madrid-Marina 3 and Kirvis Torres-Poveda 3,9,†

1 Unidad Académica del Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas del Estado de Yucatán, Universidad Nacional Autónoma de México (UNAM), Mérida 97357, Mexico
2 Department of Animal Sciences, Facultad de Estudios Superiores Cuautitlán, Universidad Nacional Autónoma de México (UNAM), Cuautitlán Izcalli 54714, Mexico
3 Center for Research on Infectious Diseases, Instituto Nacional de Salud Pública (INSP), Cuernavaca 62100, Mexico
4 Colposcopy Department, Luis Castelazo Ayala Hospital, Instituto Mexicano del Seguro Social (IMSS), Mexico City 01090, Mexico
5 Oncology Department, Hospital General de Mexico Eduardo Liceaga, Mexico City 06720, Mexico
6 Department of Clinical Research and Medical Oncology, Instituto Nacional de Cancerología (INCAN), Mexico City 14080, Mexico
7 Femme Vite Center, Cuernavaca 62374, Mexico
8 Center for Population Health Research, Instituto Nacional de Salud Pública (INSP), Cuernavaca 62100, Mexico
9 Secretaría de Ciencia, Humanidades, Tecnología e Innovación (SECIHTI), Mexico City 03940, Mexico
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Diagnostics 2025, 15(12), 1543; https://doi.org/10.3390/diagnostics15121543
Submission received: 7 February 2025 / Revised: 10 June 2025 / Accepted: 12 June 2025 / Published: 17 June 2025
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)

Abstract

Background/Objectives: Cervical cancer (CC) is the fourth most common cancer among women worldwide. This study explored the use of machine learning (ML) and deep learning (DL) in the prediction, diagnosis, and prognosis of CC. Methods: An electronic search was conducted in the PubMed, IEEE, Web of Science, and Scopus databases from January 2015 to April 2025 using the search terms ML, DL, and uterine cervical neoplasms. A total of 153 studies were selected for inclusion in this review, and a comprehensive summary of the available evidence was compiled. Results: We found that 54.9% of the studies addressed the application of ML and DL in CC for diagnostic purposes, followed by prognosis (22.9%) and an incipient focus on CC treatment (22.2%). The five countries where most ML and DL applications have been generated are China, the United States, India, the Republic of Korea, and Japan. Of these studies, 48.4% proposed a DL-based approach, and the most frequent input data used to train the models on CC were images. Conclusions: Although there are results indicating a promising application of these artificial intelligence approaches in oncology clinical practice, further evidence of their validity and reproducibility is required for their use in early detection, prognosis, and therapeutic management of CC.

1. Introduction

Cervical cancer (CC) is the fourth most common cancer among women worldwide, with an age-standardized incidence rate of 14.1 cases per 100,000 women-years, and the third leading cause of mortality due to cancer in women, with a mortality rate of 7.1 deaths per 100,000 women-years [1].
Regions with the highest CC disease burden include sub-Saharan Africa, Latin America, and Asia [2]. CC is primarily caused by a persistent infection of high-risk Human Papilloma Virus (HR-HPV). In 2020, the World Health Assembly established a strategy for the elimination of CC [3], which comprises prevention (90% of girls aged 15 years or younger fully vaccinated against HPV), early detection (70% of women aged 35–45 years screened by molecular methods to detect HR-HPV DNA), and guaranteed treatment of women (90% of women) diagnosed with CC. Many advances have been made to improve the technologies available for the early diagnosis of CC [4], its prognosis [5], and the treatment of precancerous lesions and CC [6]; new technologies have emerged for faster and more accurate diagnosis and improved prognosis prediction.
In this regard, machine learning (ML) and deep learning (DL) are two approaches from artificial intelligence (AI) that have contributed to clinical practice [7,8,9] through the development of tools to support the tasks of diagnosis, prognosis, and treatment of various cancers [10,11,12,13,14,15]. ML is a subfield of AI that focuses on using data and algorithms to give computers the ability to learn without being explicitly programmed [16]. In contrast, DL is a specific subfield of ML that leverages artificial neural networks to mimic the learning process of the human brain [17]. To learn, both ML and DL algorithms require input data, such as images, text, or signals; through an iterative process, they identify patterns in these data and produce the desired output.
ML algorithms can be categorized into supervised and unsupervised learning, depending on whether the data are labeled or unlabeled [18]. In supervised learning, the input data used during training are labeled; this means that the algorithm models relationships and dependencies between the label and the input features. Moreover, the model's output prediction can be compared with the label; if the output is incorrect, the model can be corrected during training [19]. The goal of supervised learning is to learn a mapping f from inputs x ∈ X to outputs y ∈ Y, where the inputs x, called features or predictors, are often fixed-dimensional vectors of numbers, such as a person's age and weight or the pixels in an image [20]. The output y is called a label or response.
Classification and regression are the most common tasks in supervised learning, while clustering and association are tasks in unsupervised learning. Classification is a type of supervised learning that is used to predict categorical values, for instance, given a set of a patient’s clinical variables, to predict whether they have COVID-19 or not (binary classification) [21] or to categorize a cervical cell image into three classes: normal, low-grade intraepithelial lesion, or high-grade intraepithelial lesion (multiclass classification) [22]. In contrast, regression is used to predict continuous values, such as the price of a car given a set of features or the predicted dose in brachytherapy for CC [23]. The most common algorithms for classification and regression are logistic regression (LR), support vector machine (SVM), random forest (RF), decision trees (DT), and naive Bayes (NB) [9].
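The distinction between classification and regression can be illustrated with a minimal scikit-learn sketch; the features, labels, and values below are synthetic and purely illustrative, not drawn from any study in this review:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

# Classification: predict a categorical label from features
# (hypothetical features: age and a binary risk factor)
X = np.array([[25, 0], [30, 1], [45, 1], [52, 0], [60, 1], [33, 0]])
y_class = np.array([0, 0, 1, 1, 1, 0])  # e.g., lesion absent (0) / present (1)
clf = LogisticRegression().fit(X, y_class)
pred_class = clf.predict([[50, 1]])     # a discrete class label

# Regression: predict a continuous value (e.g., a planned dose)
y_cont = np.array([1.2, 1.5, 2.8, 3.1, 3.9, 1.6])
reg = LinearRegression().fit(X, y_cont)
pred_value = reg.predict([[50, 1]])     # a continuous number
```

The same feature matrix feeds both models; only the type of target (categorical versus continuous) changes the task and the appropriate algorithm.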
In contrast, in unsupervised learning, the input data are unlabeled; this means that there is no guide indicating the model's output. That is, the model observes only a set of inputs x without any corresponding outputs y. The idea of unsupervised learning is to discover patterns, relationships, similarities, and differences in the data without any explicit guidance [20]. Clustering is a type of unsupervised learning that is used to find patterns and split data into groups with common features, such as grouping cells based on their characteristics to model the degrees of abnormality in CC [24]. The algorithms most commonly used in this setting are K-means clustering, density-based spatial clustering of applications with noise (DBSCAN), and principal component analysis (PCA), the latter being a dimensionality-reduction method frequently applied before clustering [25].
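As an illustrative sketch of the clustering idea, the following groups synthetic two-dimensional "cell features" (invented for illustration, not data from any reviewed study) without any labels:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two well-separated synthetic groups of "cell features"
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
               rng.normal(5.0, 0.5, (20, 2))])

# K-means recovers the two groups using only the inputs, no labels given
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

No output y is ever supplied; the algorithm assigns each point to one of two clusters purely from similarity in feature space.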
On the other hand, neural networks, which are the core of DL, are layers stacked on top of each other, where each layer is composed of a set of neurons: computing units that attempt to mimic the behavior of a biological neuron [7,17]. The term 'neuron' in this context was proposed in 1943 by McCulloch and Pitts, who mathematically formalized a neuron's behavior and studied its implications for computing and processing information. This mathematical model has inspired the development of hundreds of different neural network models. The most common are convolutional neural networks (CNNs) [26], Transformers [27], and residual networks [28].
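The McCulloch–Pitts formalization reduces a neuron to a weighted sum followed by a hard threshold; a minimal sketch, with weights chosen by hand purely for illustration:

```python
import numpy as np

def neuron(x, w, b):
    """McCulloch–Pitts-style unit: weighted sum plus bias, then a threshold."""
    return 1 if np.dot(w, x) + b > 0 else 0

# These hand-picked weights make the unit compute a logical AND
# of two binary inputs: it fires only when both inputs are 1.
w, b = np.array([1.0, 1.0]), -1.5
outputs = [neuron(np.array(x), w, b) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs == [0, 0, 0, 1]
```

Modern networks replace the hard threshold with differentiable activations and learn the weights from data, but the weighted-sum-and-activate structure is the same.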
In recent years, the amount of research in the clinical area using ML and DL models has increased exponentially because these models can help in clinical decision-making, such as early warning, individualization of treatment, and improvement of progress in clinical trials [8,29,30]. In particular, several works have emerged in the study of cervical neoplasm using ML and DL models due to immense interest in the field [5,31,32]. Therefore, we performed a scoping review [33] to map the body of literature on the use of these models for the diagnosis, prognosis, and treatment of CC in order to provide updated evidence on the application of ML and DL in CC, assess these approaches’ potential scope in oncological clinical practice, and encourage their adoption in clinical decision-making support, especially in contexts where qualified personnel are lacking.

2. Materials and Methods

2.1. Search Strategy

The review's population, concept, context (PCC) question was, "What uses have machine learning and deep learning had, according to the evidence found, in the diagnosis, prognosis, and treatment of cervical cancer?", in which P = cervical cancer, C = proposed use, and C = machine and deep learning. To address this question, we conducted a scoping review following the PRISMA-ScR statement recommendations (Supplementary Table S1) [33]. No review protocol was registered, as this is a scoping review. A search of the MEDLINE database was performed through the PubMed browser, together with Scopus, Web of Science, and IEEE, using a query formed by combining the following terms: "Machine Learning", "Uterine Cervical Neoplasms", "Diagnosis", and "Prognosis". The Boolean operator "AND" was used to link the search terms with the research question. The search strategy applied to locate studies in the main electronic database was as follows: ((machine learning[MeSH Terms]) AND (uterine cervical neoplasm[MeSH Terms])). The scope of the bibliographic search was expanded based on the reference lists of the retrieved articles. Figure 1 summarizes the flow diagram of the search process followed in this review.

2.2. Study Selection

All retrieved articles were assessed to determine their eligibility for inclusion in the review. First, the studies were screened by reading their titles and abstracts. Papers considered relevant after this initial review were selected for further evaluation through full-text reading. Four authors manually screened the original articles, and one of them, a specialist in the area, acted as content reviewer. After evaluation of the full texts, 153 articles were deemed eligible and included in this scoping review.

2.3. Eligibility Criteria

All the primary studies retrieved were reevaluated to assess whether they met the inclusion criteria for the scoping review, as outlined below.

2.4. Study Design

Cross-sectional studies, case–control studies, cohort studies, and clinical trials with original data reporting the use of ML and DL for the prediction, diagnosis, and prognosis of CC were included.

2.5. Language

The search was limited to studies published in English.

2.6. Publication Date

Publications available online from January 2015 to April 2025 were considered. The evidence included in this scoping review was predominantly within the past decade, the period during which machine and deep learning have been most widely applied in CC research.

2.7. Exclusion Criteria

During the full-text assessment of the articles, we evaluated the study design, study model, research settings, paper quality, and relevant outcomes. Review, special report, and editorial papers; papers with different types of cancers as outcomes; papers without machine and deep learning applications; and papers with a low quality score were omitted.

2.8. Data Extraction

Four authors independently gathered all pertinent data using a data extraction template created in Microsoft Excel (Microsoft Co., Redmond, WA, USA). The extraction template included details such as the name of the first author of each study, publication year, reference, study type, study model, study design, aim of the study, medical use, sample group, source population, age group, data source, type of patient, clinical specialist participation, process in which the clinical specialist participated, database, multicenter study, ML and DL algorithm implemented with the best performance, the best performance metrics, data augmentation technique, cross validation technique, external validation, study’s limitations, available code, available data, available app/web application, and criteria to evaluate the study’s quality. Each section was filled out in a single column, with rows containing data from each primary study. Any discrepancies during the data extraction process were identified and addressed through discussion.

2.9. Outcome Measurement

Any use of ML and DL for the diagnosis, prognosis, and treatment of CC was analyzed and treated as an outcome.

2.10. Quality Assessment

The quality of the primary studies included in this scoping review was evaluated using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) scale, designed for assessing the quality of observational studies [34]. To evaluate the quality of a study, an 11-item scale based on the STROBE principles was created through consensus among the 11 authors. Each item was categorized according to the introduction, methods, results, and discussion sections. The items addressed aspects such as study design, recruitment, participant description, and overall quality. The score was represented in arbitrary units (a.u.) ranging from 0 to 23 (from lower to higher quality). The authors assessed the quality independently, and any disagreements were resolved after discussion. The quality analysis used in the scoping review is described in Table 1.

2.11. Synthesis of Results

The research team reviewed and discussed the potential applications of ML or DL for the diagnosis, prognosis, and treatment of cervical neoplasm based on the data included. Descriptive analyses were provided as narrative summaries, given the heterogeneity of the literature. A narrative summary involves presenting findings in a straightforward way [35].

2.12. Statistical Analysis

The data were gathered into a single spreadsheet and imported into Microsoft Excel 2019 (Microsoft Co., Redmond, WA, USA) for validation and coding. Fields that allowed string values were checked for unrealistic entries. The data were subsequently exported to STATA version 16 (StataCorp, College Station, TX, USA) for analysis. Descriptive statistics were used to summarize the data; frequencies and percentages were used to describe nominal data.

3. Results

3.1. Included Studies

Our electronic search retrieved 297 study records. After title and abstract screening, 246 articles were retained for full-text review. Of these, 93 were excluded, resulting in a total of 153 unique articles included in our scoping review. These research articles were published between January 2015 and April 2025.

3.2. Quality Assessment of Included Studies

The mean (±SD) quality of the 153 selected observational studies was 15 (±0.6) a.u. (range 10 to 23), indicating a satisfactory level. Table 2 describes the quality assessment of individual studies using the STROBE scale.

3.3. Clinical Applications of ML and DL in Cervical Cancer

Figure 2 summarizes the clinical applications found in the reviewed literature, grouped as follows: diagnosis (54.9%), prognosis (22.9%), and treatment (22.2%). For diagnostic applications, we found that most studies addressed the development of models to diagnose lesions in colposcopy images (e.g., [75,81,133]); we also grouped the studies that addressed diagnosis into six main targets: (i) states of CC (32.02%) [74,75,81,93], (ii) screening for CC (20.26%) [39,41,43], (iii) recurrence (0.65%) [62], (iv) type prediction of Human Papilloma Virus (HPV) (0.65%) [54], (v) cancer progression (0.65%) [62], and (vi) segmentation of targets and organs at risk (OARs) (0.65%) [132].
For prognosis applications, most studies focused on predicting cancer progression using CT and MRI (e.g., [77,78,156]); in particular, three main targets were identified in prognosis: (i) cancer progression (13.72%) [56,61,84], (ii) survival prediction (5.22%) [55,58,98], and (iii) recurrence (3.26%) [176] (e.g., [77,137,178]). For treatment applications, most studies focused on automatic segmentation of images to predict treatment doses using CT (e.g., [68,131,141]). In this clinical application, four main targets were found: (i) therapeutic dose and planning (9.15%) [118,144,146], (ii) segmentation of targets/OARs (8.49%) [109,130,140], (iii) delineation of the clinical target volume (CTV) (3.26%) [115,141,153], and (iv) toxicity prediction in radiotherapy (0.65%) [101]. Figure 3 shows the distribution of targets for each clinical application based on ML and DL in CC.

3.4. Prediction Tasks in CC

Figure 4 displays the frequency of ML and DL objectives across the selected studies. Classification tasks were the most frequently employed, appearing in 108 studies (70.77%), followed by image segmentation in 34 articles (22.07%); 11 articles employed regression tasks (7.14%). For instance, one study [65] used DL classification to diagnose CC from magnetic resonance imaging (MRI), with results evaluated by experienced radiologists. After evaluation, the authors concluded that the performance of their algorithm was equivalent to that of experienced radiologists. Another study [64] trained DL models for automatic segmentation of OARs from computed tomography (CT) scans. The authors remarked that their algorithm is useful for automatic dose optimization in advanced radiation therapy strategies. Among the regression tasks, one study [98] developed an ML algorithm to predict survival in CC by analyzing patient demographics, vital signs, laboratory test results, tumor characteristics, and treatment types. The authors concluded that their algorithm achieved superior performance compared with a Cox proportional hazards regression model.

3.5. Models Trained in Classification, Segmentation, and Regression Tasks

Figure 5 presents the frequency of models trained by prediction task. Convolutional neural networks (CNNs) were the most frequently employed across the three prediction tasks analyzed. For classification tasks, CNN was the most used, with 40 articles [40,41,54], followed by support vector machine (SVM) [78,104,105], random forest (RF) [94,165,174], ResNet [52,53,72], and k-nearest neighbors (KNN) [81,138]. For instance, Liu et al. [41] described a CNN-based algorithm for cytopathology cell image classification and reported that it was robust to changes in the aspect ratio of cells in cervical cytopathological images. Similarly, we found that CNN-based models were the most commonly used in image segmentation tasks, with 28 identified articles. For this task, Yoganathan et al. [32] presented a CNN-based algorithm for automatic segmentation of targets and OARs in MRI, achieving high performance in automatic contouring, which could be useful in brachytherapy for CC. Finally, CNN-based models were the most common for regression tasks, with six articles. In this case, Yuan et al. [37] proposed a CNN algorithm for predicting doses in intensity-modulated radiation therapy (IMRT) plans. They concluded that there were no significant differences in dose parameters between automatic and real clinical plans.
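The core operation of a CNN layer is a small learned filter slid across the image; the following NumPy sketch shows that operation in isolation (a toy image and a hand-picked filter, not any reviewed study's architecture):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the building block of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # each output value is the filter applied to one image patch
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
edge = np.array([[1.0, -1.0]])                  # horizontal gradient filter
fmap = conv2d(img, edge)                        # 4x3 feature map
```

In a trained CNN, many such filters are learned from data and stacked in layers, turning raw pixels into progressively more abstract features for classification, segmentation, or regression.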

3.6. Temporal Analysis of ML and DL Implemented in CC

Figure 6 presents the distribution of models grouped by technique: ML, DL, and the fusion of ML and DL. Analysis of the 153 reviewed studies showed that the most commonly implemented technique was DL, with 61.2% (93 studies), whereas ML was implemented in 34.2% (53 studies), and the fusion of ML and DL was used in 4.6% (7 studies). In particular, fusion involves training models on different modalities (e.g., images, text, time series) separately; the outputs of these models are then integrated to improve performance and accuracy in the prediction task. For instance, Mathivanan et al. [160] proposed a hybrid methodology for feature extraction from cytology images based on DL, whose features were transferred to ML models for CC classification to obtain more accurate and timely interventions.
Figure 6B describes the number of studies by year of publication. In particular, we observed that 2019 to 2024 had the highest number of publications (138 studies) compared to 2015 to 2018 (13 studies). We also identified that 2023 had the largest number of studies (with a total of 29) compared to the other years. In general, it is observed that the number of articles published using DL increases every year. Moreover, we noticed an increase in studies that merge DL and ML techniques in CC prediction.

3.7. Analysis of ML-Based Models Implemented in CC

A comparison of studies addressing CC prediction with ML techniques is given in Table 3. We found that in most studies, accuracy was the metric most used to measure model performance; we noted accuracies over 90% in most classification tasks [148,170]. In contrast, for image segmentation, the Dice similarity coefficient (DSC) was used; it measures the similarity between two sets or samples and is commonly used in image analysis, particularly for evaluating the performance of image segmentation models.
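The DSC can be stated concretely as 2|A∩B| / (|A| + |B|) for two binary masks; a small sketch on toy masks (invented values for illustration):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient for two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy "predicted" and "ground-truth" segmentation masks
a = np.array([[1, 1, 0],
              [0, 1, 0]])
b = np.array([[1, 0, 0],
              [0, 1, 1]])
# intersection = 2 pixels, |a| = 3, |b| = 3  →  DSC = 4/6 ≈ 0.667
score = dice(a, b)
```

A DSC of 1 indicates perfect overlap between predicted and reference contours, and 0 indicates no overlap, which is why it dominates evaluation of segmentation models.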
We also analyzed the validation strategies reported in the literature, namely, cross-validation (CV) with (i) 3 folds, (ii) 5 folds, (iii) 10 folds, and (iv) leave-one-out cross-validation (LOOCV). Of the 53 studies that used ML models, 19 used 10-fold CV (35.84%), 12 used 5-fold CV (22.64%), and 3-fold CV and LOOCV were each used in 2 studies (3.77% each). We also noted that 33.96% of the studies did not report any validation strategy. Finally, of all the articles with ML (53), only 5 studies (9.43%) performed external validation (e.g., [113,166,172]).
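The 10-fold strategy most often reported above can be sketched in a few lines with scikit-learn; the data below are synthetic stand-ins for clinical features, not any study's cohort:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic tabular data standing in for clinical features
X, y = make_classification(n_samples=120, n_features=8, random_state=0)

# 10-fold CV: the model is trained on 9/10 of the data and scored
# on the held-out tenth, rotating through all ten splits
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10)
mean_acc, sd_acc = scores.mean(), scores.std()
```

Reporting the mean and spread across folds, rather than a single train/test split, is what makes the performance estimate more stable; external validation on a separate cohort remains a distinct, stronger test.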

3.8. Analysis of DL-Based Models Implemented in CC

Table 4 presents a summary of the studies that used DL models for CC prediction. Regarding performance, we observed a variety of metrics: (i) accuracy, sensitivity, F1 score, precision, and area under the ROC curve (AUC) for classification tasks; (ii) DSC, Jaccard similarity, and accuracy for image segmentation; and (iii) mean absolute error (MAE), dose volume, and DSC for regression tasks. The most common validation strategy was 5-fold CV, used in 19 studies (20.43%), followed by 10-fold CV in 6 studies (6.45%). We also observed that 60 studies (64.51%) did not report any validation strategy. Moreover, 13 studies (13.97%) performed external validation to evaluate the performance of their models (e.g., [116,118,153]).

3.9. Analysis of Fusion of ML and DL Models Implemented in CC

Table 5 describes a comparison of studies that used both ML and DL for CC. For classification tasks, we found the integration of ResNet (DL) with logistic regression (ML) [160], CNN (DL) with SVM (ML) [49], and recurrent neural networks (RNNs, DL) with SVM [169], whereas ResNet with SVM [184] and U-Net with SVM [48] were identified for image segmentation. In all the studies in this group, accuracy was used to measure algorithm performance, and the validation strategy most used was 5-fold CV. We did not find studies that evaluated their models with external data.

3.10. Evaluation Metrics for ML and DL in CC

Figure 7 presents the metrics used to evaluate the models of the selected studies. Accuracy was the most used metric, in 77 studies (50.32%), followed by area under the ROC curve (AUC) in 25 studies (16.33%) and the Dice coefficient in 23 studies (15.03%). The Dice metric was used in image segmentation tasks, while accuracy was used in classification tasks. Other, less common metrics were precision (2.6%), mean absolute error (MAE, 1.96%), and the C-index (0.65%). In general, each study used a variety of metrics to measure model performance in classification, image segmentation, and regression tasks using ML and DL.
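The three metric families above (classification accuracy, probability ranking, and regression error) can be computed with scikit-learn on toy values (all numbers below are invented for illustration):

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, mean_absolute_error

y_true = np.array([0, 1, 1, 0, 1])          # toy ground-truth labels
y_pred = np.array([0, 1, 0, 0, 1])          # hard class predictions
y_prob = np.array([0.2, 0.9, 0.4, 0.3, 0.8])  # predicted probabilities

acc = accuracy_score(y_true, y_pred)        # fraction of correct labels: 4/5
auc = roc_auc_score(y_true, y_prob)         # ranking quality of probabilities
mae = mean_absolute_error([2.0, 3.5], [2.5, 3.0])  # mean |error|, e.g. for dose
```

Note that accuracy judges hard labels, AUC judges how well the predicted probabilities rank positives above negatives, and MAE applies only to continuous outputs such as predicted doses.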

3.11. Explainability

Figure 8 shows the types of explainability tools employed in the reviewed studies. We found that visualization of image segmentation was the most commonly used way to explain model outputs (21.2%), followed by Gini coefficients (6.0%), survival analysis (4.6%), Gradient-weighted Class Activation Mapping (Grad-CAM, 3.3%), and SHAP values (2.6%). For instance, Ming et al. [139] visualized CC detection by comparing prediction results against the ground truth under different modal images. In contrast, Mehmood et al. [45] described the most important features associated with CC detection using Gini coefficients. Similarly, Matsuo et al. [98] listed the most important covariates based on the statistical significance associated with CC prediction, whereas Kurita et al. [122] presented a DL-based algorithm for classifying cervical cytological screening; a Grad-CAM approach allowed visualization of the image components that contributed most to the classification. He et al. [164] utilized SHAP values for the explainability analysis of their predictive model, ranking variables in the prediction of cervical precancerous lesions. Interestingly, more than half of the studies (62.3%) did not use any explainability approach, meaning these works presented only probabilities without any tool to open the "black box" (e.g., [38,46]).
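Gini-based feature rankings of the kind reported above are typically read from a random forest's impurity-based importances; a synthetic sketch (data and the relationship between feature 0 and the label are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # four synthetic "clinical" features
y = (X[:, 0] > 0).astype(int)        # only feature 0 actually drives the label

rf = RandomForestClassifier(random_state=0).fit(X, y)
# Mean decrease in Gini impurity per feature; sums to 1 across features
importances = rf.feature_importances_
ranking = np.argsort(importances)[::-1]  # feature 0 should rank first
```

Such rankings indicate which inputs the model relied on most, offering a first step toward opening the "black box", though they do not explain individual predictions the way SHAP or Grad-CAM attributions do.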

3.12. Databases

Figure 9 highlights the databases used in the development of ML and DL models in CC. Cytology images were the most frequently employed, appearing in 33 studies [39,41], followed by CT images in 30 studies [153,156], MRI images [32,55] and clinical history [43,45] in 23 studies each, and colposcopy images [44,162] in 17 studies.
In particular, the cytology datasets consisted of images used to examine abnormal cells that could indicate pre-cancerous changes or cancer [50,63,143]. The CT datasets included images used to stratify advanced disease by evaluating lymph nodes [68,156]. The MRI datasets comprised images used to evaluate tumor size, local invasion, and lymph node involvement [172,182,183]. Clinical history data can include demographic details, lifestyle, historical medical records, risk factors, and family history of cancer [43,100,164]. Colposcopy images were used to predict and classify cervical lesions [53,70,162]. DNA/RNA datasets included genome sequences, genotyping of cytokines, and T-cell receptor sequencing [54,56,62]. Dose volume data refer to the clinical dose administered to patients [95,124,144]. In histopathology, the datasets consisted of whole-slide biopsy images used to predict lymph node metastasis [118,135].
The use of spectral and genomic data remains less explored in CC predictive modeling; to date, spectral data have primarily been employed for diagnosis, while genomic data have more frequently been used for prognosis. Although research predominantly focused on diagnostic applications, images remained the most frequently used input data across all three medical applications. Specifically, 68.24% of diagnostic studies (58/85) used images, 45.71% of prognostic models (16/35) incorporated image-based data, and 70.58% of treatment models (24/34) utilized image inputs.

3.13. Limitations

Figure 10 summarizes the main limitations found in the reviewed literature. In particular, the limitation most commonly found was the limited number of patients/samples (48.6%) [38,182], followed by a lack of diversity in the dataset (11.4%) [144,156], lack of external validation (7.6%) [132,154], and data from a single data center (6.7%) [164,184]. The least common limitations were image noise [52,142] and uncertainty in contour delineation, each identified in 1% of all studies [59,87].

3.14. Reproducibility

Of the 153 studies reviewed, only 6 (3.92%) made their programming code available, compared to 145 studies (94.77%) whose code is not available. For instance, Wentzensen et al. [66], Jeong et al. [150], and Cheng et al. [51] used a free platform to store and share the code of their DL-based models. In particular, these authors shared the steps to replicate their programming environment, the list of prerequisites and requirements (hardware and software), the implemented model code, and the steps for training, evaluation, and inference. The code made available in these studies promotes reproducibility and can accelerate the research of other scientists.
We also analyzed the distribution of available data and found that only 20 studies (13%) reported that the data used during training and testing are freely available. In contrast, 35 studies (22.9%) stated that their data are available upon request, and 98 studies (64.1%) indicated that their data are not available. Most of the studies with freely available data used public databases, such as SIPaKMeD [41], Herlev [22,52,82], and the Cytology Image Challenge [50], which are Pap smear datasets. Figure 11 shows the distribution of studies with available code and data.

3.15. Distribution of Publications by Country

In Figure 12, the distribution of publications by country, research origin, and medical purpose is presented. China had the highest number of studies, totaling 85 (40 focused on diagnosis, 21 on prognosis, and 24 on treatment), followed by the United States with 30 (16 for diagnosis, 4 for prognosis, and 10 for treatment); India conducted 12 studies (10 on diagnosis and 2 on prognosis), while Republic of Korea carried out 9 studies (6 on diagnosis, 2 on prognosis, and 1 on treatment). We observed that countries such as Brazil, Germany, Portugal, Australia, Canada, Nigeria, Norway, Turkey, Saudi Arabia, Ethiopia, Rwanda, Pakistan, Bangladesh, Singapore, and New Zealand presented proposals that addressed only the diagnosis of CC. In contrast, proposals from Czech Republic, Finland, and Iraq addressed only the prognosis of CC. Finally, Sweden, Iran, and Qatar presented studies only for treatments.
Regarding nationality and predictive modeling objectives, only studies involving China, the United States, and France developed models for all three medical applications (diagnosis, prognosis, and treatment). Populations from Japan, Republic of Korea, India, and Thailand had models for diagnostic and prognostic purposes [36,42,56,178], while studies on the Taiwanese population focused on prognosis [88] and treatment [85]. Other populations were exclusively studied for the diagnosis of CC [82,99,148].

3.16. Study Design Characteristics

Regarding the characteristics of the reviewed studies, all of them were observational in nature. As summarized in Figure 13, the most common study design was cross-sectional (76.47%), followed by cohort studies (11.76%), retrospective cohort studies (11.11%), and a single case–control study (0.65%).
The majority of cross-sectional studies were used to develop diagnostic models for CC (66.67%), followed by models related to treatment (22.22%) and prognostic models (11.11%). For cohort and retrospective cohort studies, the most frequent medical application was prognostic (models predicting cancer progression to lymph node metastasis [77,100,136], recurrence [165,169,180], survival [55,58,98], or treatment response [113,175,184]), representing 50% and 88.24%, respectively. This is related to the longitudinal nature of medical care and disease progression. These were followed by treatment-related models (dosage, therapeutic planning, toxicity), with 16.67% for cohort studies and 11.76% for retrospective cohort studies. Finally, 33% of cohort studies were focused on diagnosis, and the only case–control study was developed to predict diagnosis.

3.17. Study Population

A total of 14.38% of the reviewed studies did not report or provide information on the nationality and ethnicity of the study population, while 56.86% did not report the age of recruited patients [58,60,93,94]. We identified that the study populations in the reviewed works originated from 24 different nationalities. Notably, the Chinese population was the most studied, representing 43.14% of the analyses, followed by populations from the United States, Japan, the Republic of Korea, and Denmark (Figure 14). The frequency with which some populations were studied is related to the availability of open access datasets. Some studies even used multiple open access databases containing data from populations of different nationalities (e.g., studies [52,82,151]). A relevant example is the Herlev database, which consists of cytological images from Danish patients collected at Herlev University Hospital; its widespread reuse has led to limited data diversity across studies.

3.18. Medical Specialist Involvement in Predictive Model Development

To assess the level of medical expert involvement, we explored their participation in study design, implementation, and evaluation. In 40.52% of the studies, there was no explicit reference to the participation of medical specialists. Some studies relied on secondary sources, classifying data based on medical records and notes (e.g., studies [48,91]). However, other studies using secondary data explicitly mentioned specialist involvement for validation or relabeling of patient-sample classifications. Examples include the studies by Sornapudi et al. [82] and Shanthi [22], which reclassified cytological images with the assistance of a cytotechnologist and a pathologist, respectively. Figure 15 highlights the most frequently referenced profiles across the reviewed studies. In particular, this figure summarizes the five most frequently involved medical specialties or specialists, categorizing their participation in different study phases, such as study design (patient recruitment and follow-up), label assignment (cancer staging, delineation, segmentation of key regions), and evaluation (prediction comparison). Some studies involved multiple specialists, such as two pathologists with different levels of experience validating classifications in the same study, or multiple medical profiles [65,83,86], such as an oncologist and a radiologist, each performing different tasks within the same study [74,82,84].

4. Discussion

This scoping review identified that slightly more than half of the studies addressed the application of ML and DL in CC for diagnostic purposes, followed by prognosis and an incipient focus on CC treatment (Figure 16). Within the levels of cancer prevention, it is striking that AI in CC has been aimed at secondary prevention (screening and timely diagnosis) and tertiary prevention (treatment); no study was found with an application approach at the level of primary prevention (HPV vaccines). Less than 20% of the studies corresponded to cohort or case–control designs, and another notable finding is that no studies were carried out in Latin America or with a Latino population. To date, few reviews have analyzed the application of ML and DL in CC.
William et al. presented a synthesis of 30 studies up to 2018 that focused on automated detection and classification of CC from Pap smear images [185]. High accuracy (over 90%) was reported for existing models, especially for classification as normal or abnormal. However, low accuracy was reported for some cell classes that cannot be easily identified as normal or abnormal.
Because manual analysis of Pap smear images and microscopic biopsies by a trained cytologist is time-consuming and tedious, with cytotechnologists typically examining large volumes of data daily, applications of ML and DL for CC diagnosis have focused on optimizing the accuracy of CC detection and classification [185]. Similarly, DL models have been developed to evaluate dual staining of p16 and Ki-67 (markers closely related to cervical carcinogenesis) rather than to identify cytological images, considering that the number of dual-stained cells correlates with histopathological severity; these models have shown equal sensitivity and higher specificity compared to cytology and manual evaluation of dual-stained slides, as well as a reduction in referrals to colposcopy [66].
Another review published in 2022 focused on presenting the various ML models that had been used for CC prediction until November 2020. In this review, RF, DT, adaptive boosting, and gradient boosting models were reported to have 100% classification scores for CC prediction and 99% accuracy with SVM [186]. Another use of DL has been the segmentation of CC CT images. Radiation therapy is an effective way to improve the survival rate of patients with CC [67,187], especially for patients with locally advanced CC and those whose physical condition is not suitable for surgery. A systematic review and meta-analysis published in 2022 reported good accuracy in the automatic segmentation of CT images in CC with lower time consumption [188].
Manual segmentation of the CTV by a physician is still the standard, but it is a time-consuming and intensive task, taking an experienced physician at least 30 min [188]. Automatic segmentation has shown great potential in reducing physician burden, decreasing patient waiting time, and improving cancer treatment. However, for its use in future radiotherapy applications, high-quality public databases and large-scale research verification are required [188].
On the other hand, another review, published in 2023, synthesized 13 studies published up to October 2022 that evaluated the use of ML to predict survival in patients with CC [189]. The reported area under the curve ranged from 0.40 to 0.99 for overall survival, 0.56 to 0.88 for disease-free survival, and 0.67 to 0.81 for progression-free survival. In addition, the combination of heterogeneous multidimensional data with ML techniques demonstrated potential in predicting CC survival. However, despite the benefits of ML, interpretability, explainability, and imbalanced datasets remain among the biggest challenges. Therefore, further studies are required before ML models can become a standard for survival prediction.
To reduce the risk of mortality from CC, faster diagnosis and accurate prognosis prediction are required. The incorporation of ML and DL models into the health care system has shown great potential as a support tool in oncology clinical practice [7]. In CC specifically, these AI tools could help in diagnosis and prognosis prediction, as well as shorten the time to treatment initiation. In Mexico, for example, AI models could be applied in colposcopy clinics, where colposcopic evaluation and biopsy are performed to diagnose precursor lesions; these clinics link abnormal screening results to the referral of patients to oncology clinics for treatment. The main challenges, limitations, and future directions identified in the reviewed literature are described below from two perspectives: computational (ML and DL) and clinical.

4.1. Computational Challenges of Using ML and DL in CC

In recent years, ML and DL models have made significant progress in medicine and health care, primarily in the development of tools that can assist in clinical decision-making, such as disease diagnostics, targeted drug therapy, and improvement of progress in clinical trials [190,191]. This progress has been brought about by three important milestones: (i) advances in highly parallelizable graphics processing units (GPUs) that enable faster training and increased efficiency of DL models, (ii) the development of programming frameworks (e.g., Keras, TensorFlow, PyTorch) that abstract complex details of neural network implementation by providing libraries for defining network types (e.g., CNN, RNN) and common model architectures, and (iii) advances in DL architectures capable of analyzing multimodal data (e.g., images, text, signals, time series) [192,193,194]. Although these milestones mark significant progress, several challenges remain to be addressed to fully realize the potential of ML and DL in the diagnosis, prognosis, and treatment of CC. Table 6 summarizes these limitations, which are described below.

4.1.1. Data Availability

ML approaches, especially DL, typically require large datasets for training in order to learn patterns and achieve high performance [195,196]. A recent review describes DL techniques used in the field of Pap smear whole slide imaging (WSI) classification analysis [31]. Most studies use supervised learning to extract features and train their models to identify normal, benign, and malignant cells from medical images. However, the main challenge is that this requires a large amount of labeled data, which is not readily available. Unfortunately, many applications in CC have few or limited data to train DL models. As observed in the literature review, the limited number of patients and samples was the main limitation in almost 50% of all studies reviewed. Some strategies to address this challenge include the following: (i) synthetic data generation based on DL models, such as variational autoencoders [197] and generative adversarial networks [159], (ii) oversampling of the minority class based on the Synthetic Minority Oversampling Technique (SMOTE) [94,167], and (iii) image transformations based on geometric, intensity, and spectral changes.
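To illustrate strategy (ii), SMOTE creates synthetic minority-class samples by interpolating between a minority sample and one of its nearest minority-class neighbors. The sketch below is a minimal pure-Python illustration with a toy two-feature dataset (function names and data are ours, for illustration only; studies typically use library implementations such as imbalanced-learn):

```python
import math
import random

def smote_sample(minority, k=2, seed=0):
    """Generate one synthetic sample: pick a minority point, find one of
    its k nearest minority neighbors, and interpolate between them."""
    rng = random.Random(seed)
    base = rng.choice(minority)
    others = sorted((x for x in minority if x is not base),
                    key=lambda x: math.dist(base, x))
    neighbor = rng.choice(others[:k])   # one of the k nearest neighbors
    gap = rng.random()                  # interpolation factor in [0, 1)
    return [b + gap * (n - b) for b, n in zip(base, neighbor)]

# toy minority class with two features
minority = [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1]]
synthetic = smote_sample(minority)
# the synthetic point lies on a segment between two real minority samples
assert 0.8 <= synthetic[0] <= 1.2 and 0.9 <= synthetic[1] <= 1.1
```

Because the new point is a convex combination of two real samples, it stays inside the minority-class region rather than duplicating an existing sample, which is what distinguishes SMOTE from naive oversampling.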

4.1.2. Data Leakage

Data leakage is a common problem, especially in applications based on ML and DL. It is caused by (i) inappropriate feature selection, where the selected features are highly correlated with the target, (ii) errors during data preprocessing caused by incorrect data splitting, and (iii) imputing missing data using statistics computed over the entire dataset [198,199]. One of the main consequences of data leakage is inflated performance metrics, such as artificially high accuracy and precision [200]. Strategies to avoid data leakage in ML and DL studies include the following [201]: fitting preprocessing steps (e.g., scaling, imputation of missing data) on the training set only and then applying them to the test set, properly splitting the training and test sets, ensuring that data from the training set do not appear in the validation or test set, and using cross-validation.
Cross-validation is a technique for evaluating ML and DL models that helps assess performance and detect overfitting. Overfitting occurs when a model fits the training data too closely, performing extremely well on the data used for training but poorly on new, unseen data [202]. Splitting data into training, validation, and test sets helps to prevent overfitting, tune hyperparameters, select the best model, and improve generalization. Future research should adopt best practices for data splitting to avoid over-optimistic results, particularly in the clinical area.
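The leakage-safe ordering described above (split first, then fit preprocessing on the training set only) can be sketched in a few lines of pure Python; the toy one-feature data and function names are ours, for illustration:

```python
import random
import statistics

def train_test_split(data, test_frac=0.25, seed=0):
    """Shuffle and split BEFORE any preprocessing is fitted."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    n_test = int(len(data) * test_frac)
    test = [data[i] for i in idx[:n_test]]
    train = [data[i] for i in idx[n_test:]]
    return train, test

data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0]]
train, test = train_test_split(data)

# Fit the scaler's statistics on the TRAINING set only ...
mean = statistics.mean(x[0] for x in train)
std = statistics.stdev(x[0] for x in train)

# ... then apply the same statistics to both sets. Computing mean/std
# on the full dataset before splitting would leak test information.
train_scaled = [(x[0] - mean) / std for x in train]
test_scaled = [(x[0] - mean) / std for x in test]
assert abs(statistics.mean(train_scaled)) < 1e-9
```

The same "fit on train, apply to test" discipline extends to imputation, feature selection, and any other learned preprocessing step.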

4.1.3. Limited External Validation

External validation involves assessing whether a developed model can be adopted in clinical practice [203]. According to Santos et al. [204], models should be evaluated on datasets separate and independent from the data on which they were trained, to increase confidence in their predictions and assess their clinical utility. In the literature review, we found that only 20 studies (13.07%) externally validated their models, meaning that the remaining models may not generalize or perform well in populations other than those used in training. The lack of external validation is thus a major limitation of the reviewed studies; future research should include external validation to ensure the clinical relevance and generalizability of the models.

4.1.4. Limited Evaluation of Model Performance

Performance metrics are essential for evaluating the success or failure of models developed using ML and DL [205]. They help assess a model's predictive power as well as the reliability and rigor of a study [206]. In the reviewed studies, the most commonly reported metric was accuracy (77 studies); however, in the clinical area, the confusion matrix is crucial for evaluating classification models because it reveals the balance between false positives and false negatives [207]. Future research should therefore include confusion matrices for both validation and test sets and, where possible, for external validation sets. These matrices allow a detailed analysis of model performance, especially with respect to false positives and false negatives, which are critical in the clinical setting.
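The confusion matrix and the clinically relevant rates derived from it can be computed in a few lines; the toy labels below are illustrative only (1 marks the disease class):

```python
def confusion_matrix(y_true, y_pred):
    """2x2 counts for a binary classifier (1 = disease present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
tp, tn, fp, fn = confusion_matrix(y_true, y_pred)

accuracy = (tp + tn) / len(y_true)   # 6/8 = 0.75
sensitivity = tp / (tp + fn)         # recall for the disease class
specificity = tn / (tn + fp)         # how well negatives are spared
```

Two models with identical accuracy can have very different false-negative counts, which is exactly the distinction that matters when a missed positive is a missed cancer.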
The learning curve is another common tool for evaluating the performance of a machine learning algorithm. It assesses the model's convergence and diagnoses overfitting or underfitting during training [208]. As with the confusion matrix, learning curves should be included in future studies to assess model convergence, identify overfitting or underfitting, provide transparency, and allow readers to assess the generalization of the models.

4.1.5. Complex Data

One of the most common challenges in developing ML and DL models in CC is the complexity and high dimensionality of the data. As observed in Figure 9, the data can be represented as images, time series, spectral data, clinical history, laboratory tests, treatment plans, and so on. Each of these data types contains valuable information; however, each involves specific preprocessing tasks. For instance, image processing can include resizing, normalization, noise reduction, and geometric transformations (for data augmentation) with the objective of improving image quality [49,52,59,60]. In contrast, for clinical history, processing can involve feature selection, data standardization, handling of missing data, and sometimes natural language processing of free-text notes [46,94,108]. Data in clinical settings are characterized by their high dimensionality and complexity; future research should adopt best practices to address this issue by applying type-specific preprocessing when developing ML and DL-based solutions.
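As a small illustration of type-specific preprocessing, intensity normalization rescales pixel values to the [0, 1] range so that images acquired with different dynamic ranges become comparable. This is a minimal sketch with a toy 2x3 "image" (real pipelines also resize, denoise, and augment, typically with imaging libraries):

```python
def min_max_normalize(image):
    """Rescale a 2D grayscale image to the [0, 1] range."""
    pixels = [p for row in image for p in row]
    lo, hi = min(pixels), max(pixels)
    if hi == lo:  # constant image: avoid division by zero
        return [[0.0 for _ in row] for row in image]
    return [[(p - lo) / (hi - lo) for p in row] for row in image]

# toy 2x3 "image" with 8-bit intensities
image = [[0, 64, 128], [128, 192, 255]]
normalized = min_max_normalize(image)
assert normalized[0][0] == 0.0 and normalized[1][2] == 1.0
```

As with the scaling example in the data-leakage discussion, when normalization statistics are learned from data rather than fixed (as min/max are here), they must be computed on the training set only.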

4.1.6. Privacy Issues

Clinical data contain highly sensitive and confidential information that identifies the patient and their medical condition or treatment [209,210]. However, several studies have pointed out that DL-based models can be subjected to reverse engineering techniques to extract sensitive information used during training [211,212]. The development of technologies that integrate cybersecurity tools is necessary to assess and prevent these risks, ensuring patient privacy.

4.1.7. Lack of Explainability

In recent years, the performance of ML- and DL-based models has improved dramatically; however, they still often suffer from poor explainability, which limits their applicability in critical areas [213]. Explainability refers to the degree to which a model's decisions can be interpreted by a human [213]. One of the main issues of DL-based models is that they are "black boxes", meaning they are non-intuitive and difficult for people to understand [214]. This creates a barrier to the application of these models in clinical practice due to lack of interpretability, trust, and transparency [214,215]. In this review, we found that 62.3% of the studies did not use any explainability approach to clearly explain the model's decision-making (Figure 8), which could limit their use in clinical areas. Future work should consider explainability approaches to increase the confidence in and applicability of ML and DL-based models in real-world scenarios.
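One simple, model-agnostic explainability technique (our choice for illustration; the reviewed studies use various approaches) is permutation feature importance: shuffle one feature's values and measure how much the model's accuracy drops. A pure-Python sketch with a hypothetical rule-based "model":

```python
import random

def model(sample):
    """Hypothetical classifier: predicts 1 when feature 0 exceeds 0.5.
    Feature 1 is ignored, so permuting it should not hurt accuracy."""
    return 1 if sample[0] > 0.5 else 0

def accuracy(X, y):
    return sum(1 for xi, yi in zip(X, y) if model(xi) == yi) / len(y)

def permutation_importance(X, y, feature, seed=0):
    """Accuracy drop after shuffling one feature's column."""
    rng = random.Random(seed)
    column = [xi[feature] for xi in X]
    rng.shuffle(column)
    X_perm = [list(xi) for xi in X]
    for xi, v in zip(X_perm, column):
        xi[feature] = v
    return accuracy(X, y) - accuracy(X_perm, y)

X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3]]
y = [1, 1, 0, 0]  # labels follow feature 0 exactly

# permuting the ignored feature leaves accuracy unchanged
assert permutation_importance(X, y, feature=1) == 0.0
```

A large accuracy drop flags a feature the model actually relies on, giving clinicians a first, model-independent check on whether predictions rest on clinically plausible variables.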

4.1.8. Lack of Reproducibility

According to Beam et al. [216], reproducibility is a prerequisite for the creation of new knowledge and scientific progress. A study is considered reproducible when an independent team, provided with the original data and code, is able to achieve the same findings as those reported in the initial research [216]. In the reviewed literature, we found that few studies share their code and data (Figure 11), which represents a challenge to reproducing these studies. Mislan et al. [217] mentioned that making code and data available allows analyses to be more easily reproduced and can accelerate the research of other scientists. It is important that future research promotes reproducibility and reuse of codes, which could improve the quality and accelerate the pace of CC research.

4.2. Clinical Implementation Challenges of Using ML and DL in CC

From the clinical perspective of the specialists who are authors of this review, the ML and DL application closest to clinical use in CC is diagnosis, as a support tool for secondary prevention in the detection of premalignant lesions. However, studies are required to demonstrate reproducibility and internal and external validity before these tools can be applied long-term in therapeutic management and in the monitoring and prediction of patients' prognoses. In summary, the use of ML and DL in oncology care is a turning point in advanced medical care, and their application in CC will have a deep impact on the lives of CC patients once their validity and reproducibility make them available for clinical use and data can be translated into therapy. This will especially be the case in remote, underserved environments facing a shortage of personnel: cytologists, pathologists, and colposcopists. Table 7 describes the limitations for clinical implementation.

4.2.1. Representativeness of Clinical Stages of CC in the Training of DL-Based Models

Molecular HPV testing is the recommended primary screening method for CC [218], and in some countries, cytology combined with screening for high-risk HPV subtypes at five-year intervals is recommended to improve the positive predictive value of screening; nevertheless, excessive referrals for colposcopy persist due to the high prevalence of transient HPV infections [219]. In resource-limited settings, low-cost alternatives such as visual inspection with acetic acid (VIA) are indicated; however, VIA has low accuracy and reproducibility, so DL-based automated visual assessment of cervical images could improve its accuracy and reproducibility as an assistive technology. For this technology to be widely used as a clinical test, some clinical and technical considerations must be met, including training DL-based models with images representative of each of the four distinct biological stages of CC: normal cervix, high-risk HPV infection, precancer, and invasive CC [220]. In this regard, Desai et al. described the essential clinical and technical considerations involved in building a DL-based automated visual evaluation (AVE) tool validated for broad use as a clinical test [220]. In particular, their work emphasizes the number of representative images with ground-truth labels needed to build an accurate yet generalizable DL-based automated visual assessment algorithm for cervical images.

4.2.2. Privacy Concerns and Data Security in Health Care

As described above, ML and DL are AI applications that can help identify patterns and trends in health data to improve the diagnosis and treatment of CC. However, since a large amount of personal data is used to train AI models, there is a risk to patient privacy and security, as sensitive medical data are collected and stored, increasing the risk of cyberattacks and information breaches. Therefore, the guidelines for the responsible use of AI in public health developed by the World Health Organization (WHO) and FDA should be considered [221]. These guidelines include the need for a careful assessment of the risks and benefits of AI in health care, transparency in the development and use of AI, and the need to ensure equity and nondiscrimination in health care.

4.2.3. Integration of AI with Clinical Workflows for Real-Time Decision-Making

According to the WHO, AI-assisted cervical cancer screening improves techniques involving the visual evaluation of digital images [222]. This is particularly relevant in low-resource areas with a shortage of specialized personnel. However, the challenge lies in access to computing resources and services for implementing ML and DL models on mobile devices or laptops in remote areas and in the willingness of health care professionals to adopt AI and make real-time clinical decisions when analyzing a specific case [223].

4.2.4. Ethical and Regulatory Considerations

Among the key ethical considerations in the clinical implementation of digital technologies for CC screening using ML and DL are safety monitoring, data privacy, traceability, accountability, and patient protection. These considerations are essential to uphold the principles of beneficence, nonmaleficence, autonomy, and justice [224]. Respecting patient autonomy supports the safeguarding of privacy, the maintenance of confidentiality, and the assurance of informed and valid consent to protect individuals’ data. AI technologies should also be as explainable as possible and tailored to the level of understanding of their intended users. Moreover, these systems must undergo continuous, systematic, and transparent evaluation to ensure their effectiveness and appropriateness in the specific contexts in which they are deployed. From a regulatory perspective, ML and DL models often face challenges, such as limited generalizability beyond their training data and vulnerability to bias. Therefore, health care institutions must develop comprehensive strategies for AI implementation that address the management of technological infrastructure, cost considerations, and the integration of AI systems into existing clinical workflows [224].

4.2.5. Public Perspectives on Using AI in Diagnostic Decisions

Considering that the primary application of ML and DL in CC is in screening, and that current AI models achieve an estimated classification accuracy of 85% for cervical lesions, several critical needs must be addressed. These include ensuring algorithm accountability, improving model performance to increase specificity, and enhancing the accurate classification of premalignant cervical lesions to reduce the risk of misdiagnosis. Furthermore, robust validation of AI systems is essential to demonstrate their effectiveness at the population level [225].

4.3. Limitations

Our scoping review has some limitations. It is a qualitative review of the literature reported up to April 2025. Furthermore, this study focused only on ML and DL approaches, and we searched only four selected databases (IEEE, Web of Science, Scopus, and PubMed).

5. Conclusions

This scoping review gathered relevant scientific evidence on ML and DL applications in CC. AI in the prevention, diagnosis, and treatment of CC would support timely medical care in hard-to-reach areas with less human capital and would help doctors and the health system reduce administrative burden, care time, professional burnout, and human error. It would also improve histological correlation and the precision of diagnosis and therapeutic management. From the 153 reviewed studies, we presented the main challenges and future work from two perspectives: computational and clinical. From the computational view, we identified eight challenges: data availability, data leakage, limited external validation, limited evaluation of model performance, complex data, privacy issues, lack of explainability, and lack of reproducibility. From the clinical view, we found five challenges: representativeness of clinical stages of CC, privacy concerns, integration of AI with clinical workflows, ethical considerations, and public perspectives.
Although ML and DL’s potential use in the diagnosis, prognosis, and treatment of CC has been reported with a significant future impact on oncology clinical practice that allows data to be translated into therapy, more evidence of validity and reproducibility is required for their use in the early detection, prognosis, and therapeutic management of CC. It is important to emphasize that the implementation of AI in public health must be careful and ethical, ensuring that patients’ rights and privacy are respected.

Supplementary Materials

The following supporting information is available online at: https://www.mdpi.com/article/10.3390/diagnostics15121543/s1. Supplementary Table S1: Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) Checklist.

Author Contributions

Conceptualization and methodology, K.T.-P. and B.V.; software, B.V.; validation, J.M.-A., C.E.A.-F., J.A.E.-G., S.S.-L. and L.d.C.C.-P.; formal analysis, investigation and data curation, M.R.-G., J.I.R.-E., B.V. and K.T.-P.; writing—original draft preparation, M.R.-G., B.V. and K.T.-P.; writing—review and editing, M.R.-G., B.V., K.T.-P., J.A.E.-G., M.B.-R., V.M.-M. and L.d.C.C.-P.; visualization, B.V.; supervision, project administration and funding acquisition, K.T.-P. and B.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Instituto Nacional de Salud Pública, Mexico, and grants from the Consejo Nacional de Humanidades Ciencias y Tecnologías (CONAHCYT, Mexico) to K.T.-P. (CONAHCYT-CBF 2023-2024 grant#3306) and V.M.-M. (grant CATEDRAS-2014-C01-#245520). This work was supported by the UNAM Postdoctoral Program (POSDOC) to B.V.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created.

Acknowledgments

This work was submitted in partial fulfillment of the requirements for the MPH degree of J.I.R.-E. from the Master Program in Public Health of the School of Public Health of Mexico.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ACC  Accuracy
AI  Artificial intelligence
AUC  Area under the receiver operating characteristic (ROC) curve
CC  Cervical cancer
CLF  Classifier
CNN  Convolutional neural network
CT  Computed tomography
CTV  Clinical target volume
DCNN  Deep convolutional neural network
DL  Deep learning
DRS  Diffuse reflectance spectroscopy
DT  Decision tree
FS  Feature selection
GPC  Gaussian process classifier
GMM  Gaussian mixture model
HPV  Human papillomavirus
KNN  K-nearest neighbors
LDA  Linear discriminant analysis
LR  Logistic regression
Mask R-CNN  Mask regional CNN
MRI  Magnetic resonance imaging
ML  Machine learning
MLP  Multi-layer perceptron
NB  Naive Bayes
OAR  Organ at risk
PCA  Principal component analysis
RF  Random forest
RNN  Recurrent neural network
SVM  Support vector machine
ViT  Vision transformer

References

  1. International Agency for Research on Cancer. Cancer Today. Available online: https://gco.iarc.fr/today/en (accessed on 28 May 2024).
  2. Momenimovahed, Z.; Mazidimoradi, A.; Maroofi, P.; Allahqoli, L.; Salehiniya, H.; Alkatout, I. Global, regional and national burden, incidence, and mortality of cervical cancer. Cancer Rep. 2022, 6, e1756. [Google Scholar] [CrossRef] [PubMed]
  3. World Health Organization (WHO). Global Strategy to Accelerate the Elimination of Cervical Cancer as a Public Health Problem; World Health Organization (WHO): Geneva, Switzerland, 2020; Available online: https://www.who.int/publications-detail-redirect/9789240014107 (accessed on 28 May 2024).
  4. Perkins, R.B.; Wentzensen, N.; Guido, R.S.; Schiffman, M. Cervical Cancer Screening: A Review. JAMA 2023, 330, 547–558. [Google Scholar] [CrossRef] [PubMed]
  5. Jha, A.K.; Mithun, S.; Sherkhane, U.B.; Jaiswar, V.; Osong, B.; Purandare, N.; Kannan, S.; Prabhash, K.; Gupta, S.; Vanneste, B.; et al. Systematic review and meta-analysis of prediction models used in cervical cancer. Artif. Intell. Med. 2023, 139, 102549. [Google Scholar] [CrossRef]
  6. Monk, B.J.; Enomoto, T.; Kast, W.M.; McCormack, M.; Tan, D.S.P.; Wu, X.; González-Martín, A. Integration of immunotherapy into treatment of cervical cancer: Recent data and ongoing trials. Cancer Treat. Rev. 2022, 106, 102385. [Google Scholar] [CrossRef]
  7. Gaur, K.; Jagtap, M.M. Role of Artificial Intelligence and Machine Learning in Prediction, Diagnosis, and Prognosis of Cancer. Cureus 2022, 14, e31008. [Google Scholar] [CrossRef]
  8. Lam, T.Y.T.; Cheung, M.F.K.; Munro, Y.L.; Lim, K.M.; Shung, D.; Sung, J.J.Y. Randomized Controlled Trials of Artificial Intelligence in Clinical Practice: Systematic Review. J. Med. Internet Res. 2022, 24, e37188. [Google Scholar] [CrossRef]
  9. Arjmand, B.; Hamidpour, S.K.; Tayanloo-Beik, A.; Goodarzi, P.; Aghayan, H.R.; Adibi, H.; Larijani, B. Machine Learning: A New Prospect in Multi-Omics Data Analysis of Cancer. Front. Genet. 2022, 13, 824451. [Google Scholar] [CrossRef] [PubMed]
  10. Gonzalez, R.; Nejat, P.; Saha, A.; Campbell, C.J.V.; Norgan, A.P.; Lokker, C. Performance of externally validated machine learning models based on histopathology images for the diagnosis, classification, prognosis, or treatment outcome prediction in female breast cancer: A systematic review. J. Pathol. Inform. 2024, 15, 100348. [Google Scholar] [CrossRef]
  11. Li, G.X.; Chen, Y.P.; Hu, Y.Y.; Zhao, W.J.; Lu, Y.Y.; Wan, F.J.; Wu, Z.J.; Wang, X.Q.; Yu, Q.Y. Machine learning for identifying tumor stemness genes and developing prognostic model in gastric cancer. Aging 2024, 16, 6455–6477. [Google Scholar] [CrossRef]
  12. Khalighi, S.; Reddy, K.; Midya, A.; Pandav, K.B.; Madabhushi, A.; Abedalthagafi, M. Artificial intelligence in neuro-oncology: Advances and challenges in brain tumor diagnosis, prognosis, and precision treatment. NPJ Precis. Oncol. 2024, 8, 80. [Google Scholar] [CrossRef]
  13. Fiste, O.; Gkiozos, I.; Charpidou, A.; Syrigos, N.K. Artificial Intelligence-Based Treatment Decisions: A New Era for NSCLC. Cancers 2024, 16, 831. [Google Scholar] [CrossRef] [PubMed]
  14. Huan, Q.; Cheng, S.; Ma, H.F.; Zhao, M.; Chen, Y.; Yuan, X. Machine learning-derived identification of prognostic signature for improving prognosis and drug response in patients with ovarian cancer. J. Cell. Mol. Med. 2024, 28, e18021. [Google Scholar] [CrossRef] [PubMed]
  15. Ng, W.T.; But, B.; Choi, H.C.W.; de Bree, R.; Lee, A.W.M.; Lee, V.H.F.; López, F.; Mäkitie, A.A.; Rodrigo, J.P.; Saba, N.F.; et al. Application of Artificial Intelligence for Nasopharyngeal Carcinoma Management – A Systematic Review. Cancer Manag. Res. 2022, 14, 339–366. [Google Scholar] [CrossRef]
  16. Mitchell, T. Machine Learning; McGraw-Hill: New York, NY, USA, 1997. [Google Scholar]
  17. Chollet, F. Deep Learning with Python, 2nd ed.; Manning Publications: Shelter Island, NY, USA, 2021. [Google Scholar]
  18. Géron, A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, 2nd ed.; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2019. [Google Scholar]
  19. Painuli, D.; Mishra, D.; Bhardwaj, S.; Aggarwal, M. Forecast and prediction of COVID-19 using machine learning. In Data Science for COVID-19; Academic Press: Cambridge, MA, USA, 2021. [Google Scholar] [CrossRef]
  20. Murphy, K.P. Probabilistic Machine Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2022. [Google Scholar]
  21. Imagawa, K.; Shiomoto, K. Performance change with the number of training data: A case study on the binary classification of COVID-19 chest X-ray by using convolutional neural networks. Comput. Biol. Med. 2022, 142, 105251. [Google Scholar] [CrossRef]
  22. Shanthi, P.B.; Faruqi, F.; Hareesha, K.S.; Kudva, R. Deep Convolution Neural Network for Malignancy Detection and Classification in Microscopic Uterine Cervix Cell Images. Asian Pac. J. Cancer Prev. 2019, 20, 3447–3456. [Google Scholar] [CrossRef]
  23. Reijtenbagh, D.; Godart, J.; de Leeuw, A.; Seppenwoolde, Y.; Jürgenliemk-Schulz, I.; Mens, J.W.; Nout, R.; Hoogeman, M. Multi-center analysis of machine-learning predicted dose parameters in brachytherapy for cervical cancer. Radiother. Oncol. 2022, 170, 169–175. [Google Scholar] [CrossRef]
  24. Gençtav, A.; Aksoy, S.; Önder, S. Unsupervised segmentation and classification of cervical cell images. Pattern Recognit. 2012, 45, 4151–4168. [Google Scholar] [CrossRef]
  25. Zhang, H.; Chen, C.; Gao, R.; Yan, Z.; Zhu, Z.; Yang, B.; Chen, C.; Lv, X.; Li, H.; Huang, Z. Rapid identification of cervical adenocarcinoma and cervical squamous cell carcinoma tissue based on Raman spectroscopy combined with multiple machine learning algorithms. Photodiagnosis Photodyn. Ther. 2021, 33, 102104. [Google Scholar] [CrossRef] [PubMed]
  26. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  27. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.U.; Polosukhin, I. Attention is All you Need. In Proceedings of the Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30, pp. 6000–6010. [Google Scholar]
  28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
  29. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Briefings Bioinform. 2017, 19, 1236–1246. [Google Scholar] [CrossRef]
  30. Adlung, L.; Cohen, Y.; Mor, U.; Elinav, E. Machine learning in clinical decision making. Med 2021, 2, 642–665. [Google Scholar] [CrossRef] [PubMed]
  31. Sambyal, D.; Sarwar, A. Recent developments in cervical cancer diagnosis using deep learning on whole slide images: An Overview of models, techniques, challenges and future directions. Micron 2023, 173, 103520. [Google Scholar] [CrossRef]
  32. Yoganathan, S.A.; Paul, S.N.; Paloor, S.; Torfeh, T.; Chandramouli, S.H.; Hammoud, R.; Al-Hammadi, N. Automatic segmentation of magnetic resonance images for high-dose-rate cervical cancer brachytherapy using deep learning. Med. Phys. 2022, 49, 1571–1584. [Google Scholar] [CrossRef]
  33. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef] [PubMed]
  34. Vandenbroucke, J.; Von Elm, E.; Altman, D.; Gøtzsche, P.; Mulrow, C.; Pocock, S.; Poole, C.; Schlesselman, J.; Egger, M. Mejorar la comunicación de estudios observacionales en epidemiología (STROBE): Explicación y elaboración [Improving the reporting of observational studies in epidemiology (STROBE): Explanation and elaboration]. Gac. Sanit. 2009, 23, 158e1–158e28. [Google Scholar] [CrossRef] [PubMed]
  35. Dixon-Woods, M.; Agarwal, S.; Jones, D.; Young, B.; Sutton, A. Synthesising qualitative and quantitative evidence: A review of possible methods. J. Health Serv. Res. Policy 2005, 10, 45–53. [Google Scholar] [CrossRef]
  36. Kim, Y.J.; Ju, W.; Nam, K.H.; Kim, S.N.; Kim, Y.J.; Kim, K.G. RGB Channel Superposition Algorithm with Acetowhite Mask Images in a Cervical Cancer Classification Deep Learning Model. Sensors 2022, 22, 3564. [Google Scholar] [CrossRef]
  37. Yuan, Z.; Wang, Y.; Hu, P.; Zhang, D.; Yan, B.; Lu, H.M.; Zhang, H.; Yang, Y. Accelerate treatment planning process using deep learning generated fluence maps for cervical cancer radiation therapy. Med. Phys. 2022, 49, 2631–2641. [Google Scholar] [CrossRef]
  38. Kruczkowski, M.; Drabik-Kruczkowska, A.; Marciniak, A.; Tarczewska, M.; Kosowska, M.; Szczerska, M. Predictions of cervical cancer identification by photonic method combined with machine learning. Sci. Rep. 2022, 12, 3762. [Google Scholar] [CrossRef]
  39. Fu, L.; Xia, W.; Shi, W.; Cao, G.X.; Ruan, Y.T.; Zhao, X.Y.; Liu, M.; Niu, S.M.; Li, F.; Gao, X. Deep learning based cervical screening by the cross-modal integration of colposcopy, cytology, and HPV test. Int. J. Med. Inform. 2022, 159, 104675. [Google Scholar] [CrossRef]
  40. Ma, C.Y.; Zhou, J.Y.; Xu, X.T.; Guo, J.; Han, M.F.; Gao, Y.Z.; Du, H.; Stahl, J.N.; Maltz, J.S. Deep learning-based auto-segmentation of clinical target volumes for radiotherapy treatment of cervical cancer. J. Appl. Clin. Med. Phys. 2022, 23, e13470. [Google Scholar] [CrossRef] [PubMed]
  41. Liu, W.; Li, C.; Rahaman, M.M.; Jiang, T.; Sun, H.; Wu, X.; Hu, W.; Chen, H.; Sun, C.; Yao, Y.; et al. Is the aspect ratio of cells important in deep learning? A robust comparison of deep learning methods for multi-scale cytopathology cell image classification: From convolutional neural networks to visual transformers. Comput. Biol. Med. 2022, 141, 105026. [Google Scholar] [CrossRef]
  42. Nambu, Y.; Mariya, T.; Shinkai, S.; Umemoto, M.; Asanuma, H.; Sato, I.; Hirohashi, Y.; Torigoe, T.; Fujino, Y.; Saito, T. A screening assistance system for cervical cytology of squamous cell atypia based on a two-step combined CNN algorithm with label smoothing. Cancer Med. 2022, 11, 520–529. [Google Scholar] [CrossRef] [PubMed]
  43. Drokow, E.K.; Baffour, A.A.; Effah, C.Y.; Agboyibor, C.; Akpabla, G.S.; Sun, K. Building a predictive model to assist in the diagnosis of cervical cancer. Future Oncol. 2022, 18, 67–84. [Google Scholar] [CrossRef] [PubMed]
  44. Liu, J.; Liang, T.; Peng, Y.; Peng, G.; Sun, L.; Li, L.; Dong, H. Segmentation of acetowhite region in uterine cervical image based on deep learning. Technol. Health Care Off. J. Eur. Soc. Eng. Med. 2022, 30, 469–482. [Google Scholar] [CrossRef]
  45. Mehmood, M.; Rizwan, M.; Gregus Ml, M.; Abbas, S. Machine Learning Assisted Cervical Cancer Detection. Front. Public Health 2021, 9, 788376. [Google Scholar] [CrossRef]
  46. Ali, M.M.; Ahmed, K.; Bui, F.M.; Paul, B.K.; Ibrahim, S.M.; Quinn, J.M.W.; Moni, M.A. Machine learning-based statistical analysis for early stage detection of cervical cancer. Comput. Biol. Med. 2021, 139, 104985. [Google Scholar] [CrossRef]
  47. Chu, R.; Zhang, Y.; Qiao, X.; Xie, L.; Chen, W.; Zhao, Y.; Xu, Y.; Yuan, Z.; Liu, X.; Yin, A.; et al. Risk Stratification of Early-Stage Cervical Cancer with Intermediate-Risk Factors: Model Development and Validation Based on Machine Learning Algorithm. Oncologist 2021, 26, e2217–e2226. [Google Scholar] [CrossRef]
  48. Dong, Y.; Wan, J.; Wang, X.; Xue, J.H.; Zou, J.; He, H.; Li, P.; Hou, A.; Ma, H. A Polarization-Imaging- Based Machine Learning Framework for Quantitative Pathological Diagnosis of Cervical Precancerous Lesions. IEEE Trans. Med. Imaging 2021, 40, 3728–3738. [Google Scholar] [CrossRef]
  49. Fick, R.H.J.; Tayart, B.; Bertrand, C.; Lang, S.C.; Rey, T.; Ciompi, F.; Tilmant, C.; Farre, I.; Hadj, S.B. A Partial Label-Based Machine Learning Approach For Cervical Whole-Slide Image Classification: The Winning TissueNet Solution. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Guadalajara, Mexico, 1–5 November 2021; Volume 2021, pp. 2127–2131. [Google Scholar] [CrossRef]
  50. Chen, J.; Zhang, B. Segmentation of Overlapping Cervical Cells with Mask Region Convolutional Neural Network. Comput. Math. Methods Med. 2021, 2021, 3890988. [Google Scholar] [CrossRef]
  51. Cheng, S.; Liu, S.; Yu, J.; Rao, G.; Xiao, Y.; Han, W.; Zhu, W.; Lv, X.; Li, N.; Cai, J.; et al. Robust whole slide image analysis for cervical cancer screening using deep learning. Nat. Commun. 2021, 12, 5639. [Google Scholar] [CrossRef] [PubMed]
  52. Rahaman, M.M.; Li, C.; Yao, Y.; Kulwa, F.; Wu, X.; Li, X.; Wang, Q. DeepCervix: A deep learning-based framework for the classification of cervical cells using hybrid deep feature fusion techniques. Comput. Biol. Med. 2021, 136, 104649. [Google Scholar] [CrossRef] [PubMed]
  53. Park, Y.R.; Kim, Y.J.; Ju, W.; Nam, K.; Kim, S.; Kim, K.G. Comparison of machine and deep learning for the classification of cervical cancer based on cervicography images. Sci. Rep. 2021, 11, 16143. [Google Scholar] [CrossRef]
  54. Tian, R.; Zhou, P.; Li, M.; Tan, J.; Cui, Z.; Xu, W.; Wei, J.; Zhu, J.; Jin, Z.; Cao, C.; et al. DeepHPV: A deep learning model to predict human papillomavirus integration sites. Briefings Bioinform. 2021, 22, bbaa242. [Google Scholar] [CrossRef]
  55. Da-ano, R.; Lucia, F.; Masson, I.; Abgral, R.; Alfieri, J.; Rousseau, C.; Mervoyer, A.; Reinhold, C.; Pradier, O.; Schick, U.; et al. A transfer learning approach to facilitate ComBat-based harmonization of multicentre radiomic features in new datasets. PLoS ONE 2021, 16, e0253653. [Google Scholar] [CrossRef] [PubMed]
  56. Kaushik, M.; Chandra Joshi, R.; Kushwah, A.S.; Gupta, M.K.; Banerjee, M.; Burget, R.; Dutta, M.K. Cytokine gene variants and socio-demographic characteristics as predictors of cervical cancer: A machine learning approach. Comput. Biol. Med. 2021, 134, 104559. [Google Scholar] [CrossRef]
  57. Shi, J.; Ding, X.; Liu, X.; Li, Y.; Liang, W.; Wu, J. Automatic clinical target volume delineation for cervical cancer in CT images using deep learning. Med. Phys. 2021, 48, 3968–3981. [Google Scholar] [CrossRef]
  58. Ding, D.; Lang, T.; Zou, D.; Tan, J.; Chen, J.; Zhou, L.; Wang, D.; Li, R.; Li, Y.; Liu, J.; et al. Machine learning-based prediction of survival prognosis in cervical cancer. BMC Bioinform. 2021, 22, 331. [Google Scholar] [CrossRef]
  59. Zhu, X.; Li, X.; Ong, K.; Zhang, W.; Li, W.; Li, L.; Young, D.; Su, Y.; Shang, B.; Peng, L.; et al. Hybrid AI-assistive diagnostic model permits rapid TBS classification of cervical liquid-based thin-layer cell smears. Nat. Commun. 2021, 12, 3541. [Google Scholar] [CrossRef]
  60. Chandran, V.; Sumithra, M.G.; Karthick, A.; George, T.; Deivakani, M.; Elakkiya, B.; Subramaniam, U.; Manoharan, S. Diagnosis of Cervical Cancer based on Ensemble Deep Learning Network using Colposcopy Images. BioMed Res. Int. 2021, 2021, 5584004. [Google Scholar] [CrossRef]
  61. Jiang, X.; Li, J.; Kan, Y.; Yu, T.; Chang, S.; Sha, X.; Zheng, H.; Luo, Y.; Wang, S. MRI Based Radiomics Approach With Deep Learning for Prediction of Vessel Invasion in Early-Stage Cervical Cancer. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 995–1002. [Google Scholar] [CrossRef] [PubMed]
  62. Christley, S.; Ostmeyer, J.; Quirk, L.; Zhang, W.; Sirak, B.; Giuliano, A.R.; Zhang, S.; Monson, N.; Tiro, J.; Lucas, E.; et al. T Cell Receptor Repertoires Acquired via Routine Pap Testing May Help Refine Cervical Cancer and Precancer Risk Estimates. Front. Immunol. 2021, 12, 624230. [Google Scholar] [CrossRef] [PubMed]
  63. Ke, J.; Shen, Y.; Lu, Y.; Deng, J.; Wright, J.D.; Zhang, Y.; Huang, Q.; Wang, D.; Jing, N.; Liang, X.; et al. Quantitative analysis of abnormalities in gynecologic cytopathology with deep learning. Lab. Investig. J. Tech. Methods Pathol. 2021, 101, 513–524. [Google Scholar] [CrossRef]
  64. Rigaud, B.; Anderson, B.M.; Yu, Z.H.; Gobeli, M.; Cazoulat, G.; Söderberg, J.; Samuelsson, E.; Lidberg, D.; Ward, C.; Taku, N.; et al. Automatic Segmentation Using Deep Learning to Enable Online Dose Optimization During Adaptive Radiation Therapy of Cervical Cancer. Int. J. Radiat. Oncol. Biol. Phys. 2021, 109, 1096–1110. [Google Scholar] [CrossRef]
  65. Urushibara, A.; Saida, T.; Mori, K.; Ishiguro, T.; Sakai, M.; Masuoka, S.; Satoh, T.; Masumoto, T. Diagnosing uterine cervical cancer on a single T2-weighted image: Comparison between deep learning versus radiologists. Eur. J. Radiol. 2021, 135, 109471. [Google Scholar] [CrossRef]
  66. Wentzensen, N.; Lahrmann, B.; Clarke, M.A.; Kinney, W.; Tokugawa, D.; Poitras, N.; Locke, A.; Bartels, L.; Krauthoff, A.; Walker, J.; et al. Accuracy and Efficiency of Deep-Learning-Based Automation of Dual Stain Cytology in Cervical Cancer Screening. J. Natl. Cancer Inst. 2021, 113, 72–79. [Google Scholar] [CrossRef] [PubMed]
  67. Wang, Z.; Chang, Y.; Peng, Z.; Lv, Y.; Shi, W.; Wang, F.; Pei, X.; Xu, X.G. Evaluation of deep learning-based auto-segmentation algorithms for delineating clinical target volume and organs at risk involving data for 125 cervical cancer patients. J. Appl. Clin. Med. Phys. 2020, 21, 272–279. [Google Scholar] [CrossRef]
  68. Liu, Z.; Liu, X.; Guan, H.; Zhen, H.; Sun, Y.; Chen, Q.; Chen, Y.; Wang, S.; Qiu, J. Development and validation of a deep learning algorithm for auto-delineation of clinical target volume and organs at risk in cervical cancer radiotherapy. Radiother. Oncol. J. Eur. Soc. Ther. Radiol. Oncol. 2020, 153, 172–179. [Google Scholar] [CrossRef]
  69. Mao, X.; Pineau, J.; Keyes, R.; Enger, S.A. RapidBrachyDL: Rapid Radiation Dose Calculations in Brachytherapy Via Deep Learning. Int. J. Radiat. Oncol. Biol. Phys. 2020, 108, 802–812. [Google Scholar] [CrossRef]
  70. Xue, Z.; Novetsky, A.P.; Einstein, M.H.; Marcus, J.Z.; Befano, B.; Guo, P.; Demarco, M.; Wentzensen, N.; Long, L.R.; Schiffman, M.; et al. A demonstration of automated visual evaluation of cervical images taken with a smartphone camera. Int. J. Cancer 2020, 147, 2416–2423. [Google Scholar] [CrossRef]
  71. Bao, H.; Bi, H.; Zhang, X.; Zhao, Y.; Dong, Y.; Luo, X.; Zhou, D.; You, Z.; Wu, Y.; Liu, Z.; et al. Artificial intelligence-assisted cytology for detection of cervical intraepithelial neoplasia or invasive cancer: A multicenter, clinical-based, observational study. Gynecol. Oncol. 2020, 159, 171–178. [Google Scholar] [CrossRef] [PubMed]
  72. Cho, B.J.; Choi, Y.J.; Lee, M.J.; Kim, J.H.; Son, G.H.; Park, S.H.; Kim, H.B.; Joo, Y.J.; Cho, H.Y.; Kyung, M.S.; et al. Classification of cervical neoplasms on colposcopic photography using deep learning. Sci. Rep. 2020, 10, 13652. [Google Scholar] [CrossRef]
  73. Ju, Z.; Wu, Q.; Yang, W.; Gu, S.; Guo, W.; Wang, J.; Ge, R.; Quan, H.; Liu, J.; Qu, B. Automatic segmentation of pelvic organs-at-risk using a fusion network model based on limited training samples. Acta Oncol. 2020, 59, 933–939. [Google Scholar] [CrossRef] [PubMed]
  74. Kanai, R.; Ohshima, K.; Ishii, K.; Sonohara, M.; Ishikawa, M.; Yamaguchi, M.; Ohtani, Y.; Kobayashi, Y.; Ota, H.; Kimura, F. Discriminant analysis and interpretation of nuclear chromatin distribution and coarseness using gray-level co-occurrence matrix features for lobular endocervical glandular hyperplasia. Diagn. Cytopathol. 2020, 48, 724–735. [Google Scholar] [CrossRef] [PubMed]
  75. Yuan, C.; Yao, Y.; Cheng, B.; Cheng, Y.; Li, Y.; Li, Y.; Liu, X.; Cheng, X.; Xie, X.; Wu, J.; et al. The application of deep learning based diagnostic system to cervical squamous intraepithelial lesions recognition in colposcopy images. Sci. Rep. 2020, 10, 11639. [Google Scholar] [CrossRef]
  76. Hu, L.; Horning, M.P.; Banik, D.; Ajenifuja, O.K.; Adepiti, C.A.; Yeates, K.; Mtema, Z.; Wilson, B.; Mehanian, C. Deep learning-based image evaluation for cervical precancer screening with a smartphone targeting low resource settings—Engineering approach. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; Volume 2020, pp. 1944–1949. [Google Scholar] [CrossRef]
  77. Wu, Q.; Wang, S.; Zhang, S.; Wang, M.; Ding, Y.; Fang, J.; Wu, Q.; Qian, W.; Liu, Z.; Sun, K.; et al. Development of a Deep Learning Model to Identify Lymph Node Metastasis on Magnetic Resonance Imaging in Patients With Cervical Cancer. JAMA Netw. Open 2020, 3, e2011625. [Google Scholar] [CrossRef]
  78. Wang, T.; Gao, T.; Guo, H.; Wang, Y.; Zhou, X.; Tian, J.; Huang, L.; Zhang, M. Preoperative prediction of parametrial invasion in early-stage cervical cancer with MRI-based radiomics nomogram. Eur. Radiol. 2020, 30, 3585–3593. [Google Scholar] [CrossRef]
  79. Kudva, V.; Prasad, K.; Guruvare, S. Hybrid Transfer Learning for Classification of Uterine Cervix Images for Cervical Cancer Screening. J. Digit. Imaging 2020, 33, 619–631. [Google Scholar] [CrossRef]
  80. Ijaz, M.F.; Attique, M.; Son, Y. Data-Driven Cervical Cancer Prediction Model with Outlier Detection and Over-Sampling Methods. Sensors 2020, 20, 2809. [Google Scholar] [CrossRef]
  81. Bae, J.K.; Roh, H.J.; You, J.S.; Kim, K.; Ahn, Y.; Askaruly, S.; Park, K.; Yang, H.; Jang, G.J.; Moon, K.H.; et al. Quantitative Screening of Cervical Cancers for Low-Resource Settings: Pilot Study of Smartphone-Based Endoscopic Visual Inspection After Acetic Acid Using Machine Learning Techniques. JMIR mHealth uHealth 2020, 8, e16467. [Google Scholar] [CrossRef]
  82. Sornapudi, S.; Brown, G.T.; Xue, Z.; Long, R.; Allen, L.; Antani, S. Comparing Deep Learning Models for Multi-cell Classification in Liquid-based Cervical Cytology Images. AMIA Annu. Symp. Proc. 2020, 2019, 820–827. [Google Scholar] [PubMed]
  83. Shao, J.; Zhang, Z.; Liu, H.; Song, Y.; Yan, Z.; Wang, X.; Hou, Z. DCE-MRI pharmacokinetic parameter maps for cervical carcinoma prediction. Comput. Biol. Med. 2020, 118, 103634. [Google Scholar] [CrossRef] [PubMed]
  84. Takada, A.; Yokota, H.; Watanabe Nemoto, M.; Horikoshi, T.; Matsushima, J.; Uno, T. A multi-scanner study of MRI radiomics in uterine cervical cancer: Prediction of in-field tumor control after definitive radiotherapy based on a machine learning method including peritumoral regions. Jpn. J. Radiol. 2020, 38, 265–273. [Google Scholar] [CrossRef] [PubMed]
  85. Lin, Y.C.; Lin, C.H.; Lu, H.Y.; Chiang, H.J.; Wang, H.K.; Huang, Y.T.; Ng, S.H.; Hong, J.H.; Yen, T.C.; Lai, C.H.; et al. Deep learning for fully automated tumor segmentation and extraction of magnetic resonance radiomics features in cervical cancer. Eur. Radiol. 2020, 30, 1297–1305. [Google Scholar] [CrossRef]
  86. Jihong, C.; Penggang, B.; Xiuchun, Z.; Kaiqiang, C.; Wenjuan, C.; Yitao, D.; Jiewei, Q.; Kerun, Q.; Jing, Z.; Tianming, W. Automated Intensity Modulated Radiation Therapy Treatment Planning for Cervical Cancer Based on Convolution Neural Network. Technol. Cancer Res. Treat. 2020, 19, 1533033820957002. [Google Scholar] [CrossRef]
  87. Liu, Z.; Liu, X.; Xiao, B.; Wang, S.; Miao, Z.; Sun, Y.; Zhang, F. Segmentation of organs-at-risk in cervical cancer CT images with a convolutional neural network. Phys. Medica 2020, 69, 184–191. [Google Scholar] [CrossRef]
  88. Shen, W.C.; Chen, S.W.; Wu, K.C.; Hsieh, T.C.; Liang, J.A.; Hung, Y.C.; Yeh, L.S.; Chang, W.C.; Lin, W.C.; Yen, K.Y.; et al. Prediction of local relapse and distant metastasis in patients with definitive chemoradiotherapy-treated cervical cancer by deep learning from [18F]-fluorodeoxyglucose positron emission tomography/computed tomography. Eur. Radiol. 2019, 29, 6741–6749. [Google Scholar] [CrossRef]
  89. Pathania, D.; Landeros, C.; Rohrer, L.; D’Agostino, V.; Hong, S.; Degani, I.; Avila-Wallace, M.; Pivovarov, M.; Randall, T.; Weissleder, R.; et al. Point-of-care cervical cancer screening using deep learning-based microholography. Theranostics 2019, 9, 8438–8447. [Google Scholar] [CrossRef]
  90. Aljakouch, K.; Hilal, Z.; Daho, I.; Schuler, M.; Krauß, S.D.; Yosef, H.K.; Dierks, J.; Mosig, A.; Gerwert, K.; El-Mashtoly, S.F. Fast and Noninvasive Diagnosis of Cervical Cancer by Coherent Anti-Stokes Raman Scattering. Anal. Chem. 2019, 91, 13900–13906. [Google Scholar] [CrossRef]
  91. Dong, N.; Zhao, L.; Wu, A. Cervical cell recognition based on AGVF-Snake algorithm. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 2031–2041. [Google Scholar] [CrossRef]
  92. Hu, L.; Bell, D.; Antani, S.; Xue, Z.; Yu, K.; Horning, M.P.; Gachuhi, N.; Wilson, B.; Jaiswal, M.S.; Befano, B.; et al. An Observational Study of Deep Learning and Automated Evaluation of Cervical Images for Cancer Screening. J. Natl. Cancer Inst. 2019, 111, 923–932. [Google Scholar] [CrossRef] [PubMed]
  93. Dong, X.; Du, H.; Guan, H.; Zhang, X. Multiscale Time-Sharing Elastography Algorithms and Transfer Learning of Clinicopathological Features of Uterine Cervical Cancer for Medical Intelligent Computing System. J. Med. Syst. 2019, 43, 310. [Google Scholar] [CrossRef] [PubMed]
  94. Geetha, R.; Sivasubramanian, S.; Kaliappan, M.; Vimal, S.; Annamalai, S. Cervical Cancer Identification with Synthetic Minority Oversampling Technique and PCA Analysis using Random Forest Classifier. J. Med. Syst. 2019, 43, 286. [Google Scholar] [CrossRef] [PubMed]
  95. Shen, C.; Gonzalez, Y.; Klages, P.; Qin, N.; Jung, H.; Chen, L.; Nguyen, D.; Jiang, S.B.; Jia, X. Intelligent inverse treatment planning via deep reinforcement learning, a proof-of-principle study in high dose-rate brachytherapy for cervical cancer. Phys. Med. Biol. 2019, 64, 115013. [Google Scholar] [CrossRef]
  96. Zhang, J.; Liu, Z.; Du, B.; He, J.; Li, G.; Chen, D. Binary tree-like network with two-path Fusion Attention Feature for cervical cell nucleus segmentation. Comput. Biol. Med. 2019, 108, 223–233. [Google Scholar] [CrossRef]
  97. Chen, L.; Shen, C.; Zhou, Z.; Maquilan, G.; Albuquerque, K.; Folkert, M.R.; Wang, J. Automatic PET Cervical Tumor Segmentation by Combining Deep Learning and Anatomic Prior. Phys. Med. Biol. 2019, 64, 085019. [Google Scholar] [CrossRef]
  98. Matsuo, K.; Purushotham, S.; Jiang, B.; Mandelbaum, R.S.; Takiuchi, T.; Liu, Y.; Roman, L.D. Survival outcome prediction in cervical cancer: Cox models vs deep-learning model. Am. J. Obstet. Gynecol. 2019, 220, 381.e1–381.e14. [Google Scholar] [CrossRef]
  99. Araújo, F.H.D.; Silva, R.R.V.; Ushizima, D.M.; Rezende, M.T.; Carneiro, C.M.; Campos Bianchi, A.G.; Medeiros, F.N.S. Deep learning for cell image segmentation and ranking. Comput. Med. Imaging Graph. Off. J. Comput. Med. Imaging Soc. 2019, 72, 13–21. [Google Scholar] [CrossRef]
  100. Kan, Y.; Dong, D.; Zhang, Y.; Jiang, W.; Zhao, N.; Han, L.; Fang, M.; Zang, Y.; Hu, C.; Tian, J.; et al. Radiomic signature as a predictive factor for lymph node metastasis in early-stage cervical cancer. J. Magn. Reson. Imaging JMRI 2019, 49, 304–310. [Google Scholar] [CrossRef]
  101. Zhen, X.; Chen, J.; Zhong, Z.; Hrycushko, B.; Zhou, L.; Jiang, S.; Albuquerque, K.; Gu, X. Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: A feasibility study. Phys. Med. Biol. 2017, 62, 8246–8263. [Google Scholar] [CrossRef]
  102. Xie, Y.; Xing, F.; Shi, X.; Kong, X.; Su, H.; Yang, L. Efficient and robust cell detection: A structured regression approach. Med. Image Anal. 2018, 44, 245–254. [Google Scholar] [CrossRef] [PubMed]
  103. Kudva, V.; Prasad, K.; Guruvare, S. Automation of Detection of Cervical Cancer Using Convolutional Neural Networks. Crit. Rev. Biomed. Eng. 2018, 46, 135–145. [Google Scholar] [CrossRef]
  104. Wei, L.; Gan, Q.; Ji, T. Cervical cancer histology image identification method based on texture and lesion area features. Comput. Assist. Surg. 2017, 22, 186–199. [Google Scholar] [CrossRef]
  105. Guo, P.; Banerjee, K.; Joe Stanley, R.; Long, R.; Antani, S.; Thoma, G.; Zuna, R.; Frazier, S.R.; Moss, R.H.; Stoecker, W.V. Nuclei-Based Features for Uterine Cervical Cancer Histology Image Analysis With Fusion-Based Classification. IEEE J. Biomed. Health Inform. 2016, 20, 1595–1607. [Google Scholar] [CrossRef] [PubMed]
  106. Zhao, M.; Wu, A.; Song, J.; Sun, X.; Dong, N. Automatic screening of cervical cells using block image processing. BioMed. Eng. Online 2016, 15, 14. [Google Scholar] [CrossRef]
  107. Weegar, R.; Kvist, M.; Sundström, K.; Brunak, S.; Dalianis, H. Finding Cervical Cancer Symptoms in Swedish Clinical Text using a Machine Learning Approach and NegEx. AMIA Annu. Symp. Proc. 2015, 2015, 1296–1305. [Google Scholar] [PubMed]
  108. Kahng, J.; Kim, E.H.; Kim, H.G.; Lee, W. Development of a cervical cancer progress prediction tool for human papillomavirus-positive Koreans: A support vector machine-based approach. J. Int. Med. Res. 2015, 43, 518–525. [Google Scholar] [CrossRef]
  109. Mu, W.; Chen, Z.; Liang, Y.; Shen, W.; Yang, F.; Dai, R.; Wu, N.; Tian, J. Staging of cervical cancer based on tumor heterogeneity characterized by texture features on 18F-FDG PET images. Phys. Med. Biol. 2015, 60, 5123. [Google Scholar] [CrossRef]
  110. Lu, Z.; Carneiro, G.; Bradley, A.P. An improved joint optimization of multiple level set functions for the segmentation of overlapping cervical cells. IEEE Trans. Image Process. 2015, 24, 1261–1272. [Google Scholar] [CrossRef]
  111. Dong, Y.; Kuang, Q.; Dai, X.; Li, R.; Wu, Y.; Leng, W.; Li, Y.; Li, M. Improving the Understanding of Pathogenesis of Human Papillomavirus 16 via Mapping Protein-Protein Interaction Network. BioMed Res. Int. 2015, 2015, 890381. [Google Scholar] [CrossRef]
  112. Mariarputham, E.J.; Stephen, A. Nominated Texture Based Cervical Cancer Classification. Comput. Math. Methods Med. 2015, 2015, 586928. [Google Scholar] [CrossRef] [PubMed]
  113. Xin, Z.; Yan, W.; Feng, Y.; Yunzhi, L.; Zhang, Y.; Wang, D.; Chen, W.; Peng, J.; Guo, C.; Chen, Z.; et al. An MRI-based machine learning radiomics can predict short-term response to neoadjuvant chemotherapy in patients with cervical squamous cell carcinoma: A multicenter study. Cancer Med. 2023, 12, 19383–19393. [Google Scholar] [CrossRef] [PubMed]
  114. Robinson, D.; Hoong, K.; Kleijn, W.B.; Doronin, A.; Rehbinder, J.; Vizet, J.; Pierangelo, A.; Novikova, T. Polarimetric imaging for cervical pre-cancer screening aided by machine learning: Ex vivo studies. J. Biomed. Opt. 2023, 28, 102904. [Google Scholar] [CrossRef]
  115. Tian, M.; Wang, H.; Liu, X.; Ye, Y.; Ouyang, G.; Shen, Y.; Li, Z.; Wang, X.; Wu, S. Delineation of clinical target volume and organs at risk in cervical cancer radiotherapy by deep learning networks. Med. Phys. 2023, 50, 6354–6365. [Google Scholar] [CrossRef]
  116. Kaur, M.; Singh, D.; Kumar, V.; Lee, H.N. MLNet: Metaheuristics-Based Lightweight Deep Learning Network for Cervical Cancer Diagnosis. IEEE J. Biomed. Health Inform. 2023, 27, 5004–5014. [Google Scholar] [CrossRef]
  117. Ji, J.; Zhang, W.; Dong, Y.; Lin, R.; Geng, Y.; Hong, L. Automated cervical cell segmentation using deep ensemble learning. BMC Med. Imaging 2023, 23, 137. [Google Scholar] [CrossRef]
  118. Liu, Q.; Jiang, N.; Hao, Y.; Hao, C.; Wang, W.; Bian, T.; Wang, X.; Li, H.; Zhang, Y.; Kang, Y.; et al. Identification of lymph node metastasis in pre-operation cervical cancer patients by weakly supervised deep learning from histopathological whole-slide biopsy images. Cancer Med. 2023, 12, 17952–17966. [Google Scholar] [CrossRef] [PubMed]
  119. Kang, Z.; Liu, J.; Ma, C.; Chen, C.; Lv, X.; Chen, C. Early screening of cervical cancer based on tissue Raman spectroscopy combined with deep learning algorithms. Photodiagnosis Photodyn. Ther. 2023, 42, 103557. [Google Scholar] [CrossRef]
  120. Wu, G.; Li, C.; Yin, L.; Wang, J.; Zheng, X. Compared between support vector machine (SVM) and deep belief network (DBN) for multi-classification of Raman spectroscopy for cervical diseases. Photodiagnosis Photodyn. Ther. 2023, 42, 103340. [Google Scholar] [CrossRef]
  121. Ince, O.; Uysal, E.; Durak, G.; Onol, S.; Donmez Yilmaz, B.; Erturk, S.M.; Onder, H. Prediction of carcinogenic human papillomavirus types in cervical cancer from multiparametric magnetic resonance images with machine learning-based radiomics models. Diagn. Interv. Radiol. 2023, 29, 460–468. [Google Scholar] [CrossRef]
  122. Kurita, Y.; Meguro, S.; Tsuyama, N.; Kosugi, I.; Enomoto, Y.; Kawasaki, H.; Uemura, T.; Kimura, M.; Iwashita, T. Accurate deep learning model using semi-supervised learning and Noisy Student for cervical cancer screening in low magnification images. PLoS ONE 2023, 18, e0285996. [Google Scholar] [CrossRef] [PubMed]
  123. Zhu, J.; Yang, C.; Song, S.; Wang, R.; Gu, L.; Chen, Z. Classification of multiple cancer types by combination of plasma-based near-infrared spectroscopy analysis and machine learning modeling. Anal. Biochem. 2023, 669, 115120. [Google Scholar] [CrossRef] [PubMed]
  124. Yang, B.; Liu, Y.; Chen, Z.; Wang, Z.; Zhou, Q.; Qiu, J. Tissues margin-based analytical anisotropic algorithm boosting method via deep learning attention mechanism with cervical cancer. Int. J. Comput. Assist. Radiol. Surg. 2023, 18, 953–959. [Google Scholar] [CrossRef]
  125. Devi, S.; Gaikwad, S.R.; R, H. Prediction and Detection of Cervical Malignancy Using Machine Learning Models. Asian Pac. J. Cancer Prev. APJCP 2023, 24, 1419–1433. [Google Scholar] [CrossRef] [PubMed]
  126. Chen, X.; Pu, X.; Chen, Z.; Li, L.; Zhao, K.N.; Liu, H.; Zhu, H. Application of EfficientNet-B0 and GRU-based deep learning on classifying the colposcopy diagnosis of precancerous cervical lesions. Cancer Med. 2023, 12, 8690–8699. [Google Scholar] [CrossRef]
  127. Salehi, M.; Vafaei Sadr, A.; Mahdavi, S.R.; Arabi, H.; Shiri, I.; Reiazi, R. Deep Learning-based Non-rigid Image Registration for High-dose Rate Brachytherapy in Inter-fraction Cervical Cancer. J. Digit. Imaging 2023, 36, 574–587. [Google Scholar] [CrossRef]
  128. Yu, W.; Lu, Y.; Shou, H.; Xu, H.; Shi, L.; Geng, X.; Song, T. A 5-year survival status prognosis of nonmetastatic cervical cancer patients through machine learning algorithms. Cancer Med. 2023, 12, 6867–6876. [Google Scholar] [CrossRef]
  129. Liu, S.; Chu, R.; Xie, J.; Song, K.; Su, X. Differentiating single cervical cells by mitochondrial fluorescence imaging and deep learning-based label-free light scattering with multi-modal static cytometry. Cytom. Part A J. Int. Soc. Anal. Cytol. 2023, 103, 240–250. [Google Scholar] [CrossRef]
  130. Wang, J.; Chen, Y.; Tu, Y.; Xie, H.; Chen, Y.; Luo, L.; Zhou, P.; Tang, Q. Evaluation of auto-segmentation for brachytherapy of postoperative cervical cancer using deep learning-based workflow. Phys. Med. Biol. 2023, 68, 055012. [Google Scholar] [CrossRef]
  131. Yu, W.; Xiao, C.; Xu, J.; Jin, J.; Jin, X.; Shen, L. Direct Dose Prediction With Deep Learning for Postoperative Cervical Cancer Underwent Volumetric Modulated Arc Therapy. Technol. Cancer Res. Treat. 2023, 22, 15330338231167039. [Google Scholar] [CrossRef]
  132. Huang, M.; Feng, C.; Sun, D.; Cui, M.; Zhao, D. Segmentation of Clinical Target Volume From CT Images for Cervical Cancer Using Deep Learning. Technol. Cancer Res. Treat. 2023, 22, 15330338221139164. [Google Scholar] [CrossRef] [PubMed]
  133. Cao, Y.; Ma, H.; Fan, Y.; Liu, Y.; Zhang, H.; Cao, C.; Yu, H. A deep learning-based method for cervical transformation zone classification in colposcopy images. Technol. Health Care Off. J. Eur. Soc. Eng. Med. 2023, 31, 527–538. [Google Scholar] [CrossRef]
  134. Chen, C.; Cao, Y.; Li, W.; Liu, Z.; Liu, P.; Tian, X.; Sun, C.; Wang, W.; Gao, H.; Kang, S.; et al. The pathological risk score: A new deep learning-based signature for predicting survival in cervical cancer. Cancer Med. 2022, 12, 1051–1063. [Google Scholar] [CrossRef]
  135. He, S.; Xiao, B.; Wei, H.; Huang, S.; Chen, T. SVM classifier of cervical histopathology images based on texture and morphological features. Technol. Health Care Off. J. Eur. Soc. Eng. Med. 2023, 31, 69–80. [Google Scholar] [CrossRef]
  136. Qian, W.; Li, Z.; Chen, W.; Yin, H.; Zhang, J.; Xu, J.; Hu, C. RESOLVE-DWI-based deep learning nomogram for prediction of normal-sized lymph node metastasis in cervical cancer: A preliminary study. BMC Med. Imaging 2022, 22, 221. [Google Scholar] [CrossRef] [PubMed]
  137. Dong, T.; Wang, L.; Li, R.; Liu, Q.; Xu, Y.; Wei, Y.; Jiao, X.; Li, X.; Zhang, Y.; Zhang, Y.; et al. Development of a Novel Deep Learning-Based Prediction Model for the Prognosis of Operable Cervical Cancer. Comput. Math. Methods Med. 2022, 2022, 4364663. [Google Scholar] [CrossRef]
  138. Ji, M.; Zhong, J.; Xue, R.; Su, W.; Kong, Y.; Fei, Y.; Ma, J.; Wang, Y.; Mi, L. Early Detection of Cervical Cancer by Fluorescence Lifetime Imaging Microscopy Combined with Unsupervised Machine Learning. Int. J. Mol. Sci. 2022, 23, 11476. [Google Scholar] [CrossRef]
  139. Ming, Y.; Dong, X.; Zhao, J.; Chen, Z.; Wang, H.; Wu, N. Deep learning-based multimodal image analysis for cervical cancer detection. Methods 2022, 205, 46–52. [Google Scholar] [CrossRef] [PubMed]
  140. Wang, J.; Chen, Y.; Xie, H.; Luo, L.; Tang, Q. Evaluation of auto-segmentation for EBRT planning structures using deep learning-based workflow on cervical cancer. Sci. Rep. 2022, 12, 13650. [Google Scholar] [CrossRef]
  141. Ma, C.Y.; Zhou, J.Y.; Xu, X.T.; Qin, S.B.; Han, M.F.; Cao, X.H.; Gao, Y.Z.; Xu, L.; Zhou, J.J.; Zhang, W.; et al. Clinical evaluation of deep learning-based clinical target volume three-channel auto-segmentation algorithm for adaptive radiotherapy in cervical cancer. BMC Med. Imaging 2022, 22, 123. [Google Scholar] [CrossRef]
  142. Chang, X.; Cai, X.; Dan, Y.; Song, Y.; Lu, Q.; Yang, G.; Nie, S. Self-supervised learning for multi-center magnetic resonance imaging harmonization without traveling phantoms. Phys. Med. Biol. 2022, 67, 145004. [Google Scholar] [CrossRef] [PubMed]
  143. Tao, X.; Chu, X.; Guo, B.; Pan, Q.; Ji, S.; Lou, W.; Lv, C.; Xie, G.; Hua, K. Scrutinizing high-risk patients from ASC-US cytology via a deep learning model. Cancer Cytopathol. 2022, 130, 407–414. [Google Scholar] [CrossRef] [PubMed]
  144. Qilin, Z.; Peng, B.; Ang, Q.; Weijuan, J.; Ping, J.; Hongqing, Z.; Bin, D.; Ruijie, Y. The feasibility study on the generalization of deep learning dose prediction model for volumetric modulated arc therapy of cervical cancer. J. Appl. Clin. Med. Phys. 2022, 23, e13583. [Google Scholar] [CrossRef]
  145. Xue, Y.; Zheng, X.; Wu, G.; Wang, J. Rapid diagnosis of cervical cancer based on serum FTIR spectroscopy and support vector machines. Lasers Med. Sci. 2023, 38, 276. [Google Scholar] [CrossRef]
  146. Gay, S.S.; Kisling, K.D.; Anderson, B.M.; Zhang, L.; Rhee, D.J.; Nguyen, C.; Netherton, T.; Yang, J.; Brock, K.; Jhingran, A.; et al. Identifying the optimal deep learning architecture and parameters for automatic beam aperture definition in 3D radiotherapy. J. Appl. Clin. Med. Phys. 2023, 24, e14131. [Google Scholar] [CrossRef]
  147. Shen, Y.; Tang, X.; Lin, S.; Jin, X.; Ding, J.; Shao, M. Automatic dose prediction using deep learning and plan optimization with finite-element control for intensity modulated radiation therapy. Med. Phys. 2024, 51, 545–555. [Google Scholar] [CrossRef]
  148. Aljrees, T. Improving prediction of cervical cancer using KNN imputer and multi-model ensemble learning. PLoS ONE 2024, 19, e0295632. [Google Scholar] [CrossRef] [PubMed]
  149. Munshi, R.M. Novel ensemble learning approach with SVM-imputed ADASYN features for enhanced cervical cancer prediction. PLoS ONE 2024, 19, e0296107. [Google Scholar] [CrossRef]
  150. Jeong, S.; Yu, H.; Park, S.H.; Woo, D.; Lee, S.J.; Chong, G.O.; Han, H.S.; Kim, J.C. Comparing deep learning and handcrafted radiomics to predict chemoradiotherapy response for locally advanced cervical cancer using pretreatment MRI. Sci. Rep. 2024, 14, 1180. [Google Scholar] [CrossRef]
  151. Stegmüller, T.; Abbet, C.; Bozorgtabar, B.; Clarke, H.; Petignat, P.; Vassilakos, P.; Thiran, J.P. Self-supervised learning-based cervical cytology for the triage of HPV-positive women in resource-limited settings and low-data regime. arXiv 2023, arXiv:2302.05195. [Google Scholar] [CrossRef]
  152. Wu, K.C.; Chen, S.W.; Hsieh, T.C.; Yen, K.Y.; Chang, C.J.; Kuo, Y.C.; Chang, R.F.; Chia-Hung, K. Early prediction of distant metastasis in patients with uterine cervical cancer treated with definitive chemoradiotherapy by deep learning using pretreatment [18F]fluorodeoxyglucose positron emission tomography/computed tomography. Nucl. Med. Commun. 2024, 45, 196–202. [Google Scholar] [CrossRef] [PubMed]
  153. Wu, Z.; Wang, D.; Xu, C.; Peng, S.; Deng, L.; Liu, M.; Wu, Y. Clinical target volume (CTV) automatic delineation using deep learning network for cervical cancer radiotherapy: A study with external validation. J. Appl. Clin. Med. Phys. 2025, 26, e14553. [Google Scholar] [CrossRef] [PubMed]
  154. Xin, W.; Rixin, S.; Linrui, L.; Zhihui, Q.; Long, L.; Yu, Z. Machine learning-based radiomics for predicting outcomes in cervical cancer patients undergoing concurrent chemoradiotherapy. Comput. Biol. Med. 2024, 177, 108593. [Google Scholar] [CrossRef] [PubMed]
  155. Chen, L.; Shen, C.; Zhou, Z.; Maquilan, G.; Thomas, K.; Folkert, M.R.; Albuquerque, K.; Wang, J. Accurate segmenting of cervical tumors in PET imaging based on similarity between adjacent slices. Comput. Biol. Med. 2018, 97, 30–36. [Google Scholar] [CrossRef]
  156. Chen, X.; Liu, W.; Thai, T.C.; Castellano, T.; Gunderson, C.C.; Moore, K.; Mannel, R.S.; Liu, H.; Zheng, B.; Qiu, Y. Developing a new radiomics-based CT image marker to detect lymph node metastasis among cervical cancer patients. Comput. Methods Programs Biomed. 2020, 197, 105759. [Google Scholar] [CrossRef]
  157. Zheng, C.; Shen, Q.; Zhao, L.; Wang, Y. Utilising deep learning networks to classify ZEB2 expression images in cervical cancer. Br. J. Hosp. Med. 2024, 85, 1–13. [Google Scholar] [CrossRef]
  158. Brenes, D.; Salcedo, M.P.; Coole, J.B.; Maker, Y.; Kortum, A.; Schwarz, R.A.; Carns, J.; Vohra, I.S.; Possati-Resende, J.C.; Antoniazzi, M.; et al. Multiscale Optical Imaging Fusion for Cervical Precancer Diagnosis: Integrating Widefield Colposcopy and High-Resolution Endomicroscopy. IEEE Trans. Bio-Med. Eng. 2024, 71, 2547–2556. [Google Scholar] [CrossRef]
  159. Shandilya, G.; Gupta, S.; Almogren, A.; Bharany, S.; Altameem, A.; Rehman, A.U.; Hussen, S. Enhancing advanced cervical cell categorization with cluster-based intelligent systems by a novel integrated CNN approach with skip mechanisms and GAN-based augmentation. Sci. Rep. 2024, 14, 29040. [Google Scholar] [CrossRef]
  160. Mathivanan, S.K.; Francis, D.; Srinivasan, S.; Khatavkar, V.; P, K.; Shah, M.A. Enhancing cervical cancer detection and robust classification through a fusion of deep learning models. Sci. Rep. 2024, 14, 10812. [Google Scholar] [CrossRef]
  161. Wang, C.W.; Liou, Y.A.; Lin, Y.J.; Chang, C.C.; Chu, P.H.; Lee, Y.C.; Wang, C.H.; Chao, T.K. Artificial intelligence-assisted fast screening cervical high grade squamous intraepithelial lesion and squamous cell carcinoma diagnosis and treatment planning. Sci. Rep. 2021, 11, 16244. [Google Scholar] [CrossRef]
  162. Dong, B.; Xue, H.; Li, Y.; Li, P.; Chen, J.; Zhang, T.; Chen, L.; Pan, D.; Liu, P.; Sun, P. Classification and diagnosis of cervical lesions based on colposcopy images using deep fully convolutional networks: A man-machine comparison cohort study. Fundam. Res. 2025, 5, 419–428. [Google Scholar] [CrossRef] [PubMed]
  163. Xiao, T.; Wang, C.; Yang, M.; Yang, J.; Xu, X.; Shen, L.; Yang, Z.; Xing, H.; Ou, C.Q. Use of Virus Genotypes in Machine Learning Diagnostic Prediction Models for Cervical Cancer in Women With High-Risk Human Papillomavirus Infection. JAMA Netw. Open 2023, 6, e2326890. [Google Scholar] [CrossRef] [PubMed]
  164. He, S.; Zhu, G.; Zhou, Y.; Yang, B.; Wang, J.; Wang, Z.; Wang, T. Predictive models for personalized precision medical intervention in spontaneous regression stages of cervical precancerous lesions. J. Transl. Med. 2024, 22, 686. [Google Scholar] [CrossRef] [PubMed]
  165. Wang, Y.; Wang, T.; Yan, D.; Zhao, H.; Wang, M.; Liu, T.; Fan, X.; Xu, X. Vaginal microbial profile of cervical cancer patients receiving chemoradiotherapy: The potential involvement of Lactobacillus iners in recurrence. J. Transl. Med. 2024, 22, 575. [Google Scholar] [CrossRef]
  166. Wu, Z.; Li, T.; Han, Y.; Jiang, M.; Yu, Y.; Xu, H.; Yu, L.; Cui, J.; Liu, B.; Chen, F.; et al. Development of models for cervical cancer screening: Construction in a cross-sectional population and validation in two screening cohorts in China. BMC Med. 2021, 19, 197. [Google Scholar] [CrossRef]
  167. Namalinzi, F.; Galadima, K.R.; Nalwanga, R.; Sekitoleko, I.; Uwimbabazi, L.F.R. Prediction of precancerous cervical cancer lesions among women living with HIV on antiretroviral therapy in Uganda: A comparison of supervised machine learning algorithms. BMC Women’s Health 2024, 24, 393. [Google Scholar] [CrossRef]
  168. Liu, S.; Zhou, Y.; Wang, C.; Shen, J.; Zheng, Y. Prediction of lymph node status in patients with early-stage cervical cancer based on radiomic features of magnetic resonance imaging (MRI) images. BMC Med. Imaging 2023, 23, 101. [Google Scholar] [CrossRef]
  169. Senthilkumar, G.; Ramakrishnan, J.; Frnda, J.; Ramachandran, M.; Gupta, D.; Tiwari, P.; Shorfuzzaman, M.; Mohammed, M.A. Incorporating Artificial Fish Swarm in Ensemble Classification Framework for Recurrence Prediction of Cervical Cancer. IEEE Access 2021, 9, 83876–83886. [Google Scholar] [CrossRef]
  170. Suvanasuthi, R.; Therasakvichya, S.; Kanchanapiboon, P.; Promptmas, C.; Chimnaronk, S. Analysis of precancerous lesion-related microRNAs for early diagnosis of cervical cancer in the Thai population. Sci. Rep. 2025, 15, 142. [Google Scholar] [CrossRef]
  171. Du, S.; Zhao, Y.; Lv, C.; Wei, M.; Gao, Z.; Meng, X. Applying Serum Proteins and MicroRNA as Novel Biomarkers for Early-Stage Cervical Cancer Detection. Sci. Rep. 2020, 10, 9033. [Google Scholar] [CrossRef]
  172. Liu, Y.; Dong, T.F.; Li, P.J.; Chen, L.B.; Song, T. MRI-based radiomics features for the non-invasive prediction of FIGO stage in cervical carcinoma: A multi-center study. Magn. Reson. Imaging 2024, 110, 170–175. [Google Scholar] [CrossRef] [PubMed]
  173. Yi, J.; Lei, X.; Zhang, L.; Zheng, Q.; Jin, J.; Xie, C.; Jin, X.; Ai, Y. The Influence of Different Ultrasonic Machines on Radiomics Models in Prediction Lymph Node Metastasis for Patients with Cervical Cancer. Technol. Cancer Res. Treat. 2022, 21, 15330338221118412. [Google Scholar] [CrossRef]
  174. Ye, Y.; Li, M.; Pan, Q.; Fang, X.; Yang, H.; Dong, B.; Yang, J.; Zheng, Y.; Zhang, R.; Liao, Z. Machine learning-based classification of deubiquitinase USP26 and its cell proliferation inhibition through stabilizing KLF6 in cervical cancer. Comput. Biol. Med. 2024, 168, 107745. [Google Scholar] [CrossRef]
  175. Guo, L.; Wang, W.; Xie, X.; Wang, S.; Zhang, Y. Machine learning-based models for genomic predicting neoadjuvant chemotherapeutic sensitivity in cervical cancer. Biomed. Pharmacother. 2023, 159, 114256. [Google Scholar] [CrossRef]
  176. Ma, J.H.; Huang, Y.; Liu, L.Y.; Feng, Z. An 8-gene DNA methylation signature predicts the recurrence risk of cervical cancer. J. Int. Med. Res. 2021, 49, 03000605211018443. [Google Scholar] [CrossRef] [PubMed]
  177. Ramesh, K.; Agarwal, P.; Ahuja, V.; Mir, B.A.; Yuriy, S.; Altuwairiqi, M.; Nuagah, S.J. Biomedical Application of Identified Biomarkers Gene Expression Based Early Diagnosis and Detection in Cervical Cancer with Modified Probabilistic Neural Network. Contrast Media Mol. Imaging 2022, 2022, 4946154. [Google Scholar] [CrossRef] [PubMed]
  178. Monthatip, K.; Boonnag, C.; Muangmool, T.; Charoenkwan, K. A machine learning-based prediction model of pelvic lymph node metastasis in women with early-stage cervical cancer. J. Gynecol. Oncol. 2023, 35, e17. [Google Scholar] [CrossRef]
  179. Félix, M.M.; Tavares, M.V.; Santos, I.P.; Batista de Carvalho, A.L.M.; Batista de Carvalho, L.A.E.; Marques, M.P.M. Cervical Squamous Cell Carcinoma Diagnosis by FTIR Microspectroscopy. Molecules 2024, 29, 922. [Google Scholar] [CrossRef]
  180. Kawahara, D.; Nishibuchi, I.; Kawamura, M.; Yoshida, T.; Koh, I.; Tomono, K.; Sekine, M.; Takahashi, H.; Kikuchi, Y.; Kudo, Y.; et al. Radiomic Analysis for Pretreatment Prediction of Recurrence Post-Radiotherapy in Cervical Squamous Cell Carcinoma Cancer. Diagnostics 2022, 12, 2346. [Google Scholar] [CrossRef]
  181. Zhang, X.; Yan, W.; Jin, H.; Yu, B.; Zhang, H.; Ding, B.; Chen, X.; Zhang, Y.; Xia, Q.; Meng, D.; et al. Transcriptional and post-transcriptional regulation of CARMN and its anti-tumor function in cervical cancer through autophagic flux blockade and MAPK cascade inhibition. J. Exp. Clin. Cancer Res. 2024, 43, 305. [Google Scholar] [CrossRef]
  182. Park, S.H.; Hahm, M.H.; Bae, B.K.; Chong, G.O.; Jeong, S.Y.; Na, S.; Jeong, S.; Kim, J.C. Magnetic resonance imaging features of tumor and lymph node to predict clinical outcome in node-positive cervical cancer: A retrospective analysis. Radiat. Oncol. 2020, 15, 86. [Google Scholar] [CrossRef] [PubMed]
  183. Zhang, X.F.; Wu, H.Y.; Liang, X.W.; Chen, J.L.; Li, J.; Zhang, S.; Liu, Z. Deep-learning-based radiomics of intratumoral and peritumoral MRI images to predict the pathological features of adjuvant radiotherapy in early-stage cervical squamous cell carcinoma. BMC Women’s Health 2024, 24, 182. [Google Scholar] [CrossRef]
  184. Cai, Z.; Li, S.; Xiong, Z.; Lin, J.; Sun, Y. Multimodal MRI-based deep-radiomics model predicts response in cervical cancer treated with neoadjuvant chemoradiotherapy. Sci. Rep. 2024, 14, 19090. [Google Scholar] [CrossRef]
  185. William, W.; Ware, A.; Basaza-Ejiri, A.H.; Obungoloch, J. A review of image analysis and machine learning techniques for automated cervical cancer screening from pap-smear images. Comput. Methods Programs Biomed. 2018, 164, 15–22. [Google Scholar] [CrossRef]
  186. Al Mudawi, N.; Alazeb, A. A Model for Predicting Cervical Cancer Using Machine Learning Algorithms. Sensors 2022, 22, 4132. [Google Scholar] [CrossRef]
  187. Rhee, D.J.; Jhingran, A.; Rigaud, B.; Netherton, T.; Cardenas, C.E.; Zhang, L.; Vedam, S.; Kry, S.; Brock, K.K.; Shaw, W.; et al. Automatic contouring system for cervical cancer using convolutional neural networks. Med. Phys. 2020, 47, 5648–5658. [Google Scholar] [CrossRef] [PubMed]
  188. Yang, C.; Qin, L.H.; Xie, Y.E.; Liao, J.Y. Deep learning in CT image segmentation of cervical cancer: A systematic review and meta-analysis. Radiat. Oncol. 2022, 17, 175. [Google Scholar] [CrossRef] [PubMed]
  189. Rahimi, M.; Akbari, A.; Asadi, F.; Emami, H. Cervical cancer survival prediction by machine learning algorithms: A systematic review. BMC Cancer 2023, 23, 341. [Google Scholar] [CrossRef]
  190. Chakraborty, C.; Bhattacharya, M.; Pal, S.; Lee, S.S. From machine learning to deep learning: Advances of the recent data-driven paradigm shift in medicine and healthcare. Curr. Res. Biotechnol. 2024, 7, 100164. [Google Scholar] [CrossRef]
  191. Rahman, A.; Debnath, T.; Kundu, D.; Khan, M.S.I.; Aishi, A.A.; Sazzad, S.; Sayduzzaman, M.; Band, S.S.; Rahman, A.; Debnath, T.; et al. Machine learning and deep learning-based approach in smart healthcare: Recent advances, applications, challenges and opportunities. AIMS Public Health 2024, 11, 58–109. [Google Scholar] [CrossRef]
  192. Pandey, M.; Fernandez, M.; Gentile, F.; Isayev, O.; Tropsha, A.; Stern, A.C.; Cherkasov, A. The transformational role of GPU computing and deep learning in drug discovery. Nat. Mach. Intell. 2022, 4, 211–221. [Google Scholar] [CrossRef]
  193. Wang, Z.; Liu, K.; Li, J.; Zhu, Y.; Zhang, Y. Various Frameworks and Libraries of Machine Learning and Deep Learning: A Survey. Arch. Comput. Methods Eng. 2024, 31, 1–24. [Google Scholar] [CrossRef]
  194. Mienye, I.D.; Swart, T.G. A Comprehensive Review of Deep Learning: Architectures, Recent Advances, and Applications. Information 2024, 15, 755. [Google Scholar] [CrossRef]
  195. Alzubaidi, L.; Bai, J.; Al-Sabaawi, A.; Santamaría, J.; Albahri, A.S.; Al-dabbagh, B.S.N.; Fadhel, M.A.; Manoufali, M.; Zhang, J.; Al-Timemy, A.H.; et al. A survey on deep learning tools dealing with data scarcity: Definitions, challenges, solutions, tips, and applications. J. Big Data 2023, 10, 46. [Google Scholar] [CrossRef]
  196. Hakami, A. Strategies for overcoming data scarcity, imbalance, and feature selection challenges in machine learning models for predictive maintenance. Sci. Rep. 2024, 14, 9645. [Google Scholar] [CrossRef] [PubMed]
  197. Vazquez, B.; Hevia-Montiel, N.; Perez-Gonzalez, J.; Haro, P. Weighted–VAE: A deep learning approach for multimodal data generation applied to experimental T. cruzi infection. PLoS ONE 2025, 20, e0315843. [Google Scholar] [CrossRef] [PubMed]
  198. Apicella, A.; Isgrò, F.; Prevete, R. Don’t Push the Button! Exploring Data Leakage Risks in Machine Learning and Transfer Learning. arXiv 2024, arXiv:2401.13796. [Google Scholar] [CrossRef]
  199. Yoshizawa, R.; Yamamoto, K.; Ohtsuki, T. Investigation of Data Leakage in Deep-Learning-Based Blood Pressure Estimation Using Photoplethysmogram/Electrocardiogram. IEEE Sens. J. 2023, 23, 13311–13318. [Google Scholar] [CrossRef]
  200. Tampu, I.E.; Eklund, A.; Haj-Hosseini, N. Inflation of test accuracy due to data leakage in deep learning-based classification of OCT images. Sci. Data 2022, 9, 580. [Google Scholar] [CrossRef]
  201. Kapoor, S.; Narayanan, A. Leakage and the reproducibility crisis in machine-learning-based science. Patterns 2023, 4, 100804. [Google Scholar] [CrossRef]
  202. Xu, Y.; Goodacre, R. On Splitting Training and Validation Set: A Comparative Study of Cross-Validation, Bootstrap and Systematic Sampling for Estimating the Generalization Performance of Supervised Learning. J. Anal. Test. 2018, 2, 249–262. [Google Scholar] [CrossRef] [PubMed]
  203. Oeding, J.F.; Krych, A.J.; Pearle, A.D.; Kelly, B.T.; Kunze, K.N. Medical Imaging Applications Developed Using Artificial Intelligence Demonstrate High Internal Validity Yet Are Limited in Scope and Lack External Validation. Arthrosc. J. Arthrosc. Relat. Surg. 2025, 41, 455–472. [Google Scholar] [CrossRef] [PubMed]
  204. Santos, C.S.; Amorim-Lopes, M. Externally validated and clinically useful machine learning algorithms to support patient-related decision-making in oncology: A scoping review. BMC Med. Res. Methodol. 2025, 25, 45. [Google Scholar] [CrossRef] [PubMed]
  205. Shaharuddin, S.; Abdul Maulud, K.N.; Syed Abdul Rahman, S.A.F.; Che Ani, A.I.; Pradhan, B. The role of IoT sensor in smart building context for indoor fire hazard scenario: A systematic review of interdisciplinary articles. Internet Things 2023, 22, 100803. [Google Scholar] [CrossRef]
  206. Woldaregay, A.Z.; Årsand, E.; Walderhaug, S.; Albers, D.; Mamykina, L.; Botsis, T.; Hartvigsen, G. Data-driven modeling and prediction of blood glucose dynamics: Machine learning applications in type 1 diabetes. Artif. Intell. Med. 2019, 98, 109–134. [Google Scholar] [CrossRef]
  207. Lawan, A.A.; Cavus, N.; Yunusa, R.; Abdulrazak, U.I.; Tahir, S. Chapter 12—Fundamentals of machine-learning modeling for behavioral screening and diagnosis of autism spectrum disorder. In Neural Engineering Techniques for Autism Spectrum Disorder; El-Baz, A.S., Suri, J.S., Eds.; Academic Press: Cambridge, MA, USA, 2023; pp. 253–268. [Google Scholar] [CrossRef]
  208. Viering, T.; Loog, M. The Shape of Learning Curves: A Review. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 7799–7819. [Google Scholar] [CrossRef]
  209. Taylor, M.J. Legal Bases For Disclosing Confidential Patient Information for Public Health: Distinguishing Between Health Protection And Health Improvement. Med. Law Rev. 2015, 23, 348–374. [Google Scholar] [CrossRef] [PubMed]
  210. Sharif, M.I.; Mehmood, M.; Uddin, M.P.; Siddique, K.; Akhtar, Z.; Waheed, S. Federated Learning for Analysis of Medical Images: A Survey. J. Comput. Sci. 2024, 20, 1610–1621. [Google Scholar] [CrossRef]
  211. Liu, X.; Xie, L.; Wang, Y.; Zou, J.; Xiong, J.; Ying, Z.; Vasilakos, A.V. Privacy and Security Issues in Deep Learning: A Survey. IEEE Access 2021, 9, 4566–4593. [Google Scholar] [CrossRef]
  212. Golla, G. Security and Privacy Challenges in Deep Learning Models. arXiv 2023, arXiv:2311.13744. [Google Scholar] [CrossRef]
  213. Meng, H.; Wagner, C.; Triguero, I. Explaining time series classifiers through meaningful perturbation and optimisation. Inf. Sci. 2023, 645, 119334. [Google Scholar] [CrossRef]
  214. Teng, Q.; Liu, Z.; Song, Y.; Han, K.; Lu, Y. A survey on the interpretability of deep learning in medical diagnosis. Multimed. Syst. 2022, 28, 2335–2355. [Google Scholar] [CrossRef] [PubMed]
  215. Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 2019, 267, 1–38. [Google Scholar] [CrossRef]
  216. Beam, A.L.; Manrai, A.K.; Ghassemi, M. Challenges to the Reproducibility of Machine Learning Models in Health Care. JAMA 2020, 323, 305–306. [Google Scholar] [CrossRef]
  217. Mislan, K.A.S.; Heer, J.M.; White, E.P. Elevating The Status of Code in Ecology. Trends Ecol. Evol. 2016, 31, 4–7. [Google Scholar] [CrossRef] [PubMed]
  218. Torres-Poveda, K.; Piña-Sánchez, P.; Vallejo-Ruiz, V.; Lizano, M.; Cruz-Valdez, A.; Juárez-Sánchez, P.; Garza-Salazar, J.d.l.; Manzo-Merino, J. Molecular Markers for the Diagnosis of High-Risk Human Papillomavirus Infection and Triage of Human Papillomavirus-Positive Women. Rev. Investig. Clín. 2020, 72, 198–212. [Google Scholar] [CrossRef]
  219. Goodman, A. Deep-Learning-Based Evaluation of Dual Stain Cytology for Cervical Cancer Screening: A New Paradigm. J. Natl. Cancer Inst. 2021, 113, 1451–1452. [Google Scholar] [CrossRef]
  220. Desai, K.T.; Befano, B.; Xue, Z.; Kelly, H.; Campos, N.G.; Egemen, D.; Gage, J.C.; Rodriguez, A.C.; Sahasrabuddhe, V.; Levitz, D.; et al. The development of “automated visual evaluation” for cervical cancer screening: The promise and challenges in adapting deep-learning for clinical testing: Interdisciplinary principles of automated visual evaluation in cervical screening. Int. J. Cancer 2022, 150, 741–752. [Google Scholar] [CrossRef]
  221. U.S. Food and Drug Administration. Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products. Available online: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological (accessed on 28 May 2025).
  222. World Health Organization (WHO). Atlas of Visual Inspection of the Cervix with Acetic Acid for Screening, Triage, and Assessment for Treatment. Available online: https://screening.iarc.fr/atlasvia.php (accessed on 28 May 2025).
  223. Li, J.; Hu, P.; Gao, H.; Shen, N.; Hua, K. Classification of cervical lesions based on multimodal features fusion. Comput. Biol. Med. 2024, 177, 108589. [Google Scholar] [CrossRef]
  224. Mennella, C.; Maniscalco, U.; De Pietro, G.; Esposito, M. Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon 2024, 10, e26297. [Google Scholar] [CrossRef]
  225. Wu, T.; Lucas, E.; Zhao, F.; Basu, P.; Qiao, Y. Artificial intelligence strengthens cervical cancer screening - present and future. Cancer Biol. Med. 2024, 21, 864–879. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart of the search procedure following PRISMA-ScR (PRISMA Extension for Scoping Reviews).
Figure 2. Distribution of the selected articles per clinical application using ML and DL for CC.
Figure 3. Distribution of targets grouped by clinical applications. The targets appear in order of highest to lowest frequency by clinical application.
Figure 3. Distribution of targets grouped by clinical applications. The targets appear in order of highest to lowest frequency by clinical application.
Diagnostics 15 01543 g003
Figure 4. Distribution of the selected articles per prediction task in CC.
Figure 5. Distribution of the most common ML and DL models used per prediction task.
Figure 6. Distribution of studies grouped by ML, DL, and the fusion of ML and DL: (A) visualizes the distribution of studies according to implementation techniques; (B) lists the number of articles published per year.
Figure 7. Distribution of the most common metrics used to measure model performance. AUC: area under the curve; DICE: Dice similarity coefficient; AUROC: area under the receiver operating characteristic curve; MAE: mean absolute error.
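Of the metrics in Figure 7, the Dice similarity coefficient (DICE) is the standard choice for the segmentation studies in this review. A minimal sketch of how it is typically computed on binary masks follows; the code is illustrative only and is not taken from any of the reviewed studies:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DICE = 2*|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2x3 masks: 2 overlapping pixels, 3 foreground pixels in each mask,
# so DICE = 2*2 / (3 + 3) ≈ 0.667.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, truth), 3))  # 0.667
```

A DICE of 1.0 means perfect overlap with the reference contour; values reported in the auto-segmentation studies are typically interpreted on this 0–1 scale.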
Figure 8. Distribution of explainability tools used in the literature.
Figure 9. Distribution of the most common databases found in the literature.
Figure 10. Most common limitations identified in the studies.
Figure 11. Distribution of available code and datasets.
Figure 12. Applications of ML and DL in cervical cancer prediction across countries and targets.
Figure 13. Most common study designs identified in the literature.
Figure 14. Most frequently studied populations.
Figure 15. Medical specialists most frequently involved in the reviewed studies.
Figure 16. Overview of specialists, inputs, and outputs detected in the reviewed studies by clinical application. Created with BioRender.com.
Table 1. The quality assessment form used in the scoping review for observational studies assigned scores as follows for questions 1, 2, 4, 6, 7, 8, 9, 10, and 11: 0, no description; 1, limited description; and 2, comprehensive description.
Section | Question
Introduction | Q1 Is the scientific background adequately described?
Introduction | Q2 Are the goals clearly outlined?
Methods | Q3 What is the study design? (1 cross-sectional; 2 case–control; 3 cohort study; 4 clinical trial)
Methods | Q4 Are the inclusion criteria and participant selection clearly outlined?
Methods | Q5 Sample size (0 if <20, 1 if between 20 and 100, 2 if >100)
Methods | Q6 Is the method (validity) explained?
Methods | Q7 Are the statistical analyses suitable?
Results | Q8 Are subjects’ characteristics provided?
Results | Q9 Are the results understandable?
Discussion | Q10 Are the study results compared and discussed in relation to other studies published in the literature?
Discussion | Q11 Are study limitations discussed?
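The Table 1 rubric can be expressed as a short script: most questions are rated 0 (no description), 1 (limited description), or 2 (comprehensive description), Q3 records the study-design code, Q5 is scored from the sample size, and the per-study total in Table 2 is the sum over Q1–Q11. A minimal sketch follows; the function names `score_sample_size` and `total_quality_score` are illustrative, not from the review:

```python
def score_sample_size(n: int) -> int:
    """Q5 rubric from Table 1: 0 if <20 participants, 1 if 20-100, 2 if >100."""
    if n < 20:
        return 0
    return 1 if n <= 100 else 2

def total_quality_score(scores: dict) -> int:
    """Sum the eleven per-question scores (Q1-Q11), as in the Table 2 totals."""
    return sum(scores[f"Q{i}"] for i in range(1, 12))

# Example: the Table 2 row for Kim et al. 2022 [36] (2,2,1,1,2,2,2,2,2,1,0).
kim = {f"Q{i}": s for i, s in enumerate([2, 2, 1, 1, 2, 2, 2, 2, 2, 1, 0], start=1)}
print(total_quality_score(kim))  # 17
```

Running the sum over the Kim et al. [36] scores reproduces the total of 17 reported in Table 2.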
Table 2. Quality assessment analysis using STROBE scale for observational studies.
Ref. | Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8 Q9 Q10 Q11 | Total
Kim et al. 2022 [36] | 2 2 1 1 2 2 2 2 2 1 0 | 17
Yuan et al. 2022 [37] | 1 2 1 1 1 0 1 1 1 1 0 | 10
Kruczkowski et al. 2022 [38] | 2 2 1 2 2 1 0 1 1 1 0 | 13
Yoganathan et al. [32] | 1 2 1 1 1 2 0 2 2 2 0 | 14
Fu et al. 2022 [39] | 1 2 2 1 2 1 2 2 2 1 0 | 16
Ma et al. 2022 [40] | 2 2 1 2 2 1 2 2 2 2 2 | 20
Liu et al. 2022 [41] | 1 2 1 1 2 0 1 1 1 1 0 | 11
Nambu et al. 2022 [42] | 2 2 1 1 2 1 2 1 2 2 2 | 18
Drokow et al. 2022 [43] | 1 2 1 1 2 0 1 1 1 1 0 | 11
Liu et al. 2022 [44] | 2 2 1 1 2 1 1 2 2 1 2 | 17
Mehmood et al. 2021 [45] | 2 2 1 0 2 1 1 0 2 1 0 | 12
Ali et al. 2021 [46] | 1 2 1 1 2 0 1 1 1 1 0 | 11
Chu et al. 2021 [47] | 2 2 3 2 2 2 2 2 2 2 2 | 23
Dong et al. 2021 [48] | 1 2 1 1 2 0 1 1 1 1 0 | 11
Fick et al. 2021 [49] | 1 2 1 1 2 0 1 1 1 1 0 | 11
Chen et al. 2021 [50] | 2 2 1 1 1 0 1 1 2 2 0 | 13
Cheng et al. 2021 [51] | 2 2 1 1 2 1 1 1 2 2 0 | 15
Rahaman et al. 2021 [52] | 2 2 1 0 2 1 1 1 1 1 2 | 14
Park et al. 2021 [53] | 2 2 1 1 2 1 2 2 2 2 2 | 19
Tian et al. 2021 [54] | 2 2 1 0 2 1 0 0 1 1 0 | 10
Da-ano et al. 2021 [55] | 2 2 3 1 2 1 1 2 1 1 2 | 18
Kaushik et al. 2021 [56] | 2 2 1 0 2 1 1 1 1 1 0 | 12
Shi et al. 2021 [57] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Ding et al. 2021 [58] | 2 2 1 0 2 1 1 1 1 1 0 | 12
Zhu et al. 2021 [59] | 2 2 1 0 2 1 1 1 1 1 0 | 12
Chandran et al. 2021 [60] | 2 2 1 0 2 1 2 1 1 1 2 | 15
Jian et al. 2021 [61] | 2 2 1 0 2 1 1 1 1 1 1 | 13
Christley et al. 2021 [62] | 2 2 3 1 2 1 2 1 1 1 1 | 17
Ke et al. 2021 [63] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Rigaud et al. 2021 [64] | 2 2 1 1 3 1 1 1 1 1 2 | 16
Urushibara et al. 2021 [65] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Wentzensen et al. 2021 [66] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Wang et al. 2020 [67] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Liu et al. 2020 [68] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Mao et al. 2020 [69] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Xue et al. 2020 [70] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Bao et al. 2020 [71] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Cho et al. 2020 [72] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Ju et al. 2020 [73] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Kanai et al. 2020 [74] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Yuan et al. 2020 [75] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Hu et al. 2020 [76] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Wu et al. 2020 [77] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Wang et al. 2020 [78] | 2 2 3 1 2 1 1 1 1 1 1 | 16
Kudva et al. 2020 [79] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Ijaz et al. 2020 [80] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Bae et al. 2020 [81] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Sornapudi et al. 2020 [82] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Shao et al. 2020 [83] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Takada et al. 2020 [84] | 2 2 3 1 1 1 1 1 1 1 2 | 16
Lin et al. 2020 [85] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Jihong et al. 2020 [86] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Liu et al. 2020 [87] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Shen et al. 2019 [88] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Pathania et al. 2019 [89] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Aljakouch et al. 2019 [90] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Shanthi et al. 2019 [22] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Dong et al. 2019 [91] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Hu et al. 2019 [92] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Dong et al. 2019 [93] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Geetha et al. 2019 [94] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Shen et al. 2019 [95] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Zhang et al. 2019 [96] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Chen et al. 2019 [97] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Matsuo et al. 2019 [98] | 2 2 3 1 2 1 1 1 1 1 1 | 16
Araujo et al. 2019 [99] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Kan et al. 2019 [100] | 2 2 3 1 2 1 1 1 1 1 1 | 16
Zhen et al. 2017 [101] | 2 2 3 1 2 1 1 1 1 1 1 | 16
Xie et al. 2018 [102] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Kudva et al. 2018 [103] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Wei et al. 2017 [104] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Guo et al. 2016 [105] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Zhao et al. 2016 [106] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Weegar et al. 2015 [107] | 2 2 3 1 2 1 1 1 1 1 1 | 16
Kahng et al. 2015 [108] | 2 2 3 1 2 1 1 1 1 1 1 | 16
Mu et al. 2015 [109] | 1 2 1 1 1 1 1 1 1 1 1 | 12
Zhi et al. 2015 [110] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Dong et al. 2015 [111] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Mariarputham et al. 2015 [112] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Xin et al. 2023 [113] | 2 2 4 1 2 1 1 1 1 1 1 | 17
Robinson et al. 2023 [114] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Tian et al. 2023 [115] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Kaur et al. 2023 [116] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Ji et al. 2023 [117] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Liu et al. 2023 [118] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Kang et al. 2023 [119] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Wu et al. 2023 [120] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Ince et al. 2023 [121] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Kurita et al. 2023 [122] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Zhu et al. 2023 [123] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Yang et al. 2023 [124] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Devi et al. 2023 [125] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Chen et al. 2023 [126] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Salehi et al. 2023 [127] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Yu et al. 2023 [128] | 2 2 3 1 2 1 1 1 1 1 1 | 16
Liu et al. 2023 [129] | 2 2 4 1 2 1 1 1 1 1 1 | 17
Wang et al. 2023 [130] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Yu et al. 2023 [131] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Huang et al. 2023 [132] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Cao et al. 2023 [133] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Chen et al. 2022 [134] | 2 2 3 1 2 1 1 1 1 1 2 | 17
He et al. 2023 [135] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Qian et al. 2022 [136] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Dong et al. 2022 [137] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Ji et al. 2022 [138] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Ming et al. 2022 [139] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Wang et al. 2022 [140] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Ma et al. 2022 [141] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Chang et al. 2022 [142] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Tao et al. 2022 [143] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Qilin et al. 2022 [144] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Xue et al. 2023 [145] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Gay et al. 2023 [146] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Shen et al. 2024 [147] | 2 2 1 1 2 1 1 1 1 1 1 | 14
Aljrees et al. 2024 [148] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Munshi et al. 2024 [149] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Jeong et al. 2024 [150] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Stegmüller et al. 2023 [151] | 2 2 1 1 1 1 1 1 1 1 1 | 13
Wu et al. 2024 [152] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Wu et al. 2025 [153] | 2 2 4 1 2 1 1 1 1 1 2 | 18
Xin et al. 2024 [154] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Chen et al. 2018 [155] | 2 2 1 1 1 1 1 1 1 1 2 | 14
Chen et al. 2020 [156] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Matsuo et al. 2019 [98] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Zheng et al. 2024 [157] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Brenes et al. 2024 [158] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Shandilya et al. 2024 [159] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Mathivanan et al. 2024 [160] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Wang et al. 2021 [161] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Dong et al. 2025 [162] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Xiao et al. 2023 [163] | 2 2 1 1 2 1 1 1 1 1 2 | 15
He et al. 2024 [164] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Wang et al. 2024 [165] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Wu et al. 2021 [166] | 2 2 3 1 2 1 1 1 1 1 2 | 17
Namalinzi et al. 2024 [167] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Liu et al. 2023 [168] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Senthilkumar et al. 2021 [169] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Suvanasuthi et al. 2025 [170] | 2 2 1 1 1 1 1 1 1 1 2 | 14
Du et al. 2020 [171] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Liu et al. 2024 [172] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Yi et al. 2022 [173] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Ye et al. 2024 [174] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Guo et al. 2023 [175] | 2 2 1 1 2 1 1 1 1 1 2 | 15
Hang et al. 2021 [176]2231111111216
Ramesh et al. 2022 [177]2211211111215
Monthatip et al. 2023 [178]2211211111215
Felix et al. 2024 [179]2211111111214
Kawahara et al. 2022 [180]2231111111216
Zhang et al. 2024 [181]2211111111214
Park et al. 2020 [182]2231111111216
Kruczkowski et al. 2022 [38]2211111111214
Zhang et al. 2024 [183]2211111111214
Cai et al. 2024 [184]2231211111217
Table 3. Summary of ML-based applications reported in the literature.
Ref. | Year | Clinical Application | Prediction Task | Target | Datasets | No. Folds for CV | Model | Best Performance (Test Set) | Ext. Val.
[170] | 2025 | Diagnosis | Classification | Screening for CC | DNA | 10 | RF | ACC = 90.9% | -
[148] | 2024 | Diagnosis | Classification | Screening for CC | Clinical history | 5 | KNN | ACC = 99% | -
[149] | 2024 | Diagnosis | Classification | Screening for CC | Clinical history | 5 | SVM | ACC = 99% | -
[154] | 2024 | Prognosis | Classification | Survival | MRI images | - | RF | ACC = 86% | -
[164] | 2024 | Prognosis | Classification | Cancer progression | Clinical history | 10 | RF | ACC = 86% | -
[165] | 2024 | Treatment | Classification | Recurrence (Cancer progression) | DNA | 10 | RF | ACC = 84% | -
[167] | 2024 | Diagnosis | Classification | Screening for CC | Clinical history | - | RF | ACC = 90% | -
[172] | 2024 | Prognosis | Classification | Screening for CC | MRI images | 10 | SVM | AUC = 76% | Yes
[174] | 2024 | Treatment | Classification | Therapeutic dose and planning | DNA | 10 | RF | ACC = 96% | -
[179] | 2024 | Diagnosis | Classification | Stages of CC | Dose volume | 5 | SVM | ACC = 96% | -
[181] | 2024 | Prognosis | Classification | Stages of CC | DNA | - | LightGBM | AUC = 98.7% | -
[113] | 2023 | Prognosis | Classification | Cancer progression | MRI images | 5 | SVM | ACC = 90% | Yes
[121] | 2023 | Diagnosis | Classification | Stages of CC | MRI images | 5 | SVM | ACC > 83% | -
[123] | 2023 | Diagnosis | Classification | Stages of CC | Spectral data | 10 | SVM | ACC = 95% | -
[124] | 2023 | Treatment | Classification | Therapeutic dose and planning | CT images | - | AAA | ACC = 73% | -
[125] | 2023 | Diagnosis | Classification | Stages of CC | Clinical history | 5 | LR, DT | ACC > 88% | -
[128] | 2023 | Prognosis | Regression | Survival | Histopathology images, Clinical history | 5 | XGBoost | AUC = 83% | -
[135] | 2023 | Diagnosis | Classification | Stages of CC | Histopathology images | - | SVM | ACC > 87% | -
[168] | 2023 | Prognosis | Classification | Cancer progression | MRI images, Clinical history | 5 | MNB | ACC = 77% | -
[175] | 2023 | Treatment | Classification | Cancer progression | DNA | - | RF | - | -
[178] | 2023 | Prognosis | Classification | Cancer progression | CT images, Clinical history | 10 | SVM | ACC = 90.1% | -
[38] | 2022 | Diagnosis | Classification | Stages of CC | Interferometry | 3 | NB | ACC = 92% | -
[39] | 2022 | Diagnosis | Classification | Screening for CC | Colposcopy images, Cytology images, HPV test | - | MLR | AUC = 92.1% | -
[138] | 2022 | Diagnosis | Classification | Stages of CC | Fluorescence images | - | KNN | SEN = 90% | -
[173] | 2022 | Prognosis | Classification | Cancer progression | Ultrasound | - | SVM | ACC = 85% | -
[177] | 2022 | Diagnosis | Classification | Stages of CC | DNA | - | SVM | ACC = 91.5% | -
[180] | 2022 | Prognosis | Classification | Recurrence (Cancer progression) | MRI images | 5 | LASSO | ACC = 93.1% | -
[38] | 2022 | Prognosis | Classification | Stages of CC | Dose volume | 3 | NB | ACC = 92% | -
[46] | 2021 | Diagnosis | Classification | Screening for CC | Biopsy results, Cytology images | - | RT, RF, KNN | ACC = 95.5% | -
[47] | 2021 | Prognosis | Classification | Cancer progression | Histopathology images, Clinical history | 5 | LR, SVM | AUC = 88% | -
[55] | 2021 | Prognosis | Classification | Survival | MRI images, CT images, Clinical history | 10 | RF | AUC > 84% | -
[56] | 2021 | Prognosis | Classification | Cancer progression | DNA | 5 | Ridge | ACC = 84.7% | -
[58] | 2021 | Prognosis | Classification | Survival | DNA | 10 | SVM | AUC > 91% | -
[62] | 2021 | Diagnosis | Classification | Cancer progression | DNA | LOOCV | LR | ACC = 95% | -
[166] | 2021 | Diagnosis | Classification | Screening for CC | Ultrasound | - | LR | AUC = 91% | Yes
[176] | 2021 | Prognosis | Classification | Recurrence (Cancer progression) | DNA | - | SVM | ACC = 83% | Yes
[74] | 2020 | Diagnosis | Classification | Stages of CC | Cytology images | 10 | LSVM | ACC = 84% | -
[78] | 2020 | Prognosis | Classification | Cancer progression | MRI images | 10 | SVM | C-index = 0.96 | -
[80] | 2020 | Diagnosis | Classification | Screening for CC | Clinical history | 10 | RF | ACC > 95% | -
[81] | 2020 | Diagnosis | Classification | Stages of CC | Colposcopy images | 10 | KNN | ACC = 80% | -
[156] | 2020 | Prognosis | Classification | Cancer progression | CT images | - | SVM | ACC = 76% | -
[171] | 2020 | Diagnosis | Classification | Screening for CC | DNA | 10 | DT | SEN = 88.6% | Yes
[182] | 2020 | Prognosis | Classification | Survival | MRI images | - | SF | AUC = 79.6% | -
[94] | 2019 | Diagnosis | Classification | Screening for CC | Clinical history | 10 | RF | ACC > 94% | -
[100] | 2019 | Prognosis | Classification | Cancer progression | MRI images, Clinical history | - | SVM | AUROC = 75% | -
[104] | 2017 | Diagnosis | Classification | Stages of CC | Histopathology images | - | SVM | ACC > 90% | -
[105] | 2016 | Diagnosis | Classification | Stages of CC | Histopathology images | 10 | SVM | ACC = 88.5% | -
[106] | 2016 | Diagnosis | Classification | Screening for CC | Cytology images | 10 | SVM | ACC = 98% | -
[108] | 2015 | Prognosis | Classification | Cancer progression | Clinical history | 10 | SVM | ACC = 74% | -
[109] | 2015 | Treatment | Image segmentation | Segmentation of targets/OARs | CT images | LOOCV | SVM | DSC = 91.78 | -
[111] | 2015 | Prognosis | Classification | Cancer progression | DNA | 5 | SVM | ACC = 80% | -
[112] | 2015 | Diagnosis | Classification | Stages of CC | Cytology images | 10 | SVM | Precision > 87% | -
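Most of the ML studies summarized above report accuracy or AUC estimated by k-fold cross-validation (the "No. Folds for CV" column), with the SVM being the most frequently used model. As a minimal illustrative sketch only — on synthetic data, not code from any reviewed study — this is how such an estimate is typically obtained with scikit-learn:

```python
# Illustrative sketch: 10-fold stratified cross-validation of an SVM classifier,
# mirroring the evaluation protocol reported by many of the studies in Table 3.
# The dataset here is synthetic, standing in for tabular screening features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Synthetic binary-classification data (e.g., a stand-in for clinical history features)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Stratified 10-fold CV preserves the class ratio in every fold
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc_scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=cv, scoring="accuracy")
auc_scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=cv, scoring="roc_auc")

print(f"ACC = {acc_scores.mean():.1%} ± {acc_scores.std():.1%}")
print(f"AUC = {auc_scores.mean():.1%} ± {auc_scores.std():.1%}")
```

Reporting the mean and standard deviation over folds, rather than a single split, is what makes the tabulated figures comparable across studies.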
Table 4. Summary of DL-based applications reported in the literature.
Ref. | Year | Clinical Application | Prediction Task | Target | Datasets | No. Folds for CV | Model | Best Performance (Test Set) | Ext. Val.
[147] | 2024 | Treatment | Classification | Therapeutic dose and planning | CT images, Treatment plan, Dose volume | - | U-Net | AUC = 94% | -
[150] | 2024 | Prognosis | Classification | Cancer progression | MRI images | 5 | CNN | ACC = 78% | -
[152] | 2024 | Prognosis | Regression | Cancer progression | CT images | 5 | DNN | ACC = 75% | -
[153] | 2024 | Treatment | Image segmentation | Delineation of the CTV | CT images | - | ResCANet | DSC = 74.8 | Yes
[157] | 2024 | Diagnosis | Classification | Stages of CC | DNA | - | EfficientNet, DenseNet, InceptionNet | ACC = 94.4% | -
[158] | 2024 | Diagnosis | Image segmentation | Screening for CC | Colposcopy images | - | Efficient U-Net, LSTM-Attention | ACC = 87% | -
[159] | 2024 | Diagnosis | Classification | Stages of CC | Cytology images | - | CNN | ACC = 99.11% | -
[183] | 2024 | Treatment | Classification | Therapeutic dose and planning | MRI images | 10 | ResNet101, MLP | AUC = 87% | -
[114] | 2023 | Diagnosis | Image segmentation | Stages of CC | Interferometric measurements | LOOCV | CNN | ACC = 81% | -
[115] | 2023 | Treatment | Image segmentation | Delineation of the CTV | CT images | 5 | AFN | DSC > 88 | -
[116] | 2023 | Diagnosis | Classification | Stages of CC | Cytology images | - | MLNet | ACC > 99% | Yes
[117] | 2023 | Diagnosis | Image segmentation | Screening for CC | Cytology images | - | CNN | DSC = 0.94 | -
[118] | 2023 | Treatment | Classification | Therapeutic dose and planning | Histopathology images | - | ViT, RNN | ACC = 90% | Yes
[119] | 2023 | Diagnosis | Classification | Stages of CC | Spectral data | 5 | CNN | ACC = 94% | -
[120] | 2023 | Diagnosis | Classification | Stages of CC | Spectral data | LOOCV | DBN | ACC = 93% | -
[122] | 2023 | Diagnosis | Classification | Stages of CC | Cytology images | - | CNN | ACC = 87% | -
[126] | 2023 | Diagnosis | Classification | Stages of CC | Colposcopy images | - | EfficientNet, GRU | ACC = 91% | -
[127] | 2023 | Treatment | Image segmentation | Therapeutic dose and planning | CT images | - | CNN | Jaccard > 0.86 | -
[129] | 2023 | Diagnosis | Classification | Screening for CC | Histopathology images | 5 | CNN | ACC = 76% | -
[130] | 2023 | Treatment | Image segmentation | Segmentation of targets/OARs | CT images | - | CNN | DSC = 0.77 | Yes
[131] | 2023 | Treatment | Regression | Therapeutic dose and planning | CT images | - | 3DResUnet | Dose = 5% | -
[132] | 2023 | Diagnosis | Image segmentation | Segmentation of targets/OARs | CT images | 4 | CNN | DSC = 0.80 | -
[133] | 2023 | Diagnosis | Classification | Stages of CC | Colposcopy images | - | CNN | ACC > 88% | -
[145] | 2023 | Diagnosis | Classification | Stages of CC | Spectral data | LOOCV | DeepLabv3+, D-LinkNet | DSC = 0.95 | -
[146] | 2023 | Treatment | Regression | Therapeutic dose and planning | CT images | - | U-Net | Score > 1% | -
[163] | 2023 | Diagnosis | Classification | Stages of CC | Clinical history | - | Stacking models | AUROC = 87% | -
[36] | 2022 | Diagnosis | Classification | Stages of CC | Colposcopy images | 5 | ResNet50 | ACC = 81.3% | -
[37] | 2022 | Treatment | Regression | Therapeutic dose and planning | CT images | - | ResNet | DSC > 0.94 | -
[32] | 2022 | Treatment | Image segmentation | Segmentation of targets/OARs | MRI images | - | Inception-ResNetv2 | DSC = 0.72 | -
[40] | 2022 | Treatment | Image segmentation | Segmentation of targets/OARs | CT images | - | CNN | DSC > 0.70 | -
[41] | 2022 | Diagnosis | Classification | Screening for CC | Cytology images | 5 | CNN | ACC = 95.4% | -
[42] | 2022 | Diagnosis | Classification | Screening for CC | Cytology images | - | YOLO, ResNet | ACC = 90.5% | -
[43] | 2022 | Diagnosis | Classification | Screening for CC | Clinical history | 5 | Voting | ACC = 96.6% | -
[44] | 2022 | Diagnosis | Image segmentation | Screening for CC | Colposcopy images | - | DeepLab V3+ | ACC = 91.2% | -
[134] | 2022 | Prognosis | Regression | Survival | Histopathology images | - | CNN | AUC = 80% | -
[136] | 2022 | Prognosis | Regression | Cancer progression | MRI images, PET images | 5 | CNN | AUC = 84% | -
[137] | 2022 | Prognosis | Classification | Cancer progression | Clinical history | - | MLP | AUC = 82% | -
[139] | 2022 | Diagnosis | Classification | Stages of CC | CT images | 5 | CNN | ACC > 60% | -
[140] | 2022 | Treatment | Image segmentation | Segmentation of targets/OARs | CT images | - | CNN | DSC = 0.88 | -
[141] | 2022 | Treatment | Image segmentation | Delineation of the CTV | CT images | - | VN-Net | DSC = 0.81 | -
[142] | 2022 | Diagnosis | Classification | Stages of CC | MRI images | 5 | CycleGAN | AUC = 89% | -
[143] | 2022 | Diagnosis | Classification | Stages of CC | Cytology images | - | CNN | SEN = 89% | -
[144] | 2022 | Treatment | Regression | Therapeutic dose and planning | Dose volume | - | U-Net | MAE = 2.4 | Yes
[161] | 2022 | Treatment | Image segmentation | Segmentation of targets/OARs | MRI images | - | CNN | Precision = 93% | -
[162] | 2022 | Diagnosis | Classification | Stages of CC | Colposcopy images | - | Dense-U-Net | ACC = 89% | Yes
[50] | 2021 | Diagnosis | Image segmentation | Screening for CC | Cytology images | - | CNN | DSC = 0.92 | -
[51] | 2021 | Diagnosis | Classification | Stages of CC | Cytology images | - | ResNet, RNN, CNN | SEN = 95.1% | Yes
[52] | 2021 | Diagnosis | Classification | Stages of CC | Cytology images | - | ResNet | ACC > 90% | Yes
[53] | 2021 | Diagnosis | Classification | Stages of CC | Colposcopy images | 5 | ResNet | AUC = 97% | -
[54] | 2021 | Diagnosis | Classification | HPV type | DNA | - | CNN | AUROC = 85% | Yes
[57] | 2021 | Treatment | Image segmentation | Delineation of the CTV | CT images | 3 | U-Net, CNN | DSC = 0.734 | Yes
[59] | 2021 | Diagnosis | Image segmentation | Screening for CC | Cytology images | 10 | YOLO v3 | SEN = 92% | Yes
[60] | 2021 | Diagnosis | Classification | Screening for CC | Colposcopy images | - | CNN | ACC = 92% | -
[61] | 2021 | Prognosis | Classification | Cancer progression | MRI images | 10 | CNN | AUC = 91% | -
[63] | 2021 | Diagnosis | Classification | Screening for CC | Cytology images | 5 | CNN | ACC = 94% | -
[64] | 2021 | Treatment | Image segmentation | Segmentation of targets/OARs | CT images | - | CNN | DSC = 0.85 | Yes
[65] | 2021 | Diagnosis | Classification | Stages of CC | MRI images | - | XceptionNet | AUC = 93% | -
[66] | 2021 | Diagnosis | Classification | Screening for CC | Cytology images | 5 | CNN | AUC = 77% | -
[67] | 2020 | Treatment | Image segmentation | Delineation of the CTV | CT images | - | CNN | DSC > 0.81 | -
[68] | 2020 | Treatment | Image segmentation | Therapeutic dose and planning | CT images | 5 | CNN | DSC > 0.82 | -
[69] | 2020 | Treatment | Image segmentation | Therapeutic dose and planning | CT images, Dose volume | - | CNN | DVH = 0.73 | -
[70] | 2020 | Diagnosis | Classification | Screening for CC | Colposcopy images | - | Faster R-CNN | AUC > 90% | -
[71] | 2020 | Diagnosis | Classification | Stages of CC | Cytology images | - | CNN | SEN = 100% | -
[72] | 2020 | Diagnosis | Classification | Stages of CC | Colposcopy images | 10 | ResNet | AUC = 78% | -
[73] | 2020 | Treatment | Image segmentation | Segmentation of targets/OARs | CT images | - | CNN | DSC > 0.87 | -
[75] | 2020 | Diagnosis | Image segmentation | Stages of CC | Colposcopy images | - | CNN | ACC = 84% | -
[76] | 2020 | Diagnosis | Classification | Screening for CC | Colposcopy images | - | RetinaNet | AUC = 95% | -
[77] | 2020 | Prognosis | Image segmentation | Cancer progression | MRI images | - | CNN | AUC = 93% | -
[79] | 2020 | Diagnosis | Classification | Screening for CC | Colposcopy images | - | CNN | ACC = 91% | -
[82] | 2020 | Diagnosis | Classification | Stages of CC | Cytology images | - | VGG-19 | ACC = 95% | -
[83] | 2020 | Diagnosis | Classification | Stages of CC | MRI images, Treatment plan | LOOCV | CNN | ACC = 94.3% | -
[85] | 2020 | Treatment | Image segmentation | Therapeutic dose and planning | MRI images | 5 | U-Net | SEN = 89% | -
[86] | 2020 | Treatment | Classification | Therapeutic dose and planning | Dose volume | - | CNN | p < 0.017 | -
[87] | 2020 | Treatment | Image segmentation | Segmentation of targets/OARs | CT images | - | U-Net | DSC > 0.791 | Yes
[88] | 2019 | Prognosis | Classification | Cancer progression | CT images | 7 | CNN | ACC = 89% | -
[89] | 2019 | Diagnosis | Classification | Stages of CC | Interferometric measurements | - | CNN | SEN = 100% | -
[90] | 2019 | Diagnosis | Classification | Stages of CC | Spectral data | LOOCV | CNN | ACC = 100% | -
[22] | 2019 | Diagnosis | Classification | Stages of CC | Cytology images | - | CNN | ACC > 94% | -
[91] | 2019 | Diagnosis | Classification | Stages of CC | Cytology images | 10 | AGVFSM | ACC = 99% | -
[92] | 2019 | Diagnosis | Classification | Screening for CC | Colposcopy images | - | CNN | AUC = 91% | -
[93] | 2019 | Diagnosis | Classification | Stages of CC | CT images | - | CNN | ACC > 90% | -
[95] | 2019 | Treatment | Regression | Therapeutic dose and planning | Dose volume histograms | - | Reinforcement learning | Score > 8.5% | Yes
[96] | 2019 | Diagnosis | Image segmentation | Screening for CC | Cytology images | 5 | CNN | ACC = 91% | -
[97] | 2019 | Treatment | Image segmentation | Segmentation of targets/OARs | CT images | 5 | CNN | DSC = 0.84 | -
[98] | 2019 | Prognosis | Regression | Survival | Clinical history | 5 | FFNN | MAE = 29.3 | -
[99] | 2019 | Diagnosis | Image segmentation | Screening for CC | Cytology images | - | CNN | MAP = 0.936 | -
[98] | 2019 | Prognosis | Regression | Survival | Clinical history | - | FFNN | MAE = 29.3 | -
[102] | 2018 | Diagnosis | Classification | Stages of CC | Cytology images | - | CNN | Precision > 89% | Yes
[103] | 2018 | Diagnosis | Classification | Screening for CC | Colposcopy images | - | CNN | ACC = 100% | -
[155] | 2018 | Treatment | Image segmentation | Segmentation of targets/OARs | CT images, PET images | - | Graph cut | DSC = 0.83 | -
[101] | 2017 | Treatment | Classification | Toxicity prediction in radiotherapy | CT images, Treatment plan | 10 | VGG16 | AUC = 89% | -
[107] | 2015 | Diagnosis | Classification | Screening for CC | Clinical history | - | NER | F1 = 67% | -
[110] | 2015 | Diagnosis | Image segmentation | Stages of CC | Cytology images | - | GMM | DSC = 0.92 | -
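The segmentation studies in Table 4 report the Dice similarity coefficient (DSC) between predicted and reference contours. As a minimal illustration (not taken from any reviewed study), DSC for two binary masks can be computed as 2|A ∩ B| / (|A| + |B|):

```python
# Illustrative sketch: Dice similarity coefficient (DSC) between two binary masks,
# the metric most often reported for CTV/OAR segmentation in Table 4.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """DSC = 2*|intersection| / (|pred| + |target|); 1.0 for two empty masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 masks: 'a' has 4 foreground pixels, 'b' has 3, all inside 'a'
a = np.zeros((4, 4), dtype=int)
a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int)
b[1:3, 1:2] = 1
b[1, 2] = 1

print(dice_coefficient(a, b))  # 2*3 / (4 + 3) ≈ 0.857
```

A DSC of 1.0 means perfect overlap and 0.0 no overlap, which is why values such as DSC = 0.88 in the table indicate close but not exact agreement with the reference delineation.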
Table 5. Summary of ML and DL-based applications reported in the literature.
Ref. | Year | Clinical Application | Prediction Task | Target | Datasets | No. Folds for CV | Model | Best Performance (Test Set) | Ext. Val.
[160] | 2024 | Diagnosis | Classification | Stages of CC | Cytology images | - | ResNet152, LR | ACC = 98% | -
[184] | 2024 | Treatment | Image segmentation | Segmentation of targets/OARs | MRI images | - | SVM, RF, ResNet50 | ACC = 75% | -
[151] | 2023 | Diagnosis | Classification | Stages of CC | Cytology images | 4 | KNN, ResNet50, ViT-S/16 | ACC > 83% | -
[45] | 2021 | Diagnosis | Classification | Screening for CC | Clinical history | - | RF, SNN | ACC = 93.6% | -
[48] | 2021 | Diagnosis | Image segmentation | Stages of CC | Histopathology images | 5 | U-Net, SVM | ACC = 94.4% | -
[49] | 2021 | Diagnosis | Classification | Stages of CC | Histopathology images | 10 | CNN, SVM | ACC = 97.4% | -
[169] | 2021 | Prognosis | Classification | Recurrence (Cancer progression) | DNA | 10 | SVM, RNN | ACC = 92% | -
Table 6. Main computational challenges in cervical cancer using ML and DL models.
1. Data availability
2. Data leakage
3. Limited external validation
4. Limited evaluation of model performance
5. Complex data
6. Privacy issues
7. Lack of explainability
8. Lack of reproducibility
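To make challenge 2 concrete: a common source of data leakage is fitting a preprocessing step (e.g., feature scaling) on the full dataset before cross-validation, so statistics from the test folds leak into training. The following is a hedged sketch on synthetic data (a hypothetical example, not code from any reviewed study) contrasting the leaky protocol with the safe one, where the scaler is refit inside each fold via a scikit-learn Pipeline:

```python
# Illustrative sketch of data leakage in cross-validation (challenge 2, Table 6).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Leaky protocol: the scaler sees the whole dataset before the CV split,
# so each training fold indirectly uses test-fold statistics.
X_leaky = StandardScaler().fit_transform(X)
leaky = cross_val_score(SVC(), X_leaky, y, cv=StratifiedKFold(n_splits=5))

# Safe protocol: the Pipeline refits the scaler on each training fold only,
# keeping every test fold truly unseen.
safe_model = make_pipeline(StandardScaler(), SVC())
safe = cross_val_score(safe_model, X, y, cv=StratifiedKFold(n_splits=5))

print(f"leaky CV accuracy: {leaky.mean():.3f}")
print(f"safe CV accuracy:  {safe.mean():.3f}")
```

With simple scaling the difference is often small, but with supervised steps such as feature selection the leaky protocol can inflate reported performance substantially, which is one reason external validation (challenge 3) remains essential.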
Table 7. Main challenges for clinical implementation.
1. Representativeness of clinical stages of CC in the training of deep learning-based models.
2. Privacy concerns and data security in health care.
3. Integration of AI with clinical workflows for real-time decision-making.
4. Ethical and regulatory considerations.
5. Public perspectives on using AI in their diagnoses and treatment decisions.

Share and Cite

MDPI and ACS Style

Vazquez, B.; Rojas-García, M.; Rodríguez-Esquivel, J.I.; Marquez-Acosta, J.; Aranda-Flores, C.E.; Cetina-Pérez, L.d.C.; Soto-López, S.; Estévez-García, J.A.; Bahena-Román, M.; Madrid-Marina, V.; et al. Machine and Deep Learning for the Diagnosis, Prognosis, and Treatment of Cervical Cancer: A Scoping Review. Diagnostics 2025, 15, 1543. https://doi.org/10.3390/diagnostics15121543
