Review

Exploring Deep Learning Model Opportunities for Cervical Cancer Screening in Vulnerable Public Health Regions

by Renan Chaves de Lima 1,2,* and Juarez Antonio Simões Quaresma 3,*
1 Postgraduate Program in Tropical Diseases (PPGDT), Tropical Medicine Center, Federal University of Pará, Belém 66055-240, PA, Brazil
2 Responsible AI Committee, ABRIA—Brazilian Association of Artificial Intelligence, São Paulo 01427-000, SP, Brazil
3 Department of Pathology, Paulista School of Medicine, Federal University of São Paulo, São Paulo 04023-062, SP, Brazil
* Authors to whom correspondence should be addressed.
Computers 2025, 14(5), 202; https://doi.org/10.3390/computers14050202
Submission received: 5 May 2025 / Revised: 16 May 2025 / Accepted: 19 May 2025 / Published: 21 May 2025

Abstract

Deep learning models offer innovative solutions for cervical cancer screening in vulnerable regions such as the Brazilian Amazon. These tools are particularly relevant in areas with limited access to healthcare services, where the high prevalence of the disease severely affects riverine and indigenous populations. Artificial intelligence can overcome the limitations of traditional screening methods, providing faster and more accurate diagnoses. This enables early disease detection and reduces mortality, improving equitable access to healthcare. Furthermore, the application of these technologies complements global efforts to eliminate cervical cancer, aligning with the WHO strategies. This review emphasizes the need for model adaptation to local realities, which is essential to ensure their effectiveness in low-infrastructure areas, reinforcing their potential to reduce health disparities and expand access to quality diagnostics.

1. Introduction

Cervical cancer is one of the major public health challenges faced by the Northern region of Brazil and a leading cause of death among women, particularly in developing regions where screening and prevention programs are limited.
Indigenous and riverine populations in these regions encounter significant barriers to accessing screening tests, such as the Pap smear. Studies indicate that screening coverage is limited, often occurring outside the recommended age range or involving inadequate sample collection, leading to a high prevalence of advanced cervical lesions [1].
Residents of the Northern region have the lowest indicators of access to and utilization of healthcare services. The region also faces a low availability of physicians, with a ratio of 1 per 1000 inhabitants, roughly seven times lower than that found in the capitals of the Southern region of the country [2].
Furthermore, geographical accessibility, delays in healthcare services, the distance between healthcare facilities and places of residence, and the level of education all affect adherence to healthcare services. Additionally, the lack of qualified professionals in certain areas leads to failures in clinical management and a high risk of releasing false-negative results.
Two out of every three women aged between 30 and 49 have never been screened for cervical cancer. The implementation of screening programs is very low in low- and middle-income countries, where the disease burden is higher [3]. Another important point is that although cytology has significantly reduced the incidence of cervical cancer, its limitations in screening must be acknowledged. The reproducibility of cytology is poor, with sensitivity ranging between 43% and 96%, and the processing of cytological samples requires a large number of skilled cytopathologists [4].
The application of artificial intelligence (AI) in healthcare is increasingly important. Low adherence to cervical cancer screening and cytology limitations hinder early detection, especially in low- and middle-income areas. AI, with its data processing and pattern recognition capabilities, can improve screening and diagnosis, addressing human resource shortages and providing more accurate, efficient results. This review aims to explore how deep learning models can offer innovative solutions for cervical cancer screening in vulnerable, resource-limited regions, expanding access to early diagnosis and supporting global public health improvement.

2. Search Strategy and Selection Criteria

We conducted a comprehensive search in the databases Google Scholar, PubMed, SciELO, and Scopus. We also consulted gray literature, including reports/informative documents from PAHO/WHO, documents from Brazilian government agencies, and prominent preprint articles and books relevant to the field. The search terms included: “AI”, “artificial intelligence”, “machine learning”, “deep learning”, “cervical cancer”, “computer vision”, “digital pathology”, “transformers”, “vision transformers (ViTs)”, “CNN”, “public health”, “health innovation”, “automated screening”, “cervical cancer screening”, “neglected diseases”, “tropical diseases”, “Brazilian Amazon”, “early diagnosis”, “remote areas”, “health infrastructure”, “digital health”, “AI market”, “cost-effectiveness”, “health inequality”, “emerging opportunities”, “AI regulation”, “ethics”, “health safety”.
We primarily searched for studies published between 2020 and 2024 to ensure recent and relevant research on AI and its applications in public health, digital pathology, and cervical cancer screening, with no language restriction; however, only articles in Portuguese and English were included. Additionally, key articles prior to 2020 were used to establish the social, historical, and scientific context, providing a foundation to discuss public health technologies and digital pathology, enabling critical analysis of AI evolution and its impact on disease diagnosis and screening in vulnerable populations.
A total of 215 articles were retrieved in the initial searches. After removing 32 duplicates, 183 articles remained for screening. The identified titles and abstracts were independently screened by two reviewers, with disagreements resolved by consensus.
At the end of the full-text screening, 67 articles were selected for comprehensive review, of which 42 peer-reviewed articles were included for detailed analysis in the present study. Among the included articles, those with field validation studies and practical applications were prioritized, while proof-of-concept models were critically discussed to identify limitations and future opportunities. Articles were excluded if they did not provide empirical studies, systematic reviews, original research, mini-reviews, viewpoints, perspectives, pilot projects, or policy analyses, including work on health innovations beyond cervical cancer management, as well as diagnostic tools and challenges in areas with limited healthcare infrastructure.
It is important to acknowledge that this review presents a narrative synthesis of the literature, focusing on deep learning applications for cervical cancer screening in vulnerable public health regions. Additionally, the search strategy was limited to articles published in English and Portuguese, which may have excluded relevant studies available in other languages. This linguistic restriction represents a potential limitation in capturing the full scope of global research on the topic.
This methodological approach aims to ensure greater transparency in the selection process and strengthen the evidence base used to discuss the applications of deep learning models in cervical cancer screening in vulnerable regions.

3. Cervical Cancer in Vulnerable Public Health Regions

The purpose of this section is to provide readers with a solid foundation on public health challenges, as well as an epidemiological and contextual overview of cervical cancer in vulnerable regions, particularly in the Brazilian Amazon. This contextualization aims to reinforce the relevance of using deep learning technologies in these regions, where access to screening tests is limited, and health disparities are pronounced. The challenges faced by these communities, including difficulties in access, lack of infrastructure, and high mortality rates, are critical factors for understanding the importance of applying AI models as an alternative to overcome local barriers, promoting more accessible and efficient diagnoses.
The estimated number of new cervical cancer cases in Brazil for each year of the 2023–2025 triennium is 17,010, corresponding to an estimated risk of 15.38 cases per 100,000 women. Among women, it is the third most common cancer. Regarding geographic distribution, it is the second most common cancer in the Northern region, with a rate of 20.48 per 100,000 inhabitants [5].
Cervical cancer is the fourth leading cause of cancer death in women, with 342,000 deaths worldwide in 2020. Approximately 90% of new cases and deaths in 2020 occurred in low- and middle-income regions [6]. In terms of mortality in Brazil, cervical cancer was the third leading cause of cancer-related deaths among women in 2021 (4.5/100,000). The Northern region of the country, however, presented a different scenario. In the north, cervical cancer ranks as the leading cause of cancer-related deaths, with a mortality rate of 9.1 per 100,000 women [7].
Cervical cancer, although a significant and prevalent disease in Brazil, is not classified as a Neglected Tropical Disease (NTD). However, it shares certain characteristics with NTDs, particularly in terms of social inequalities and the vulnerability of specific populations.
The persistence of human papillomavirus (HPV) infection, the main etiological agent of cervical cancer, is exacerbated by factors such as limited access to healthcare services, lack of effective screening and vaccination programs, and socioeconomic inequalities. These conditions are common in low-income areas, where healthcare infrastructure is poor and awareness of HPV is limited. Consequently, cervical cancer remains one of the leading causes of cancer-related mortality among women in low- and middle-income countries, reflecting patterns similar to those observed in NTDs [8].
Some researchers suggest that lessons learned from NTDs can serve as a warning for cervical cancer. Globally, cervical cancer is at risk of following a similar path, with stagnation in investment in new therapies and a decline in clinical knowledge. However, unlike NTDs, it is essential to act before cervical cancer reaches this status. Proactive planning is crucial, especially in scenarios where the disease can be eliminated in high-income countries. Without equitable global action, cervical cancer may become a neglected disease, disproportionately affecting low-income women [8]. Therefore, it is imperative that the global health community works collectively towards its elimination and treatment, with the principle of equity as the central foundation of this approach.
In Brazil, cervical cancer is a public health issue that primarily affects women in poorer regions with limited access to healthcare services, especially in rural and peripheral areas. The lack of regular preventive screenings and adequate treatment increases its incidence and mortality. As one of the leading causes of cancer-related deaths globally, including in Brazil and the Amazon region, cervical cancer is a priority for the Brazilian Unified Health System (SUS), which requires scalable, accurate, and accessible diagnostic tools.
In the context of the Amazon, the situation is exacerbated by geographic distance and socioeconomic vulnerability, which hinder access to adequate treatments and continuous screening. According to integrative reviews, public policies focused on equity and expanding access to healthcare services in these regions are necessary, with strategies that take into account the cultural and social particularities of these communities. It is evident that socioeconomic factors continue to contribute to making women more vulnerable to this negative outcome [9].
The importance of cervical cancer control strategies, particularly in more vulnerable regions such as the Brazilian Amazon, must align with the global call from the Director-General of the World Health Organization (WHO) for the elimination of cervical cancer as a public health issue. This call emphasizes the renewed political will to make elimination a reality and urges all stakeholders to unite around this common goal.
The WHO’s global strategy to eliminate cervical cancer by 2030 is a milestone, with ambitious targets to accelerate proven interventions, such as universal health coverage. The initiative aims to transform cervical cancer, often a neglected disease, into a controllable condition. The strategy promotes innovations like new vaccines, screening methods, and the use of AI for diagnosis. Countries are encouraged to invest in research to overcome barriers and ensure the feasibility, equity, and scalability of these interventions, supporting the elimination of cervical cancer as a public health issue [10].
The urgency and need for simple, safe, low-cost, and highly accessible tools to enhance cervical cancer control strategies in Brazil are evident. The application of techniques such as deep learning, which are rapidly evolving, could revolutionize this landscape by offering advanced solutions for the analysis of Pap smears and histopathological biopsies, enabling faster, more accurate diagnoses with broad access for the population.

4. Global Rise of AI

The evolution of AI began with early algorithms based on rules and simple logic, advancing significantly with the emergence of machine learning techniques that enabled the automation of more complex processes. The evolutionary trajectory and historical milestones of AI, from its theoretical foundations to the development of advanced architectures, were essential for enabling practical applications in healthcare. These advances led to the emergence of systems capable of processing and interpreting medical images with precision, driving the progress of digital pathology and, in particular, the automated screening of cervical cancer.
This historical contextualization is essential, as it allows us to communicate with and reach a non-specialist audience, helping them understand that AI is not a recent technology and that its application in medicine goes far beyond the use of popular tools such as today’s chatbots. By valuing the historical development of knowledge, we reinforce AI as a well-established and continuously evolving field.
AI is not a new concept. In 1939, at the onset of World War II, mathematician Alan Turing from Cambridge was recruited by British intelligence at Bletchley Park to decipher the complex Nazi codes, including ‘Enigma’, a cryptographic machine deemed unbreakable. Through his innovative approach, Turing developed the ‘Bombe’, an electromechanical machine that accelerated the decryption of encrypted messages [11]. His contribution not only helped shorten the war but also laid the foundation for the development of computer science and AI.
In this context, Figure 1 presents a timeline of the key historical milestones that have shaped the development of AI, from Alan Turing’s early contributions to decrypting the Enigma in 1939 to recent advances with deep neural networks, Transformers, and revolutionary modern tools.
In 1950, Turing published the paper “Computing Machinery and Intelligence”, where he proposed the famous “Turing Test” as a way to assess whether a machine can exhibit intelligent behavior indistinguishable from that of a human [12].
However, the term AI was only formally discussed in 1956 during the Dartmouth Conference, which is widely recognized as the official starting point of AI as a research field. The idea, however, was proposed a year earlier, in 1955, by John McCarthy, a young mathematics professor at the time. McCarthy coined the term “Artificial Intelligence” in his proposal to bring together a group of scientists at Dartmouth College to explore the idea of machines capable of thinking and learning like humans [13].
This event marked the birth of AI as an academic and scientific discipline, paving the way for the advancements we see today in various fields, including healthcare, robotics, and machine learning.
At the end of the 1950s, the field was dominated by attempts to understand and replicate human cognition through machines, but the models were limited. In response, Frank Rosenblatt developed the Perceptron in 1958, marking one of the first practical AI approaches based on neural networks. Rosenblatt described the Perceptron as a mathematical model inspired by the functioning of biological neurons (Figure 2), and it is considered the foundation for the development of artificial neural networks [14]. The model succeeded at solving simple problems but could not handle non-linearly separable problems, a limitation that ultimately slowed research in neural networks for nearly two decades.
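To make the idea concrete for non-specialist readers, the minimal Python sketch below implements the perceptron’s weighted-sum decision rule and Rosenblatt’s error-correction update; the toy data and hyperparameters are illustrative only and do not come from the cited literature.

```python
# Minimal sketch (illustrative only) of Rosenblatt's perceptron: a weighted sum
# of inputs passed through a step function, trained with the error-correction rule.
import numpy as np

def perceptron_train(X, y, epochs=20, lr=0.1):
    """Learn weights w and bias b so that step(w.x + b) matches labels y in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0   # step activation
            update = lr * (target - pred)              # error-correction rule
            w += update * xi
            b += update
    return w, b

# Toy, linearly separable problem (logical AND)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
print(w, b)  # learned weights separate the two classes
```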
The Hopfield neural network model introduced the concept of “associative memories” in the early 1980s. Hopfield’s study demonstrated the principles of physics to understand how the brain processes information and stores memories. This model helped lay the foundation for neural networks that mimic how biological neurons connect and learn [15].
A milestone in the evolution of deep neural networks was the 1986 study co-authored by Rumelhart, Hinton, and Williams. This work revolutionized machine learning by enabling the efficient training of multilayer networks, primarily through the backpropagation algorithm [16]. This breakthrough allowed AI systems to learn from large volumes of data, making applications such as image recognition and Natural Language Processing (NLP) feasible.
The work of Hopfield and Hinton in the 1980s led to their being awarded the Nobel Prize in Physics in 2024, in recognition of their fundamental contributions to the development of artificial neural networks, which serve as the foundation for today’s advances in AI [17]. It is important to note that Hinton himself stated that he did not invent backpropagation, crediting Rumelhart, who developed it independently after earlier versions by researchers in other fields had gone largely unrecognized. It was Hinton, however, who popularized its application.
This allowed networks to start learning complex patterns from large volumes of data, something crucial for applications in medical diagnostics. For example, the ability of convolutional networks (an efficient analysis method that “breaks down” an image or data into smaller pieces, enabling the easier identification of patterns, shapes, and details) to detect patterns in medical images has become a cornerstone for the development of image classification systems for X-rays, CT scans, and more recently, applications in pathology.
The most impactful work on using Convolutional Neural Networks (CNNs) for automatic feature extraction from images introduced an architecture that became a milestone in computer vision. This model won the ImageNet competition in 2012, demonstrating the effectiveness of CNNs in image classification, including medical applications [18]. This breakthrough propelled the use of CNNs in the analysis of biopsies, X-rays, and other medical images, leading to advancements in the diagnosis of cancer, diabetic retinopathy, and lung diseases.
Another fundamental breakthrough in the evolution of AI was the creation of Transformers, a type of neural network architecture that revolutionized NLP and other AI fields. Introduced in 2017 by Vaswani and collaborators in the preprint “Attention is All You Need”, the Transformer model eliminated the need for recurrent networks (designed to handle sequential data or information with temporal order), allowing for more efficient and scalable parallel processing of large data volumes [19].
Vaswani and collaborators’ preprint [19] was pivotal in optimizing neural network models. The major innovation was the introduction of the self-attention mechanism in the architecture, enabling the model to evaluate the relationships between all words in a sequence simultaneously, comparing each word with the others. This eliminated the need to process text sequentially, as recurrent and convolutional networks did. Since Transformers do not depend on the order of words, they can compute everything in parallel, significantly speeding up the processing.
Just as the brain quickly makes connections between words and ideas, Transformers, through self-attention, evaluate all the words in a sentence simultaneously, adjusting their relevance according to the context. This ability to ‘weigh’ words simultaneously makes the Transformer technology similar to how we process language in human thought.
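The short Python sketch below illustrates the scaled dot-product self-attention operation described above, following the formulation of Vaswani and collaborators [19]; the embedding sizes and random tensors are placeholders for illustration, not values from any model discussed in this review.

```python
# Illustrative sketch of scaled dot-product self-attention: softmax(QK^T / sqrt(d_k)) V.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Apply self-attention to one sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row
    return weights @ V                                  # context-weighted mixture of values

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4
X = rng.normal(size=(seq_len, d_model))                # five placeholder token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)             # (5, 4)
```

Because each row of the attention matrix weighs all tokens at once, the whole computation is a handful of matrix products that can run in parallel, which is the source of the efficiency gains described above.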
The evolution of AI in healthcare has been driven by advances such as CNNs and, more recently, Transformers. The Transformer architecture enabled greater scalability and efficiency, impacting not only NLP but also being adapted for computer vision with Vision Transformers (ViTs), introduced in a preprint publication. ViTs are now being applied in healthcare to analyze medical images more efficiently and accurately [20].
AI has made significant strides in healthcare, revolutionizing precision medicine. A major milestone was the 2021 work by David Baker and collaborators, which transformed computational protein design, demonstrating the use of neural networks to create proteins with novel functions, opening new horizons for innovative therapies [21]. Another breakthrough was the AlphaFold project, led by Demis Hassabis and John Jumper, which solved the protein-folding puzzle by accurately predicting the 3D structure from amino acid sequences [22].
In recognition of these groundbreaking achievements, David Baker, for his work in computational protein design, and Demis Hassabis and John Jumper, for their advancements with AlphaFold, were awarded the 2024 Nobel Prize in Chemistry [23].
These advancements demonstrate how AI is transforming global healthcare, accelerating the development of tools for precise diagnostics and personalized treatments. They enable advanced applications, such as deep learning in cervical cancer screening, promoting accurate diagnoses and effective treatments. In low-income regions, these technologies democratize access to diagnostics and treatments through accessible and scalable tools, such as AI-assisted diagnostic systems that operate remotely and support areas with a shortage of healthcare professionals.

5. Applications of Deep Learning in Cervical Cancer Screening

Cervical cytology screening and subsequent treatment have significantly reduced the incidence and mortality of cervical cancer in developed regions. However, resource-limited areas still face challenges due to restricted access. AI technologies are emerging as a promising solution to overcome these barriers, with the potential to improve early detection and predict the progression of cervical cancer, offering an effective alternative to eliminate the disease in regions with less access to healthcare [24]. There are already several studies suggesting that AI can perform as well as, or even better than, humans in essential healthcare tasks, such as disease diagnosis.
In recent years, deep learning has achieved remarkable success in image classification. This has driven research in medical AI, particularly in specialties such as radiology, pathology, gastroenterology, and ophthalmology, which rely on image interpretation. Furthermore, these models can learn from various types of data, including numbers, text, and combinations of different input types, known as multimodal AI [25].
AI has revolutionized cervical cancer screening and diagnosis, optimizing processes such as cell nucleus segmentation, feature extraction, and cytological sample classification. Algorithms like K-nearest-neighbors (KNN) and Support Vector Machines (SVMs) have achieved up to 93.78% precision, while ensemble classifiers, which combine different algorithms (LSSVM, MLP, RF), have reached up to 98.11%. In diagnosis, deep learning models integrating biomarkers and genetic panels show high precision (97.96%), while techniques like Laser-Induced Breakdown Spectroscopy (LIBS) combined with Principal Component Analysis (PCA) and SVM have the potential for real-time diagnostics with 94.4% precision [24].
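As a rough illustration of the classical, feature-based side of these pipelines, the sketch below assembles an SVM classifier on synthetic stand-in features; the data are simulated, the feature set is hypothetical, and the precision it prints is not comparable to the figures reported in the cited study [24].

```python
# Hedged sketch of a classical pipeline (scaling + SVM) on synthetic stand-in
# features for hand-crafted nucleus/cytoplasm measurements.
from sklearn.datasets import make_classification
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic placeholder for extracted cytology features (not real patient data)
X, y = make_classification(n_samples=600, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("precision:", precision_score(y_test, clf.predict(X_test)))
```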
Deep learning technologies have played a fundamental role in enhancing cervical cancer screening, offering innovative solutions to persistent challenges in the early and accurate detection of the disease. Among the main architectures used, Recurrent Neural Networks (RNNs), CNNs, and ViTs stand out.
RNNs have demonstrated effectiveness in the temporal analysis of sequential data, such as the evolution of lesions over time, providing valuable insights into disease progression in colposcopic examinations [26]. CNNs, in turn, are widely used in cytological image analysis due to their ability to automatically extract relevant features without the need for manual intervention, enabling early detection of lesions with high accuracy compared to traditional methods [27]. Furthermore, ViTs introduce self-attention mechanisms that capture spatial and contextual relationships in high-resolution images, increasing the accuracy of screening models [28].
In a recent scientific study, traditional machine learning models were directly compared to a deep network in the analysis of cervical exams for cancer detection. For instance, a group of researchers evaluated classical algorithms (such as SVM, Random Forest, and XGBoost) against a deep learning model (ResNet-50, a CNN) for identifying cervical lesions in clinical images. The deep learning model showed significantly superior performance: ResNet-50 achieved an AUC of 0.97, while traditional models reached AUCs around 0.79–0.84 (average ~0.82). Additionally, traditional methods required manual feature engineering—prior selection of relevant characteristics (e.g., texture, morphology) performed by experts—whereas the deep neural network automatically learned discriminative features from raw data, without human assumptions. As a result, the deep learning approach not only achieved higher accuracy but also eliminated the need for manual feature extraction and proved to be scalable to large datasets, highlighting its potential for large-scale screening and automated diagnosis of cervical cancer [29].
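For readers interested in how such a CNN is typically adapted to this task, the sketch below outlines transfer learning with a pretrained ResNet-50 in PyTorch; the folder name, class layout, and hyperparameters are assumptions for illustration and do not reproduce the cited study’s setup [29].

```python
# Hedged sketch of fine-tuning a pretrained ResNet-50 for binary cervical-lesion
# classification; "cervix_images/{normal,lesion}/" is a hypothetical folder layout.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet statistics
])

dataset = datasets.ImageFolder("cervix_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)          # replace the classifier head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                           # one illustrative training epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Because the convolutional layers arrive pretrained, only modest amounts of labeled local data are needed to adapt the head, which is one reason transfer learning is attractive for data-scarce settings such as those discussed in this review.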
Although deep learning models have demonstrated promising metrics, such as high sensitivity and AUC in controlled experiments, translating these results into measurable clinical gains and public health improvements remains a challenge. The generalization of these models to vulnerable populations, such as the riverside and indigenous communities in the Amazon, depends not only on the robustness of the algorithms but also on their integration with existing clinical workflows. In this regard, the supervision of healthcare professionals is essential to ensure the proper validation and interpretation of the results. Furthermore, adequate training and user acceptance are critical factors for operationalizing these technologies in regions with limited infrastructure. Interprofessional collaboration, involving technicians, nurses, and physicians, is crucial to maximize the impact of deep learning models and contribute to strengthening public health strategies in highly vulnerable areas.
With the advent of powerful computers available today, language models now incorporate hundreds of billions of parameters, which can be employed to generate new textual content. This capability, combined with a virtually infinite amount of data accessible on the internet for training, suggests that language models are becoming increasingly sophisticated, as exemplified by the Chat Generative Pre-trained Transformer, or ChatGPT, released by OpenAI [30].
Relatively few studies have applied deep learning to classification, detection, or segmentation tasks for detecting cervical cancer. As reported by Vargas-Cardona and colleagues [31], whose study aimed to describe and synthesize the literature on AI accuracy in the early diagnosis of cervical cancer, the use of AI for cervical cancer screening has increased over the years, and some results are very promising. However, the authors emphasize that further research is needed to validate these findings.
Since the first AI-enabled medical device received FDA approval in 1995 for cervical slide interpretation (the PAPNET Testing System), a total of 950 approvals have been granted for AI- and machine learning-enabled medical devices. Interest in medical devices incorporating AI functionality has increased in recent years due to the development of large language models such as ChatGPT, Gemini, Llama, Claude, PaLM, etc. However, in the list of authorized devices provided by the FDA, only seven approved devices are in Pathology, representing just 0.7%. Among them, the Paige Prostate program, software that aids in diagnosing suspicious prostate cancer biopsies, is approved for commercialization [32].
Generating insights on the use of deep learning tools is essential in pathology, given that cervical cancer is one of the leading causes of cancer mortality, including in Brazil and the Amazon region. Reviews like this help meet the criteria for implementing digital technologies by addressing a priority problem. This enables these solutions to be scalable, transparent, and ethical.
The application of deep learning techniques in cervical cancer diagnosis represents a significant advancement in early detection and, consequently, in the effective prevention of this pathology, promoting substantial improvements in public health. The benefits include reduced time consumption, decreased need for highly qualified technical professionals, and the elimination of biases due to subjective factors [33].
With the aim of optimizing the quality in reporting diagnostic accuracy studies, analyzing AI interventions, and improving the transparency and reproducibility of models in clinical contexts, we recommend guidelines such as STARD 2015, the CONSORT-AI Extension, and the MI-CLAIM checklist [34,35,36].
The MI-CLAIM [36] checklist aims to improve transparency in the documentation of AI models applied to medicine, focusing on the assessment of clinical impact and the replicability of the technical process. It covers six parts: study design, data partitioning, model optimization and selection, performance evaluation, model examination, and reproducible pipeline. These guidelines ensure that AI studies are clear, fair, and replicable in different clinical contexts (Figure 3).
The application of deep learning to cervical cancer screening in the Amazon region still lacks critical insights into regional challenges, such as geographic dispersion and limited access to specialized healthcare services. High-quality data collection, incorporating local characteristics, is essential for the success of AI models. Model development and training should be guided by computer vision and NLP techniques, with rigorous validation to ensure accuracy across different clinical scenarios in the region.
The development of a deep learning tool focused on the Amazon population can be optimized with a structured workflow for applying computer vision techniques, prioritizing data quality and model efficiency. The first step involves excluding low-quality images and preprocessing, including conversion to tensors and normalization. Organizing these images into distinct classes improves categorization for training. After splitting into training, validation, and test data, the model is fine-tuned and improved based on evaluation metrics (Figure 4).
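A minimal sketch of this workflow, under assumed folder names and an illustrative 70/15/15 split, is shown below; a quality-control step discarding unusable images would run before the dataset is loaded, and all values here are placeholders rather than a prescribed configuration.

```python
# Sketch of the preprocessing and splitting workflow: convert images to normalized
# tensors, organize them by class folder, and split into train/validation/test sets.
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),                               # image -> tensor in [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# Hypothetical directory with one sub-folder per cytology class
dataset = datasets.ImageFolder("pap_smear_dataset", transform=preprocess)

# Illustrative 70/15/15 split into training, validation, and test subsets
n = len(dataset)
n_train, n_val = int(0.7 * n), int(0.15 * n)
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n - n_train - n_val],
    generator=torch.Generator().manual_seed(42),
)

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)
test_loader = DataLoader(test_set, batch_size=32)
```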
This process aims not only to improve diagnostic accuracy but also to ensure that the tool is adapted to the reality of more isolated populations with infrastructure challenges, such as in the Amazon, where access to screening exams is limited. Optimizing this workflow can accelerate the development of more accessible and effective solutions for the region.
In this sense, the flowchart serves a didactic and strategic role by providing a simplified visual representation of the essential steps involved in applying computer vision to digital pathology. It is an introductory resource aimed at guiding professionals and researchers, especially in contexts with limited technical familiarity, such as in the Amazon region, where technical and operational constraints are common. By serving as an initial roadmap, it contributes to the understanding and replication of the process, reinforcing the article’s commitment to practical applicability as an opportunity to explore such developments.
AI applications for cervical cancer screening in resource-limited regions have the potential to significantly improve disease detection and treatment. Methods such as digital cervicography combined with machine learning algorithms have shown higher accuracy in identifying cancerous lesions compared to traditional methods. The use of AI can enhance the sensitivity and specificity of screenings. Government investments are crucial to expand and establish organized cervical screening programs, supported by the best available scientific evidence and advanced AI technologies [24].
A study in Hubei, China, compared the use of AI with traditional cytological methods in cervical cancer screening, showing significant advantages in efficiency and accuracy. Integrated into screening, AI increased sensitivity for detecting high-grade lesions (CIN2+) without compromising specificity, surpassing manual reading in some cases. The AI system classified most normal samples with high concordance with cytologists, reducing analysis time and allowing professionals to focus on critical samples. AI also improved the management of ASC-US cases, reducing excessive referrals to colposcopy. These results demonstrate AI’s potential to optimize traditional methods, especially in large-scale screenings and areas with few specialists [37].
A study developed and validated an AI-based cervical cancer screening system (AICCS) for cytology classification. The AICCS was trained and validated with data from 16,056 participants in retrospective, prospective, and randomized observational studies, using two AI models: one for detecting cells in patches and another for classifying whole slide images (WSIs). The system demonstrated high accuracy, with an AUC of 0.947, sensitivity of 0.946, specificity of 0.890, and precision of 0.892 in prospective evaluations. Additionally, cytopathologists assisted by the AICCS had a 13.3% increase in sensitivity, underscoring its potential for more efficient cervical cancer screening [38].
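The sketch below shows, with synthetic placeholder labels and scores, how the evaluation metrics reported for such systems (AUC, sensitivity, specificity, and precision) can be computed from a model’s outputs; it does not reproduce the AICCS results [38].

```python
# Computing AUC, sensitivity, specificity, and precision from synthetic predictions.
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0])        # placeholder ground-truth labels
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.2, 0.6, 0.3, 0.95, 0.05])
y_pred = (y_score >= 0.5).astype(int)                     # threshold model scores

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC:        ", roc_auc_score(y_true, y_score))
print("Sensitivity:", tp / (tp + fn))                     # recall for the positive class
print("Specificity:", tn / (tn + fp))
print("Precision:  ", precision_score(y_true, y_pred))
```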
AI applications, especially deep learning models, offer significant benefits, such as greater diagnostic accuracy, reduced analysis time, and accessibility in remote areas. In the Amazon, these models can be adapted to address the shortage of trained professionals and limited infrastructure. Implementation in public policies, partnerships with the SUS, and health programs could strengthen early diagnosis and screening, improving healthcare access for vulnerable populations. However, despite these opportunities, including increased accuracy and accessibility, there are also challenges and limitations that need to be considered.

6. Limitations and Challenges

The implementation of Pap smear tests in rural areas faces several barriers, especially in riverside communities. Among these, organizational difficulties stand out, causing delays in exams and resulting in many unsatisfactory samples. The lack of regular public transportation and limited financial resources worsen the situation, forcing women to walk long distances or rely on school transportation to access healthcare services [39]. The limitations of applying deep learning models in cervical cancer screening in vulnerable regions, such as the Amazon, are also linked to sociocultural and structural barriers. The lack of technological infrastructure is a major obstacle, especially in remote areas with limited connectivity and a scarcity of adequate equipment.
Other limitations include the scarcity of labeled data, limited or nonexistent connectivity in isolated communities, restricted availability of computational infrastructure, and the lack of high-quality cytology samples that are representative of the region to support model training. These factors hinder the effective implementation of AI systems in such regions. As alternatives, models capable of offline inference and the use of open-source platforms with low infrastructure dependency are potential solutions to address these challenges.
There are significant challenges to be faced for the practical implementation of these models. The need for large volumes of high-quality data for training, the dependence on advanced computational infrastructure, and generalization biases, especially when applied to distinct populations without local adaptation, represent important limitations. Additionally, the integration of these models into clinical practice depends on the acceptance of healthcare professionals and the incorporation of adequate training to ensure the validation and supervision of the obtained results.
Faced with these challenges, a study presents CCanNet, a lightweight deep learning model developed for the efficient detection of different types of cervical cancer. Using the public SIPaKMeD dataset, CCanNet achieved an accuracy of 98.53% with only 1.27 million parameters, outperforming traditional, heavier models. The architecture of CCanNet incorporates squeeze blocks, residual blocks, and skip connections, enabling efficient feature extraction with low computational complexity [40]. This approach makes the model particularly suitable for implementation on resource-constrained devices, such as smartphones or low-cost computers commonly found in remote areas. The lightweight nature and high accuracy of CCanNet make it a promising solution for cervical cancer screening in regions with limited technological infrastructure. Its ability to operate efficiently on low-performance devices enables examinations to be carried out in locations where access to computational and medical equipment is restricted, such as riverside communities in the Amazon.
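The sketch below is not CCanNet itself [40], but a hedged illustration of the kind of component it combines: a small residual convolutional block with a squeeze-and-excitation step and a skip connection, kept compact to suit resource-constrained devices.

```python
# Illustrative lightweight building block: residual convolutions + squeeze-and-excitation.
import torch
import torch.nn as nn

class SqueezeResidualBlock(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Squeeze-and-excitation: global pooling followed by channel re-weighting
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        out = self.conv(x)
        out = out * self.se(out)          # channel attention
        return torch.relu(out + x)        # skip connection preserves gradient flow

block = SqueezeResidualBlock(32)
print(sum(p.numel() for p in block.parameters()))  # parameter count stays small
```

Keeping blocks like this narrow and few is what holds the parameter count (and therefore memory and energy use) low enough for inference on smartphones or low-cost computers.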
It is important that the implementation of AI-based solutions is aligned with robust ethical principles, especially when applied in public health contexts. The adoption of innovative technologies for cervical cancer screening in areas with limited infrastructure, such as riverside communities in the Amazon, must be accompanied by a strong commitment to ethical research practices and patient care. In this sense, it is essential that such initiatives comply with international guidelines, such as the Declaration of Helsinki [41], ensuring that the application of AI respects individual rights, promotes safety and well-being, and guarantees the free and informed consent of the users involved.
Ethics in the use of AI should involve transparency in algorithm development. Biases that could negatively impact vulnerable populations must be avoided, thereby ensuring that solutions benefit all communities equitably, while respecting their cultural and social particularities.
The development of the application should also follow WHO ethical principles to ensure the protection of human autonomy, well-being, public safety, transparency, intelligibility, integrity, inclusion, equity, responsibility, and sustainable AI [42]. To drive sustainable innovation while ensuring rights, best practices, and risk assessment, the development of the solution, especially when directed at the population of the Brazilian Amazon, should be guided by Bill 2338/2023, currently under review by the Internal Temporary Commission on Artificial Intelligence (CTIA) of the Brazilian Federal Senate [43].
For the development of AI solutions intended for medical applications in Brazil, the guidelines established by ANVISA’s Collegiate Board Resolution (RDC) No. 657/2022, which regulates Software as a Medical Device (SaMD), must be rigorously followed [44]. This involves conducting a detailed clinical evaluation to ensure the software’s safety, efficacy, and performance according to its intended purpose, as required by the RDC.
Applications must also comply with data privacy legislation, including the General Data Protection Law (LGPD), ensuring strict privacy and security measures, authorized access control, and the protection of user identity [45].
For privacy, it must be ensured that patients’ personal and sensitive data are protected and anonymized before being used to train AI models. This is in compliance with data privacy regulations such as the GDPR (General Data Protection Regulation) in Europe [46], the HIPAA Privacy Rule (Health Insurance Portability and Accountability Act, which provides data confidentiality and security provisions to protect medical information) in the United States [47], and the LGPD in Brazil.
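As a purely illustrative example of this step, the sketch below drops a direct identifier and pseudonymizes a record key in a small synthetic table; the column names are hypothetical, and such a transformation alone does not establish LGPD, GDPR, or HIPAA compliance.

```python
# Minimal, illustrative de-identification of a tabular record set before model training.
import hashlib
import pandas as pd

records = pd.DataFrame({
    "patient_name": ["A. Silva", "B. Souza"],        # direct identifier (to be removed)
    "patient_id": ["123-4", "987-6"],                 # key to be pseudonymized
    "age": [34, 51],
    "cytology_result": ["ASC-US", "HSIL"],
})

deidentified = records.drop(columns=["patient_name"]).assign(
    patient_id=lambda df: df["patient_id"].map(
        lambda s: hashlib.sha256(s.encode()).hexdigest()[:12]   # pseudonymous key
    )
)
print(deidentified)
```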
To prevent bias, medical AI must be developed with transparency, ensuring equitable treatment across different groups and preventing health disparities. Techniques like class balancing, sensitive variable equalization, and demographic analyses are essential to address under-representation. AI needs to be accessible to all, regardless of location or socioeconomic status, with solutions suitable for low-resource settings, including mobile platforms and, if possible, offline functionality. It is crucial to involve diverse stakeholders and underrepresented groups in AI development. The technology should complement the doctor–patient relationship, focusing on shared decision making and automating repetitive tasks, prioritizing human-centered care and building trust.
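One of the balancing techniques mentioned above can be sketched as follows, using inverse-frequency class weights computed from a synthetic, imbalanced label set; the labels stand in for an under-represented group or lesion class.

```python
# Illustrative class balancing via inverse-frequency class weights.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0] * 900 + [1] * 100)                  # heavily imbalanced placeholder labels
classes = np.unique(y)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y)
print(dict(zip(classes, weights)))                   # minority class receives a larger weight

# These weights can be passed to a loss function (e.g., weighted cross-entropy)
# so training is not dominated by the majority class.
```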
Finally, ensuring the implementation of sustainable AI should be based on energy efficiency, economic cost-effectiveness, and, when applicable, establishing partnerships with organizations that promote sustainable practices. The focus is on implementing algorithms and AI infrastructures that are energy-efficient, using low-energy consumption technologies and optimizing code for operational efficiency.

7. Current Market and Research Directions

From prototype to market, what is currently being developed or already available? By exploring deep learning technologies and models being validated for research purposes or commercially available in the healthcare sector, we observe tools such as automated cervical image analysis systems, applications for tumor identification based on gene expression, prostate image classification, and software utilizing CNNs, which are revolutionizing cervical cancer screening and early diagnosis, among others. These solutions, which have already begun to be applied in various markets, offer greater accuracy, efficiency, and accessibility compared to traditional methods. Table 1 highlights some of the main commercial applications available and in development, providing an overview of technologies that are shaping the future with the rise of AI in global health.
The capacity for adaptation and open innovation has driven the development of technologies that go beyond academic research, reaching commercial applications in healthcare. As a future direction, open AI, built on transparency and collaborative access to models, has established itself as a fundamental pillar for the advancement of computer vision, allowing for the creation of more robust and adaptable solutions.
However, challenges persist in areas such as security, privacy, and model interoperability. Security and privacy issues are particularly critical in open-source models, exposing vulnerabilities to adversarial attacks and risks of sensitive data leakage [59]. In this context, exploring challenges and research questions in Open AI, as well as identifying future directions to improve security, interpretability, and scalability of models, becomes essential to strengthen the applicability of these technologies in real-world scenarios, such as diagnostic solutions in vulnerable regions.

8. Conclusions

A recent systematic review reinforces the superiority of deep learning models over traditional machine learning approaches in the analysis of cervical cytology images. While techniques such as SVM and Random Forest show limitations, models based on CNNs, U-Net, and EfficientNet stand out for their ability to automatically extract relevant patterns and provide superior accuracy, even in data-limited contexts. Architectures like ResNet-50, InceptionV3, and EfficientNet-B3 have been widely employed in classification, segmentation, and detection tasks in cytology, with reported accuracy exceeding 96% in some cases, demonstrating the potential to overcome challenges related to scalability, diagnostic precision, and standardization in cervical cancer screening in low-resource settings [27].
AI applications in public health in the Amazon, especially in areas with high rates of cervical cancer, are essential for addressing the region’s unique challenges, such as the increased risk of infectious diseases like HPV, due to biodiversity and tropical climate. Cervical cancer poses a serious threat to women’s health, worsened by limited access to healthcare services. Technologies like chatbots and computer vision can enhance screening and diagnosis, increasing precision and efficiency, providing a more comfortable experience for patients, improving adherence to screening programs, offering emotional support, and contributing to precision medicine and disease control in the region.
The future of pathology is digital, and the application of AI is revolutionizing this field, especially in the context of cancer diagnosis. The digitization of slides and the use of WSIs allow for a more accurate and efficient analysis of samples, facilitating the detection of morphological patterns and biomarkers essential for precise diagnoses. The implementation of deep learning tools significantly improves diagnostic accuracy, reduces inter-observer variability, and accelerates the diagnostic process, making it more reproducible and standardized [60].
The next generation of AI in cervical cancer screening has the potential to significantly improve early detection accuracy, reduce diagnostic variability, and expand access in underserved regions. Innovations such as multimodal learning, real-time decision support, and hybrid models with ViTs can optimize diagnostic workflows, making screenings faster and more reliable. These advancements could reduce health disparities in low-resource areas, democratizing preventive care.
AI technologies are particularly promising for cervical cancer, and integrating deep learning into digital pathology is a crucial innovation that can transform cancer diagnosis and prevention, improving patient outcomes and healthcare system efficiency. These technologies can be applied in low-resource healthcare systems, supporting screening and the early identification of precancerous lesions and cancer in vulnerable regions. Implementing these tools has the potential to transform public health by promoting early diagnosis, effective prevention, social justice, and equity in healthcare access. With digital technology and AI, pathology is ready for the future, automating tasks, increasing diagnostic precision, and expanding access to quality medical care, even in remote areas.

Author Contributions

R.C.d.L. and J.A.S.Q. conceptualized and generated the review idea, developed the research protocol, and reviewed and approved it prior to submission for publication. R.C.d.L. conducted and executed the searches, prepared the figures and tables, formally analyzed the articles, extracted the data, and drafted the original manuscript. J.A.S.Q. supervised and reviewed the manuscript draft. All authors reviewed the manuscript at least twice before submission. All authors have read and agreed to the published version of the manuscript.

Funding

The authors declare that they received financial support for the publication of this article. This article was funded by the PAPQ—Qualified Publication Support Program of UFPA. The funder played no role in the study design, data collection, data analysis, data interpretation, or the writing of this manuscript. The views expressed are those of the authors and do not necessarily reflect the ideas of the funding institutions.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the Federal University of Pará (UFPA) and the Postgraduate Program in Tropical Diseases (PPGDT) for their institutional support and the opportunity to develop this research. This study reflects the commitment of the UFPA and PPGDT to promote high-relevance scientific knowledge for public health, especially in the Amazon region.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Novais, I.R.; Coelho, C.A.; Machado, H.C.; Surita, F.; Zeferino, L.C.; Vale, D.B. Cervical cancer screening in Brazilian Amazon Indigenous women: Towards the intensification of public policies for prevention. PLoS ONE 2023, 18, e0294956. [Google Scholar] [CrossRef] [PubMed]
  2. Silveira, R.P.; Pinheiro, R. Entendendo a Necessidade de Médicos no Interior da Amazônia-Brasil. Rev. Bras. Educ. Med. 2014, 38, 451–459. [Google Scholar] [CrossRef]
  3. Bruni, L.; Serrano, B.; Roura, E.; Alemany, L.; Cowan, M.; Herrero, R.; Poljak, P.F.; Murillo, R.; Broutet, N.; Riley, L.M.; et al. Cervical cancer screening programmes and age-specific coverage estimates for 202 countries and territories worldwide: A review and synthetic analysis. Lancet Glob. Health 2022, 10, e1115–e1127. [Google Scholar] [CrossRef]
  4. Fatahi Meybodi, N.; Karimi-Zarchi, M.; Allahqoli, L.; Sekhavat, L.; Gitas, G.; Rahmani, A.; Fallahi, A.; Hassanlouei, B.; Alkatout, I. Accuracy of the Triple Test Versus Colposcopy for the Diagnosis of Premalignant and Malignant Cervical Lesions. Asian Pac. J. Cancer Prev. 2020, 21, 3501–3507. [Google Scholar] [CrossRef]
  5. Instituto Nacional de câncer (INCA). Estimativa 2023: Incidência de Câncer No Brasil. 2022. Available online: https://www.inca.gov.br/sites/ufu.sti.inca.local/files//media/document//estimativa-2023.pdf (accessed on 3 December 2024).
  6. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef]
  7. Instituto Nacional de câncer (INCA). Atlas da Mortalidade. 2022. Available online: https://www.inca.gov.br/app/mortalidade (accessed on 3 December 2024).
  8. Knaul, F.M.; Rodriguez, N.M.; Arreola-Ornelas, H.; Olson, J.R. Cervical cancer: Lessons learned from neglected tropical diseases. Lancet Glob. Health 2019, 7, e299–e300. [Google Scholar] [CrossRef]
  9. Gomes, L.C.; Pinto, M.C.; Reis, B.J.; Silva, D.S. Epidemiologia do câncer cervical no Brasil: Uma revisão integrativa. J. Nurs. Health 2022, 12, e2212221749. [Google Scholar]
  10. Simelela, P.N. WHO global strategy to eliminate cervical cancer as a public health problem: An opportunity to make it a disease of the past. Int. J. Gynecol. Obstet. 2021, 152, 1–3. [Google Scholar] [CrossRef]
  11. Hodges, A. Alan Turing: The Enigma; Princeton University Press: Princeton, NJ, USA, 2012. [Google Scholar]
  12. Turing, A.M. Computing Machinery and Intelligence. Mind 1950, 59, 433–460. Available online: https://phil415.pbworks.com/f/TuringComputing.pdf (accessed on 3 December 2024). [CrossRef]
  13. McCarthy, J.; Minsky, M.; Rochester, N.; Shannon, C. A proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Stanford University. 1955. Available online: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html (accessed on 3 December 2024).
  14. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef]
  15. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef] [PubMed]
  16. Rumelhart, D.; Hinton, G.; Williams, R. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  17. Nobel Prize in Physics. Foundational Discoveries in Machine Learning and Artificial Neural Networks. 2024. Available online: https://www.nobelprize.org/prizes/physics/2024/prize-announcement/ (accessed on 3 December 2024).
  18. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  19. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. arXiv 2017, arXiv:1706.03762. [Google Scholar] [CrossRef]
  20. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2021, arXiv:2010.11929. [Google Scholar] [CrossRef]
  21. Hiranuma, N.; Park, H.; Baek, M.; Anishchenko, I.; Dauparas, J.; Baker, D. Improved protein structure refinement guided by deep learning based accuracy estimation. Nat. Commun. 2021, 12, 1340. [Google Scholar] [CrossRef]
  22. Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A.; et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589. [Google Scholar] [CrossRef]
  23. Nobel Prize in Chemistry. Computational Protein Design and Protein Structure Prediction. 2024. Available online: https://www.nobelprize.org/prizes/chemistry/2024/press-release/ (accessed on 3 December 2024).
  24. Menezes, L.J.; Vazquez, L.; Mohan, C.K.; Somboonwit, C. Eliminating Cervical Cancer: A Role for Artificial Intelligence. In Global Virology III: Virology in the 21st Century; Shapshak, P., Balaji, S., Kangueane, P., Chiappelli, F., Somboonwit, C., Menezes, L.J., Sinnott, J.T., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 405–422. [Google Scholar] [CrossRef]
  25. Rajpurkar, P.; Chen, E.; Banerjee, O.; Topol, E.J. AI in health and medicine. Nat. Med. 2022, 28, 31–38. [Google Scholar] [CrossRef]
  26. Tandon, R.; Agrawal, S.; Rathore, N.P.S.; Mishra, A.K.; Jain, S.K. A systematic review on deep learning-based automated cancer diagnosis models. J. Cell. Mol. Med. 2024, 28, e18144. [Google Scholar] [CrossRef]
  27. Sarhangi, H.A.; Beigifard, D.; Farmani, E.; Bolhasani, H. Deep learning techniques for cervical cancer diagnosis based on pathology and colposcopy images. Inform. Med. Unlocked 2024, 47, 101503. [Google Scholar] [CrossRef]
  28. Sha, Y.; Zhang, Q.; Zhai, X.; Hou, M.; Lu, J.; Meng, W.; Wang, Y.; Li, K.; Ma, J. CerviFusionNet: A multi-modal, hybrid CNN-transformer-GRU model for enhanced cervical lesion multi-classification. iScience 2024, 27, 111313. [Google Scholar] [CrossRef] [PubMed]
  29. Park, Y.R.; Kim, Y.J.; Ju, W.; Nam, K.; Kim, S.; Kim, K.G. Comparison of machine and deep learning for the classification of cervical cancer based on cervicography images. Sci. Rep. 2021, 11, 16143. [Google Scholar] [CrossRef]
  30. Haug, C.J.; Drazen, J.M. Artificial Intelligence and Machine Learning in Clinical Medicine, 2023. N. Engl. J. Med. 2023, 388, 1201–1208. [Google Scholar] [CrossRef]
  31. Vargas-Cardona, H.D.; Rodriguez-Lopez, M.; Arrivillaga, M.; Vergara-Sanchez, C.; García-Cifuentes, J.P.; Bermúdez, P.C.; Jaramillo-Botero, A. Artificial intelligence for cervical cancer screening: Scoping review, 2009–2022. Int. J. Gynecol. Obstet. 2024, 165, 566–578. [Google Scholar] [CrossRef]
  32. FDA (Food and Drug Administration). Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. 2024. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices (accessed on 3 December 2024).
  33. Hou, X.; Shen, G.; Zhou, L.; Li, Y.; Wang, T.; Ma, X. Artificial Intelligence in Cervical Cancer Screening and Diagnosis. Front. Oncol. 2022, 12, 851367. [Google Scholar] [CrossRef]
  34. Bossuyt, P.M.; Reitsma, J.B.; Bruns, D.E.; Gatsonis, C.A.; Glasziou, P.P.; Irwig, L.; Lijmer, J.G.; Moher, D.; Rennie, D.; de Vet, H.C.W.; et al. STARD 2015: An updated list of essential items for reporting diagnostic accuracy studies. BMJ 2015, 351, h5527. [Google Scholar] [CrossRef]
  35. Liu, X.; Rivera, S.C.; Moher, D.; Calvert, M.J.; Denniston, A.K. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT-AI Extension. BMJ 2020, 370, m3164. [Google Scholar] [CrossRef]
  36. Norgeot, B.; Quer, G.; Beaulieu-Jones, B.K.; Torkamani, A.; Dias, R.; Gianfrancesco, M.; Arnaout, R.; Kohane, I.S.; Saria, S.; Topol, E.; et al. Minimum information about clinical artificial intelligence modeling: The MI-CLAIM checklist. Nat. Med. 2020, 26, 1320–1324. [Google Scholar] [CrossRef]
37. Turic, B.; Sun, X.; Wang, J.; Pang, B. The Role of AI in Cervical Cancer Screening. In Cervical Cancer—A Global Public Health Treatise; IntechOpen: London, UK, 2021. [Google Scholar] [CrossRef]
  38. Wang, J.; Yu, Y.; Tan, Y.; Wan, H.; Zheng, N.; He, Z.; Mao, L.; Ren, W.; Chen, K.; Lin, Z.; et al. Artificial intelligence enables precision diagnosis of cervical cytology grades and cervical cancer. Nat. Commun. 2024, 15, 4369. [Google Scholar] [CrossRef]
  39. da Silva, D.C.B.; Garnelo, L.; Herkrath, F.J. Barriers to Access the Pap Smear Test for Cervical Cancer Screening in Rural Riverside Populations Covered by a Fluvial Primary Healthcare Team in the Amazon. Int. J. Environ. Res. Public Health 2022, 19, 4193. [Google Scholar] [CrossRef]
  40. Mehedi, M.H.K.; Khandaker, M.; Ara, S.; Alam, M.A.; Mridha, M.F.; Aung, Z. A lightweight deep learning method to identify different types of cervical cancer. Sci. Rep. 2024, 14, 29446. [Google Scholar] [CrossRef] [PubMed]
  41. World Medical Association. Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. Adopted by the 18th WMA General Assembly, Helsinki, Finland, June 1964, and Amended by Subsequent General Assemblies, Tokyo 1975, Venice 1983, Hong Kong 1989, Somerset West 1996, Edinburgh 2000. Available online: https://history.nih.gov/download/attachments/1016866/helsinki.pdf (accessed on 2 December 2024).
  42. WHO (World Health Organization). Ethics and Governance of Artificial Intelligence for Health. 2021. Available online: https://www.who.int/publications/i/item/9789240029200 (accessed on 2 December 2024).
  43. Senado Federal. Projeto de Lei n° 2338, de 2023. 2023. Available online: https://www25.senado.leg.br/web/atividade/materias/-/materia/157233 (accessed on 2 December 2024).
  44. Agência Nacional de Vigilância Sanitária (ANVISA). Resolução da Diretoria Colegiada—RDC n° 657, de 24 de Março de 2022. Dispõe Sobre a Regularização de Software como Dispositivo Médico (Software as a Medical Device—SaMD). 2022. Available online: https://in.gov.br/en/web/dou/-/resolucao-de-diretoria-colegiada-rdc-n-657-de-24-de-marco-de-2022-389603457 (accessed on 2 December 2024).
  45. Brasil. Lei n° 13.709, de 14 de Agosto de 2018. Lei Geral de Proteção de Dados Pessoais—LGPD. 2018. Available online: https://www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/lei/l13709.htm (accessed on 2 December 2024).
  46. European Union. Regulation (EU) 2016/679. General Data Protection Regulation—GDPR. Official Journal of the European Union L 119/1, 4 May 2016. Available online: https://gdpr-info.eu/ (accessed on 2 December 2024).
  47. United States. Department of Health and Human Services. Health Information Privacy (HIPAA for Professionals). 2024. Available online: https://www.hhs.gov/hipaa/for-professionals/index.html (accessed on 2 December 2024).
  48. Mango, L.J. Computer-assisted cervical cancer screening using neural networks. Cancer Lett. 1994, 77, 155–162. [Google Scholar] [CrossRef] [PubMed]
  49. Monzon, F.A.; Lyons-Weiler, M.; Buturovic, L.J.; Rigl, C.T.; Henner, W.D.; Sciulli, C.; Dumur, C.I.; Medeiros, F.; Anderson, G.G. Multicenter Validation of a 1,550-Gene Expression Profile for Identification of Tumor Tissue of Origin. J. Clin. Oncol. 2009, 27, 2503–2508. [Google Scholar] [CrossRef] [PubMed]
  50. Campanella, G.; Hanna, M.G.; Geneslaw, L.; Miraflor, A.; Silva, V.W.K.; Busam, K.J.; Brogi, E.; Reuter, V.E.; Klimstra, D.S.; Fuchs, T.J. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat. Med. 2019, 25, 1301–1309. [Google Scholar] [CrossRef]
  51. Raciti, P.; Sue, J.; Ceballos, R.; Godrich, R.; Kunz, J.D.; Kapur, S.; Reuter, V.; Grady, L.; Kanan, C.; Klimstra, D.S.; et al. Novel artificial intelligence system increases the detection of prostate cancer in whole slide images of core needle biopsies. Mod. Pathol. 2020, 33, 2058–2066. [Google Scholar] [CrossRef]
  52. da Silva, L.M.; Pereira, E.M.; Salles, P.G.O.; Godrich, R.; Ceballos, R.; Kunz, J.D.; Casson, A.; Viret, J.; Chandarlapaty, S.; Ferreira, C.G.; et al. Independent real-world application of a clinical-grade automated prostate cancer detection system. J. Pathol. 2021, 254, 147–158. [Google Scholar] [CrossRef]
  53. Raciti, P.; Sue, J.; Retamero, J.A.; Ceballos, R.; Godrich, R.; Kunz, J.D.; Casson, A.; Thiagarajan, D.; Ebrahimzadeh, Z.; Viret, J.; et al. Clinical validation of artificial intelligence-augmented pathology diagnosis demonstrates significant gains in diagnostic accuracy in prostate cancer detection. Arch. Pathol. Lab. Med. 2023, 147, 1178–1185. [Google Scholar] [CrossRef]
  54. Nakisige, C.; de Fouw, M.; Kabukye, J.; Sultanov, M.; Nazrui, N.; Rahman, A.; de Zeeuw, J.; Koot, J.; Rao, A.P.; Prasad, K.; et al. Artificial intelligence and visual inspection in cervical cancer screening. Int. J. Gynecol. Cancer 2023, 33, 1515–1521. [Google Scholar] [CrossRef]
  55. Lu, M.Y.; Chen, B.; Williamson, D.F.K.; Chen, R.J.; Zhao, M.; Chow, A.K.; Ikemura, K.; Kim, A.; Pouli, D.; Patel, A.; et al. A multimodal generative AI copilot for human pathology. Nature 2024, 634, 466–473. [Google Scholar] [CrossRef]
  56. Abinaya, K.; Sivakumar, B. A Deep Learning-Based Approach for Cervical Cancer Classification Using 3D CNN and Vision Transformer. J. Digit. Imaging Inform. Med. 2024, 37, 280–296. [Google Scholar] [CrossRef]
  57. Zhao, T.; Gu, Y.; Yang, J.; Usuyama, N.; Lee, H.H.; Naumann, T.; Gao, J.; Crabtree, A.; Abel, J.; Moung-Wen, C.; et al. BiomedParse: A biomedical foundation model for image parsing of everything everywhere all at once. arXiv 2024, arXiv:2405.12971. [Google Scholar] [CrossRef]
58. Codella, N.C.F.; Jin, Y.; Jain, S.; Gu, Y.; Lee, H.H.; Ben Abacha, A.; Santamaria-Pang, A.; Guyman, W.; Sangani, N.; Zhang, S.; et al. MedImageInsight: An open-source embedding model for general domain medical imaging. arXiv 2024, arXiv:2410.06542. [Google Scholar] [CrossRef]
  59. Al-Kharusi, Y.; Khan, A.; Rizwan, M.; Bait-Suwailam, M.M. Open-Source Artificial Intelligence Privacy and Security: A Review. Computers 2024, 13, 311. [Google Scholar] [CrossRef]
  60. Pallua, J.D.; Brunner, A.; Zelger, B.; Schirmer, M.; Haybaeck, J. The future of pathology is digital. Pathol. Res. Pract. 2020, 216, 153040. [Google Scholar] [CrossRef]
Figure 1. Timeline of the historical evolution and global rise of AI.
Figure 2. Comparison between biological neuron and artificial neuron (perceptron). Source: adapted from Anthony C. Chang, Chapter 2—History of Artificial Intelligence, in: Anthony C. Chang (Ed.), Intelligence-Based Medicine, Academic Press, 2020, pp. 23–27. https://doi.org/10.1016/B978-0-12-823337-5.00002-0.
Figure 3. Overview of the MI-CLAIM guidelines for clinical AI modeling.
Figure 4. Workflow for the application of computer vision techniques.
Table 1. Main commercial and research-focused AI applications for global health.
Application/Product: PAPNET Cytological Screening System
Functionality: Computer-assisted system for screening conventional cervical smears. The system uses neural networks to locate and recognize potentially abnormal cells and display them for review by cytologists.
Model type: Feedforward neural networks with a backpropagation architecture.
Sample (n): Rilke et al. (200); Kish et al. (191); Slagel et al. (208); Kharazi et al. (357).
Performance metric: Sensitivity 96%, triage efficiency 81% (Rilke et al.); sensitivity 98%, triage efficiency 97% (Kish et al.); sensitivity 100%, triage efficiency 85% (Slagel et al.); sensitivity 95%, triage efficiency 58% (Kharazi et al.).
Status: FDA approved.
Reference: Mango [48].

Application/Product: Pathwork Tissue of Origin Test
Functionality: Gene expression profile-based test to identify the tissue of origin of tumors, particularly metastatic and poorly differentiated or undifferentiated primary tumors.
Model type: Machine learning algorithm with a cross-validation system.
Sample (n): 547 (258 metastatic and 289 undifferentiated primary).
Performance metric: Sensitivity 87.8%; specificity 99.4%; for metastatic tumors 84.5%; for primary tumors 90.7%.
Status: FDA approved.
Reference: Monzon et al. [49].

Application/Product: Paige Prostate
Functionality: The system classifies whole slide images (WSIs) to detect prostate cancer.
Model type: Combines multiple instance learning (MIL) and convolutional neural networks (CNNs, ResNet34) with recurrent neural networks (RNNs).
Sample (n): 24,859 WSIs [50]; 6644 WSIs, validated on 232 WSIs of needle biopsies [51]; 600 transrectal ultrasound-guided prostate needle core biopsy regions from 100 patients [52]; 610 prostate needle biopsies from 218 institutions [53].
Performance metric: AUC 0.991 [50]; sensitivity 96%, specificity 98%, versus pathologists' sensitivity of 74% to 90% [51]; sensitivity 0.99, specificity 0.93, negative predictive value (NPV) 1.0, 65.5% reduction in diagnostic time [52]; sensitivity 97.4%, specificity 94.8%, AUC 0.99 [53].
Status: FDA approved.
References: Campanella et al. [50]; Raciti et al. [51]; da Silva et al. [52]; Raciti et al. [53].

Application/Product: AI Decision Support System for VIA
Functionality: AI-based decision support system for visual inspection of the cervix with acetic acid (VIA) in cervical cancer screening, focused on low-income countries.
Model type: Deep learning algorithm trained with VIA inspection images, developed on an Android-based device.
Sample (n): 83 VIA inspection images, used to assess the diagnostic performance of the system compared with specialists and healthcare professionals.
Performance metric: Sensitivity 80.0%; specificity 83.3%; AUC 0.84.
Status: Field validation.
Reference: Nakisige et al. [54].

Application/Product: PathChat
Functionality: A multimodal generative AI copilot developed for human pathology. It provides diagnoses, answers questions based on images and text, and assists in clinical and educational decision making.
Model type: Multimodal learning model combining a vision encoder (trained with 100 million histopathological images) and an LLM (Llama 2 with 13 billion parameters); a multimodal projector module connects the output of the vision encoder to the LLM.
Sample (n): 56,000 instructions and 999,202 question-and-answer interactions; additionally validated with whole-slide images from various organs covering more than 54 diagnoses.
Performance metric: Precision 78.1% (image only), 89.5% (with clinical context), 78.7% (open-ended questions), 73.3% (microscopy), 80.3% (diagnostic), 80.3% (clinical questions), 80.3% (auxiliary tests); win rate (pathologist preference) 56.5%.
Status: Development and validation.
Reference: Lu et al. [55].

Application/Product: Artificial Intelligence Cervical Cancer Screening System (AICCS)
Functionality: Cervical cancer screening system that uses AI for detection and classification of WSIs of cervical cytology. The system assists in identifying abnormal cells and provides accurate classification of cervical cytology grades.
Model type: Combination of a cell detection model based on RetinaNet and Faster R-CNN (convolutional neural network) for patch-level detection and a whole-image classification model using a random forest and a DNN.
Sample (n): 16,056 participants, with prospective and retrospective validation data and a randomized observational clinical trial.
Performance metric: Sensitivity 0.946; specificity 0.890; AUC 0.947; precision 0.892.
Status: Clinical validation (randomized observational clinical trial).
Reference: Wang et al. [38].

Application/Product: No description.
Functionality: Classifies cervical images into five categories (normal, mild dysplasia, moderate dysplasia, carcinoma in situ, and severe dysplasia).
Model type: Hybrid model using a 3D CNN (3D convolutional neural network) and a Vision Transformer (ViT) with a kernel extreme learning machine (KELM) classifier.
Sample (n): Herlev Pap smear dataset, containing 917 cervical cancer image samples divided into five classes.
Performance metric: Accuracy 98.6%; precision 97.5%; sensitivity 98.1%; specificity 98.2%; F1-score 98.4%.
Status: Research purposes.
Reference: Abinaya and Sivakumar [56].

Application/Product: BiomedParse
Functionality: Biomedical foundation model for image analysis that simultaneously performs segmentation, detection, and object recognition tasks in biomedical images. It can identify and label all objects in an image based on text descriptions.
Model type: Multimodal model based on the SEEM architecture, including a Focal-based image encoder, a PubMedBERT-based text encoder, a mask decoder, and a meta-object classifier.
Sample (n): 3.4 million triples of image, segmentation mask, and semantic label of the biomedical object, and 6.8 million image–mask–description triples drawn from over 1 million images; the semantic labels encompass 82 major biomedical object types across 9 imaging modalities.
Performance metric: Average Dice score of 0.94 in segmentation tasks, superior to MedSAM and SAM; in object recognition, it outperformed Grounding DINO in precision (+25%), recall (+87.9%), and F1-score (+74.5%); invalid-description detection: AUC-ROC with a 0.99 detection rate and 0.93 precision in identifying text prompts that do not describe objects present in the image.
Status: Validation.
Reference: Zhao et al. [57].

Application/Product: MedImageInsight
Functionality: Open-source medical image embedding model capable of performing classification, image-to-image search, and report generation across various medical domains, including radiology, dermatology, and histopathology. The model aims to provide evidence-based decision support for medical diagnosis.
Model type: Two-tower (image–text) architecture optimized with UniCL, using a vision encoder (DaViT with 360 million parameters) and a text encoder (252 million parameters); supports report generation and sensitivity/specificity adjustment via ROC curves.
Sample (n): PatchCamelyon: 262,144 breast histopathological tissue images, each labeled with the presence or absence of cancer.
Performance metric: mAUC (mean area under the ROC curve) 0.963; MI2 (classification) 0.943 (classification metric using the MedImageInsight model); MI2 (KNN) 0.975 (classification using KNN with image embeddings).
Status: Open-source model for research purposes.
Reference: Codella et al. [58].
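To make the CNN-based classifiers summarized in Table 1 more concrete, the sketch below fine-tunes a pretrained CNN backbone for five-class cervical cytology classification, mirroring the five-class scheme of the Herlev dataset cited in [56]. This is a minimal illustrative sketch, not a reproduction of any system listed in the table; the dataset path, backbone choice, number of epochs, and learning rate are hypothetical placeholders.

# Illustrative sketch only: fine-tuning a pretrained CNN for five-class
# cervical cytology classification. Not the implementation of any system
# in Table 1; paths and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 5  # e.g., the five Herlev-style cytology categories

# Standard ImageNet-style preprocessing for transfer learning.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one subdirectory per cytology class.
train_set = datasets.ImageFolder("data/herlev/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained ResNet-34 backbone with a new five-class classification head.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # short schedule purely for illustration
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: last batch loss {loss.item():.4f}")

In practice, systems of the kind reviewed here add careful validation on held-out clinical data, calibration, and reporting aligned with guidelines such as STARD 2015 [34], CONSORT-AI [35], and MI-CLAIM [36] before any clinical use.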