1. Introduction
Pathology is the branch of medical science concerned with the study and diagnosis of disease. Diagnostic information is crucial to both clinicians and patients, as accurate and timely diagnosis plays a central role in patient treatment [
1]. Moreover, exchanging pathological glass slides for a second opinion is difficult and time-consuming [
2]. AI is a subfield of computer science concerned with developing algorithms that perform tasks associated with human intelligence, such as problem solving, decision-making, visual perception, and pattern recognition [
3]. AI has numerous possible applications in pathology, from pattern analysis in tissue to detection of patients at the highest risk of cancer [
4,
5]. Collectively, AI can advance the field of pathology by increasing diagnostic accuracy, informing treatment decisions, and improving patient outcomes [
6]. The development of DL, machine learning (ML), and AI, along with the extraction of relevant information from image data and their computational analysis, provides a theoretical foundation for the development of a new field, called “digital pathology (DP)” [
7,
8]. DP is a procedure that enables high-resolution digital imaging of tissue sections on glass slides that would conventionally be visualized with an optical microscope. Digital imaging methods have evolved from the use of cameras producing static images to WSI, the most recent method [
4,
9].
The aim of WSI, often referred to as “virtual microscopy,” is to achieve the goal of traditional light microscopy via computerization. WSI involves two steps. The first is sophisticated scanning of the glass slides to produce the final files, typically a large digital image known as a “digital slide.” The second is examining and/or analyzing these large digital files in specialized software [
10]. During the last decade, a wide range of commercially available WSI instruments have been developed. A list of common WSI systems and their respective vendors is provided in
Table 1.
The challenges encountered with AI applications in pathology are numerous and can occur at any level of the workflow, from the pre-analytical phase in the pathology laboratory to the AI application [
16,
17]. This review seeks to address this gap with an updated and integrative perspective that considers clinical relevance and real-world computational platforms in AI-based DP. Hence, this review aims to comprehensively examine the challenges associated with the use of WSIs in AI-based DP applications. It offers a WSI-centered perspective on how DL and AI techniques function within, and are influenced by, the special features of whole-slide imaging.
Consequently, this review synthesizes the entire workflow from WSI generation and preprocessing to downstream DL applications—including cancer detection, prognostic modeling, biomarker prediction, and future directions such as multimodal foundation models and privacy-preserving federated learning—thereby establishing a comprehensive roadmap for WSI-centered AI in pathology.
This review, therefore, takes a WSI-centered perspective, examining DL and emerging computational approaches only insofar as they directly affect the challenges and opportunities of AI pipelines built upon WSIs.
The objective of this review is to critically evaluate the technical, operational, ethical, and regulatory challenges related to AI-based analysis of WSIs and to outline future research pathways toward robust, generalizable, and clinically implementable DP systems.
Research Gap
Although there have been multiple reviews describing the potential roles of DP and AI, the majority are simply descriptive summaries of existing resources, without any synthesis or critical discussion [
17]. Such reviews seem to cover mostly broad areas, such as multi-center collaboration, data standardization, or algorithm performance, while generally not systematically discussing or mentioning emerging technologies like foundation models, explainable AI, or federated learning [
18]. Further, comparisons of various DL architectures (VGG, ResNet, Inception, EfficientNet) are either largely absent or only briefly mentioned. The originality of this paper is that it provides a more expansive roadmap, from WSI acquisition and processing to DL tumor detection and biomarker prediction, while offering critical discussion on the themes of generalizability, ethical implications, and regulatory preparedness. Another major gap in the literature is the breadth of scope regarding applications. Most previous reviews have focused on tumor-related oncology tasks [
19], overlooking other areas of equally high importance (e.g., prognostic modeling, biomarker prediction, cancer subtyping, and non-neoplastic diseases). Furthermore, this paper highlights major gaps in how ethical and privacy issues have been addressed in the literature. Previous works often mention data privacy superficially without offering concrete guidance on privacy-preserving AI or global regulatory strategies [
20].
This review takes an aspirational perspective and discusses the transformative potential of foundation models, explainability, federated learning, and multimodal integration in the coming decade. Therefore, while summarizing past achievements, this review also establishes a clear direction for future research, whereas earlier reviews largely catalogued prior studies. The main contributions of the review can be stated as follows:
This review provides an end-to-end roadmap from WSI acquisition and preprocessing to DL-based diagnostic, prognostic, and biomarker prediction models. This review also focuses on the specific challenges and opportunities associated with WSI-centered AI workflows.
This review adopts a narrative synthesis approach, informed by a structured systematic search to identify relevant WSI-focused AI studies.
2. Methodology
This work is designed as a narrative review supported by a systematic literature search, aimed at synthesizing key challenges and future directions in AI-based DP. Although a PRISMA-style flow diagram is included to ensure transparency in article selection, the review does not constitute a full systematic or scoping review.
A comprehensive literature search was performed in multiple electronic databases, including PubMed, Scopus, and Web of Science. The search was finalized on 15 July 2025. No medical librarians or information specialists were involved in developing the search strategy. The complete search strings for each database are available in
Appendix A. Keywords included “whole slide imaging”, “digital pathology”, “artificial intelligence”, “deep learning”, “computational pathology”, and “foundation models”. Boolean operators were used to expand or limit the results when appropriate (e.g., “digital pathology” AND “deep learning”, “whole slide image” AND “explainable AI”).
Inclusion Criteria: Eligible studies were considered based on the following:
Primary research or peer-reviewed systematic reviews conducted according to recognized quality standards (e.g., PRISMA) in relation to AI or DL in DP using WSI.
Addressed technical challenges, clinical use cases, ethical issues, or new methodologies.
Studies published in a peer-reviewed journal or conference proceedings.
Studies published in English.
Studies published from 2015 onwards, to capture the modern era of DL in pathology.
Exclusion Criteria: Studies were excluded if they met any of the following:
Non-peer-reviewed literature such as editorials, commentaries, and opinion pieces (unless they offered significant conceptual contributions).
Studies that did not use DP or WSI (e.g., purely radiology studies applying AI to radiological images).
Manuscripts that lacked sufficient methodological detail, were proof of concept only without validation, or were duplicates across databases.
Study Selection: Two reviewers independently screened all titles and abstracts, followed by full-text assessment of potentially eligible studies. Discrepancies were resolved through discussion and consensus; a third reviewer was available but was not required to adjudicate any disagreements. A PRISMA flow diagram was generated to visualize the number of records identified, screened, included, and excluded, together with the reasons for exclusion at each stage (
Figure 1).
Data Extraction and Synthesis: From each paper, data were extracted on aims and objectives, dataset characteristics, AI approach (e.g., CNN, self-supervised, foundation models), performance outcomes, limitations, and clinical relevance. Rather than providing a purely descriptive assessment, findings were systematically synthesized to identify patterns, gaps, controversies, and trends in the field. As part of data extraction, each included study was categorized as either DL-based (e.g., CNNs, MIL architectures, transformer-based or self-supervised models, foundation models) or non-deep-learning (e.g., classical ML using handcrafted features, rule-based algorithms, and traditional image analysis pipelines). This categorization was used to structure the narrative synthesis and highlight methodological trends.
A narrative synthesis approach was used, whereby results were grouped into predefined thematic areas rather than analyzed with a formal qualitative framework. All included studies contributed to the synthesis, although not every study is discussed individually.
3. Whole-Slide Imaging (WSI)
WSI is the conversion of traditional glass slides into digital images. Technological advances have led to the development of high-throughput WSI scanners that can digitize large numbers of slides of stained histological specimens for a range of applications in a short period of time [
6,
21]. DP refers to the gathering, management, and interpretation of pathology data [
6]. The establishment of DP workflow in an AP laboratory opens opportunities for computational pathology; image analysis techniques, such as quantification and measurement; and the application of AI in computer-aided diagnosis [
6]. WSI is the digital representation of histology specimens scanned with digital scanners. Digital scanners generate high-resolution images by applying several magnifications and focus planes at various resolution levels. The workflow of WSI systems necessitates appropriate histology technology to overcome practical obstacles such as poor staining and tissue folds, which can have a negative impact on the quality of the scanned slides [
22].
3.1. History
The earliest WSI scanners, introduced in the late 1990s, appear significantly outdated when contrasted with today’s advanced systems [
23]. Digital imaging in anatomic pathology before the advent of WSI depended mostly on cameras mounted on microscopes to create “static” digital images. Because they captured only selected areas of a glass slide, these static images had limited diagnostic value [
24].
The virtual microscope system appeared in 1997, scanning extensive areas of slides using robotic microscopy. The method combined robotic equipment, microscopes, and computers to produce a tiled mosaic, resulting in a composite “slide image.” The system had two main limitations: it required extended slide scanning times, and it was restricted to scanning a single broad area [
10].
The next important advance in WSI was the introduction of Interscope Technologies’ automated system, which could capture full slides at high resolution while remaining time-efficient and cost-effective. This success ushered in a new era, as numerous types of automated, low-cost WSI scanners became commercially available [
25]. There are numerous hardware and software options available in the present DP environment, each with unique limits and added value [
26] (
Figure 2).
3.2. Workflow of WSI in Diagnostic Pathology
High-quality WSIs and precise annotations are fundamental to developing effective AI models in pathology, as they enhance image classification and predictive performance, ultimately supporting pathologists in making accurate diagnoses.
3.2.1. WSI Acquisition
For digitizing tissue slides, obtaining high-quality WSIs is essential. However, biopsies often provide only small samples for AI training and can present challenging morphology. The target tumor or lesion may be small and dispersed even when the tissue on the slide is large [
21].
To create a paraffin tissue block, biopsy or excision samples are first preserved in formalin. The block is then sliced into very thin sections (3–4 µm thick) and mounted on glass slides, following standard slide preparation procedures. Once the slides are prepared, they are digitized using a slide scanner in a DP system. The scanner, equipped with ×20 or ×40 magnification lenses, captures images in either a line or patch scanning mode, stitches them together, and produces WSIs [
27]. In some cases, additional tests such as histochemistry, immunohistochemistry, or fluorescence in situ hybridization may also be performed for further analysis [
28].
WSIs are typically acquired at 20× or 40× magnification, corresponding to a spatial resolution of approximately 0.25 to 0.50 μm per pixel. Common file formats include proprietary formats (such as .svs, .ndpi, and .mrxs) as well as standard formats like DICOM-WSI. File sizes generally range from 1 to 8 gigabytes per slide, depending on resolution, compression, and image depth, which contributes to the increasing storage requirements in large laboratories.
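These storage figures can be approximated with simple arithmetic. The following Python sketch estimates the uncompressed size of a single focal-plane RGB scan; the 20 mm × 15 mm tissue area, 3-byte pixels, and ~30:1 compression ratio are illustrative assumptions, not properties of any particular scanner.

```python
def wsi_uncompressed_bytes(width_mm, height_mm, um_per_pixel, bytes_per_pixel=3):
    """Uncompressed size of a single focal plane scanned over the given tissue area."""
    px_w = int(width_mm * 1000 / um_per_pixel)   # mm -> um -> pixels
    px_h = int(height_mm * 1000 / um_per_pixel)
    return px_w * px_h * bytes_per_pixel

# A 20 mm x 15 mm tissue area at ~0.25 um/pixel (40x): 80,000 x 60,000 px.
raw_bytes = wsi_uncompressed_bytes(20, 15, 0.25)
compressed_gb = raw_bytes / 30 / 1e9   # assuming roughly 30:1 JPEG compression
```

For these assumed values the uncompressed scan is about 14.4 GB, and even heavy compression leaves the file within the 1–8 GB range quoted above.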
A slide scanner is used to digitize the slides once they have been prepared. Automated stage systems and high-resolution slide scanners capture many adjacent fields of view (tiles) at different magnifications. These tiles are then stitched together by specialized software to create high-resolution WSIs [
29].
To create high-resolution digital images, the full histology glass slide containing the sectioned and stained tissue must be scanned. Depending on the WSI scanner used, these images can be stored in a variety of digital formats. These files can be viewed with open-source WSI viewers such as QuPath, Cytomine, Orbit, ASAP, and OpenSlide with OpenSeadragon. When transferring image files, the viewer and file size determine the file type to use, which affects hardware performance and transmission (
Figure 3).
3.2.2. Data Sorting
Once WSIs are captured, the scanner initiates automatic preprocessing of the digital slides and stores them either on local servers or in cloud storage [
30]. This storage can be a vendor-neutral, centralized archive that works with several kinds of medical imaging. A major challenge also lies in securing sufficient storage capacity to manage the substantial data volume generated by each scanned slide [
31,
32].
Implementing WSI within a DP system can be costly, and these expenses tend to increase alongside the growth of a hospital’s patient load [
30,
32,
33].
3.2.3. Preprocessing Phase
The pre-processing of WSIs entails removing non-informative components, such as slide background and non-specific artefacts, to prepare a homogeneous dataset for analysis. The pre-processing phases include artefact detection, stain normalization, tissue segmentation, and tiling. Removing artefacts is essential to reduce diagnostic variability and ensure irrelevant data are eliminated [
31]. Additionally, stain normalization reduces erroneous classifications and predictions by correcting for the variability introduced by differences in staining, illumination, tissue preparation, and scanning conditions. Traditional image processing methods apply these steps explicitly, using well-understood techniques to improve the interpretability and accuracy of the models [
31].
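Tissue segmentation is often bootstrapped with a simple intensity threshold that separates stained tissue from the bright glass background. A minimal Python sketch follows; the threshold of 220 on an 8-bit grayscale patch and the toy pixel values are illustrative assumptions.

```python
def tissue_mask(gray_image, background_threshold=220):
    """Mark pixels darker than the threshold as tissue (True); bright,
    near-white background (empty glass) is excluded from analysis."""
    return [[px < background_threshold for px in row] for row in gray_image]

def tissue_fraction(mask):
    """Fraction of the patch covered by tissue, used to discard empty tiles."""
    flat = [v for row in mask for v in row]
    return sum(flat) / len(flat)

# Toy 3x4 grayscale patch: 255 = empty glass, low values = stained tissue.
patch = [[255, 250, 120, 90],
         [245, 130, 100, 80],
         [255, 255, 240, 110]]
mask = tissue_mask(patch)
```

Production pipelines typically replace this with Otsu thresholding or learned segmentation, but the principle of masking out background before tiling is the same.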
For intensity analysis, the processing of Haematoxylin and Eosin (H&E) images often starts with color deconvolution, which separates the stains by unmixing the RGB pixel values. Convolution and thresholding then yield binary images that expose the structures of interest to the analyst, sometimes followed by procedures to separate clustered objects or delineate their edges [
34,
35].
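Color deconvolution rests on the Beer-Lambert law: converting RGB intensities to optical density (OD) makes stain absorbance approximately additive, so stain separation reduces to inverting a stain matrix. A minimal sketch of the OD transform alone is shown below; the pixel values are illustrative, and a full deconvolution (e.g., Ruifrok-Johnston) would additionally apply the inverse of an H&E stain matrix.

```python
import math

def optical_density(rgb, background=255.0):
    """Beer-Lambert transform: OD = -log10(I / I0). Stain absorbances are
    roughly linear in OD space, which is what makes unmixing a matrix step."""
    return [-math.log10(max(v, 1) / background) for v in rgb]

# A pure-background pixel has ~zero OD; a stained pixel absorbs light.
od_glass = optical_density([255, 255, 255])
od_nucleus = optical_density([70, 40, 120])   # bluish hematoxylin-stained pixel
```

Note how the hematoxylin-stained pixel absorbs most strongly in the green channel, which is exactly the signature the stain matrix exploits.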
Additional approaches rely on statistical or mathematical methods to classify cells based on shape, greyscale conversion, and histogram analysis, or employ geometric transformations (such as rotation, scaling, and cropping). During DL model training and prediction, images are spatially tiled and reshaped into a format compatible with the algorithms used by these systems [
36].
4. DL Pipelines in Digital Pathology
DL, particularly Convolutional Neural Networks (CNNs), has revolutionized the analysis of medical images by identifying complex visual patterns on tissue slides [
30,
37]. CNNs use memory economically through parameter sharing: the same filter is applied to local regions rather than a separate weight being learned for each pixel, and filtering operations compress each image window into a single output value [
37].
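The memory argument can be made concrete by counting parameters. The sketch below compares a 3 × 3 convolution, whose weights are shared across all spatial positions, with a hypothetical fully connected layer mapping the same input to the same number of output features; the 224 × 224 input size is an illustrative assumption.

```python
def conv_params(kernel, in_ch, out_ch):
    """A convolutional layer reuses its kernel at every spatial position,
    so its parameter count is independent of the image size."""
    return kernel * kernel * in_ch * out_ch + out_ch   # weights + biases

def dense_params(in_features, out_features):
    """A fully connected layer needs one weight per input-output pair."""
    return in_features * out_features + out_features

shared = conv_params(3, 3, 64)                          # 1,792 parameters
unshared = dense_params(224 * 224 * 3, 224 * 224 * 64)  # hundreds of billions
```

The gap of roughly eight orders of magnitude is why dense connectivity over raw pixels is infeasible for images, let alone gigapixel WSIs.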
In most cases, whole-slide image resolutions reach gigapixels, far exceeding the input size of standard DL architectures, which typically process patches of at most 512 × 512 pixels. Therefore, sampling strategies based on cropping, downsizing, or tiling are required, which can lead to loss of information, reduced contextual awareness, and increased computational costs during training and inference.
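Tiling itself is straightforward bookkeeping: enumerate the top-left corners of fixed-size patches over the slide. A minimal sketch follows; the slide dimensions are illustrative, and real pipelines add overlap, padding, and background filtering.

```python
def tile_grid(width, height, tile=512, stride=512):
    """Top-left corners of the tiles covering a slide; edge tiles are
    simply clipped in this sketch rather than padded."""
    xs = range(0, width, stride)
    ys = range(0, height, stride)
    return [(x, y) for y in ys for x in xs]

# An 80,000 x 60,000 px slide produces tens of thousands of 512 px tiles,
# versus a single input if the slide were naively downsampled.
corners = tile_grid(80_000, 60_000)
```

Each corner then drives a patch read at the chosen magnification level, which is how gigapixel slides are fed through fixed-input networks.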
A DL model’s architecture is governed by the number of layers, filters, and links between inputs and outputs, with datasets often divided into training, validation, and testing subsets. The validation procedure enables hyperparameter optimization, and the technique of early stopping can be employed to mitigate the impact of overfitting, given the millions of parameters in these models [
38]. For classification tasks, well-known architectures such as VGG, ResNet, Inception, EfficientNet, and AlexNet have been used; for segmentation tasks, U-Net or FCNs have been employed, typically on GPU hardware and using DL frameworks such as TensorFlow 2.0 or PyTorch 2.0 [
33]. The use of DL in WSI analysis has developed rapidly, with CNN-based methods achieving reliable accuracy in classifying tissue types (2014–2017) [
39], followed by attention-based and Multiple Instance Learning (MIL) architectures (2017–2020) that enable training on large WSI datasets without requiring pixel-wise annotation [
40].
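The core idea of attention-based MIL can be sketched in a few lines: each patch embedding receives an attention score, and the slide-level representation is their softmax-weighted average, so only a slide-level label is needed for supervision. The embeddings and scores below are illustrative stand-ins for values a network would learn.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(patch_embeddings, attention_scores):
    """Slide-level embedding as an attention-weighted average of patch
    embeddings: no pixel-wise annotation is required, only a slide label."""
    weights = softmax(attention_scores)
    dim = len(patch_embeddings[0])
    return [sum(w * emb[d] for w, emb in zip(weights, patch_embeddings))
            for d in range(dim)]

# Three toy patch embeddings; the second patch receives almost all attention.
patches = [[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
slide_vec = attention_pool(patches, [0.0, 10.0, 0.0])
```

In a trained model the attention scores come from a small learned network, and the high-attention patches double as a coarse interpretability map.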
Recently, transformer-based models such as Vision Transformers (ViTs) have surpassed CNNs in cancer subtyping and biomarker prediction by capturing long-range spatial dependencies [
41]. Foundation and multimodal models now integrate large datasets to enhance generalizability [
40,
41].
Nonetheless, other obstacles remain, including data heterogeneity, the laborious nature of slide annotation, and limited model explainability. While attention maps provide some interpretability, most DL models remain “black boxes,” raising questions about trust and regulatory approval [
42]. Furthermore, dataset bias and fairness remain key challenges, as models trained primarily on Western data may underperform in other populations [
19]. Despite continued improvement, CNN-based approaches still struggle with stain variability, dataset generalization, and computational efficiency, highlighting hybrid CNN-transformer architectures as a viable next step in DP research.
Distribution of AI Approaches Across the Methodology of the Included Studies
Reviewing the included studies for their methodology showed that most studies with an identifiable AI methodology incorporated DL-based approaches (n = 23 of 76 included studies, 30%), reflecting the predominance of CNNs, MIL architectures, and transformer models, as well as the development of foundation models in modern digital pathology. In contrast, four (5%) of the included studies employed non-DL methodologies, such as handcrafted feature extraction, rule-based segmentation, or traditional image analysis methods (
Table 2). This distribution highlights the rapid shift toward DL-based models in WSI analysis.
5. Evolution and Applications of WSI
DP systems seek to enhance the efficiency and accuracy of pathologists’ decision-making. Their application hinges upon the analytical intention of each pipeline. Models leveraging ML have successfully been utilized to diagnose disease, characterize tissue types, predict outcomes, and quantify abnormal tissue microenvironments [
43]. However, WSI applications require large-scale data storage, high-performance computing, and patch sampling methods. DL has transformed quantitative image analysis (QIA) in DP, making biomarker discovery and cancer detection more accurate [
43].
One important use of DL is the automated classification of cells into specific types through nuclear segmentation and morphology. As an example, estrogen receptor status can be predicted from features of the nucleus on H&E-stained slides [
44,
45]. The most sophisticated WSI-based AI applications are consistently biomarker prediction, cancer subtyping, and tumor identification. CNNs have achieved diagnostic equivalence to expert pathologists in lung, breast, and prostate cancer [
12,
38]. Automated quantification of biomarkers like Ki-67 status or ER/PR status also reduces variability amongst observers [
14,
37]. Overall, AI is particularly adept at repetitive, morphology-based tasks. The main steps in the DP workflow include WSI acquisition, storage and pre-processing, DL modeling, and application. Despite the rapid development and spread of WSI, various challenges persist, including regulatory and interoperability issues.
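Biomarker quantification of the kind mentioned above often reduces to a labeling index, i.e., the fraction of positive nuclei among all counted tumor nuclei. A minimal sketch with illustrative counts (the 180/1,200 figures are hypothetical):

```python
def labeling_index(positive_nuclei, total_nuclei):
    """Labeling index (e.g., Ki-67): fraction of tumor nuclei staining
    positive. Automated counting makes this reproducible across observers."""
    if total_nuclei == 0:
        raise ValueError("no nuclei detected")
    return positive_nuclei / total_nuclei

# e.g., 180 Ki-67-positive nuclei out of 1,200 counted -> index of 0.15 (15%)
ki67 = labeling_index(180, 1200)
```

The hard part in practice is not this division but the upstream nuclear detection and classification that produce the two counts.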
Benefits and Applications of WSI in Diagnostic Pathology
AI and WSI in pathology provide significant benefits by improving workflow efficiency and cost-effectiveness. AI-assisted diagnosis enhances pathologists’ productivity by automating repetitive tasks and enabling precise quantitative analysis, leading to faster and more accurate results. Digital slides also remove the need for costly external consultations and facilitate standardized workflows that eliminate the handling, storage, and transport of glass slides, thereby reducing overall operational expenses [
46,
47]. Moreover, WSI facilitates remote pathology services and collaborative teamwork.
6. Digital Pathology Quality Management System
A comprehensive system called the DP Quality Management System (QMS), as clarified in
Table 3, was created to provide the best possible DP services. Implementing QMS practices is closely related to WSI-AI accreditation, as continuity of quality, verification, and workflow standardization is a prerequisite for deploying DP systems, on which model reliability and patient safety directly depend. The DP Quality Essentials framework comprises twelve core elements designed to ensure quality, compliance, and continuous improvement. Foundational components include leadership, clear governance structures, and alignment of laboratory goals with institutional objectives, supported by effective communication and succession planning [
48]. Facilities and Safety Management focus on maintaining a safe and efficient workspace through environmental controls and protective measures. Personnel Management ensures staff competency via training, evaluations, and ongoing education, while Supplier and Inventory Management oversee equipment procurement and tracking [
49]. Equipment Management optimizes the performance of scanners and related equipment through qualification processes, preventive maintenance, and continuous performance monitoring. Process Management standardizes procedures for slide digitization and image storage, supported by quality checks, and Documents and Records Management maintains updated SOPs and compliance documentation [
50].
Information Management protects data integrity with secure IT systems and LIS integration, while Occurrence Management and Assessments address non-conformities and ensure quality through audits and proficiency testing [
51]. WSI allows teleconsultation, proficiency testing, and slide archiving, while AI reduces interobserver variability and facilitates diagnostic consistency. AI algorithms can be used to audit pathologists’ diagnoses and thus reduce the risk of errors; for example, the College of American Pathologists (CAP) uses WSI in proficiency testing and uses AI to provide feedback on diagnostic consistency [
52].
7. Routine Diagnosis
WSI shows 89–99% agreement with standard microscopy. Because traditional light microscopy remains the gold standard for clinical diagnosis, the reported concordance rates indicate agreement between WSI-based assessments and traditional microscopic evaluation, although image quality and reliability can be affected by scanner performance. AI overcomes some focus and staining inconsistencies (through deep-focus and generative models) and allows improved standardization of images, which is essential for reliable AI-based diagnosis, especially in immunohistochemistry [
53]. AI image analysis outperforms human interpretation in counting and measuring nuclear features, delineating and classifying tissue regions, and quantifying the resulting data. For example, Kumar et al. developed an algorithm for nuclear segmentation into three classes, which improves pixel-level detection of structural boundaries [
1], while AI models can identify and localize relevant diagnostic cells (e.g., tumor nuclei, Ki-67 hotspots) and provide testable tissue classifications for lesions like ductal carcinoma in situ [
54,
55].
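Concordance figures such as the 89–99% agreement above are usually reported as percent agreement, sometimes supplemented by chance-corrected statistics like Cohen's kappa. A minimal sketch of both on a toy set of paired diagnoses (the five cases are entirely illustrative):

```python
from collections import Counter

def percent_agreement(a, b):
    """Raw fraction of cases where the two modalities agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters or modalities."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance level
    return (po - pe) / (1 - pe)

wsi   = ["benign", "malignant", "benign", "benign", "malignant"]
glass = ["benign", "malignant", "benign", "malignant", "malignant"]
agreement = percent_agreement(wsi, glass)
kappa = cohens_kappa(wsi, glass)
```

Kappa is informative here because, with imbalanced diagnostic categories, high percent agreement can arise largely by chance.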
In addition, using WSI offers a pathway to reduce costs related to data management and long-term storage, in part supporting sustainability in pathology [
4,
56]. Raciti et al. performed validation studies on the Paige Prostate Alpha system as a case study showing improved diagnostic accuracy, efficiency of interpretive work, and slide review time [
57]. While these products are promising, many validation studies are limited to single centers and curated datasets, limiting generalizability. As with WSI studies more broadly, the algorithms generally demonstrate strong performance, but external validation and standardized software integration into the pathology workflow remain lacking. AI has shown marked performance in IHC, providing a corresponding increase in overall diagnostic accuracy [
58], for example in automated Ki-67 quantification in prostate cancer, where it performs comparably to the manual method [
14]. In addition to improved diagnostics with AI image analysis, minimization of the administrative workload resulting from manual slide handling and retrieval from archives, and improved efficiency and reduced costs, are further anticipated benefits of the use of pathology workflows utilizing WSI [
59,
60].
8. Tumor Detection and Classification
AI-based WSI analysis has demonstrated strong performance in primary tumor detection. Notably, Coudray et al. [
12] developed a DL model capable of distinguishing between lung adenocarcinoma and squamous cell carcinoma on WSIs with accuracy comparable to molecular testing. Despite these advancements, most AI detection systems have been validated retrospectively rather than through prospective real-world trials. Transferability and generalization remain limited due to variations in staining methods, scanner types, and patient populations across institutions. Therefore, while tumor detection is among the most mature AI applications in pathology, broader clinical adoption requires large-scale, multi-center validation studies and seamless integration into routine diagnostic workflows.
8.1. Prognostic Modeling
AI-enabled WSI has introduced promising opportunities for prognostication and precision oncology. Kurian et al. [
59] demonstrated that DL could stratify luminal breast cancer patients into prognostic groups more effectively than traditional histopathological grades or clinical features. Similarly, Bossard et al. [
61] developed an image-based risk score model for cutaneous melanoma that outperformed standard clinical predictors of survival. Saltz et al. [
62] used CNNs to analyze tumor-infiltrating lymphocyte (TIL) patterns from The Cancer Genome Atlas, revealing that the spatial organization of TILs was prognostic across 13 cancer subtypes.
Our field is witnessing an important development: AI models are not simply mimicking morphologic patterns discernible to pathologists. Rather, models are beginning to detect morphological patterns imperceptible to the human eye that may provide critical digital biomarkers for prognostication. Despite the exciting progress made by these AI-enabled models, a substantial gap still exists between potential and reality. There are also ongoing discussions regarding whether AI-derived prognostic information should supplement traditional clinicopathological factors or replace them altogether.
8.2. Biomarker Prediction
Arguably, predicting molecular biomarkers directly from routine H&E WSIs stands as the most exciting application of AI in DP. Naik et al. [
45] showed that determination of hormone receptor (ER/PR) status in breast cancer could be done directly from histology slides, and Blessin et al. [
14] validated Ki-67 labeling indices in prostate cancer by automated quantitative multiplex immunohistochemistry. Other teams have pushed forward with the concepts of prediction by proposing predictions of IDH1 mutation status in glioma and MSI status in colorectal cancer [
15,
63].
AI could reduce our dependence on expensive molecular tests, help speed the discovery of new biomarkers, and open the doors of precision oncology. However, significant controversies remain: a model is likely to fail when applied to a cohort that differs from its training data in tissue preparation and staining. Moreover, regulatory bodies have steered clear of allowing the substitution of molecular testing with computationally derived estimates, and for good reason; difficult ethical and clinical questions about reliability, reproducibility, and patient safety must still be answered.
8.3. Cancer Subtyping
AI-based models have demonstrated strong performance in distinguishing histological subtypes of gliomas, breast, and colorectal cancers [
54,
64]. Naik et al. [
45] showed that DL models were able to classify breast cancer subtypes and, additionally, were sensitive to subtle morphological features that correlated with the underlying genomic profiles. Similarly, in colorectal cancer, Sun et al. [
64] found that AI-based histological analysis could improve risk stratification among stage III patients and could be useful in predicting treatment response. This reflects an emerging trend: WSI-based subtyping models have the potential to act as a bridge between morphology and genomics. However, further development is needed for rare tumor subtyping, where data are limited and disease annotation is particularly onerous.
8.4. Beyond Oncology
Although oncology remains the focus of AI and WSI research, applications in non-cancer domains are steadily expanding. In transplant pathology, AI has been applied to kidney biopsies to quantify features associated with rejection, providing objective and reproducible assessments. Similarly, CNNs have been developed for inflammatory bowel disease (IBD) grading, offering standardized and less subjective evaluations. In pediatric pathology, AI and WSI are increasingly used for analyzing rare tumors and developmental disorders. Hutchinson et al. [
52] highlighted that federated WSI networks among pediatric centers could accelerate research on rare diseases and reduce diagnostic delays.
These developments demonstrate that AI in DP is evolving from simple detection to advanced prognostic and molecular prediction models. While adoption is most advanced in oncology, emerging applications in transplantation, inflammation, and pediatrics highlight their growing clinical significance. However, challenges persist regarding model generalizability, validation, regulatory approval, and interpretability. The future of AI in DP lies in integrating multimodal data—such as WSI, genomics, and clinical information—through foundation models and federated learning to achieve scalable, robust, and clinically applicable systems.
9. Current Challenges of WSI in AI
While AI in DP has significantly advanced disease diagnosis, it still faces several challenges. These include poor image resolution and quality, need for large histologically annotated datasets, shortage of clinical and technical expertise, data dimensionality and hardware constraints, model generalizability, and lack of transparency [
1,
46]. Compared with X-ray and CT scans, WSI produces gigapixel-resolution images that are considerably larger. Files this large can slow image processing and analysis and strain storage capacity. Because of hardware limitations, DL algorithms are generally trained on small image tiles (approximately 250 × 250 pixels), which requires tiling or down-sampling images before model input [
65]. Such downscaling can result in losing important information and decreased model performance [
66].
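To make the tiling arithmetic concrete, the sketch below enumerates fixed-size patch coordinates for a gigapixel slide; the function name and slide dimensions are illustrative, not taken from any cited pipeline.

```python
def tile_coordinates(width, height, tile=250, stride=250):
    """Enumerate (x, y, w, h) boxes covering a slide of the given pixel
    dimensions with fixed-size tiles, so each tile matches the small
    input size typical of DL models."""
    boxes = []
    for y in range(0, height, stride):
        for x in range(0, width, stride):
            # Edge tiles are clipped to the slide boundary.
            boxes.append((x, y, min(tile, width - x), min(tile, height - y)))
    return boxes

# A hypothetical 100,000 x 80,000 pixel WSI yields 400 x 320 = 128,000
# tiles of 250 px, illustrating the data volume that tiling must manage.
print(len(tile_coordinates(100_000, 80_000)))  # 128000
```

Tiling preserves full-resolution detail that whole-slide down-sampling would discard, at the cost of very many model inputs per slide.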
Another challenge of significant concern is the transferability of DP protocols across laboratories and institutions. Differences in hardware, tissue processing and staining protocols, image scanning, and annotation procedures can introduce variability during image preprocessing. The resulting differences in image quality affect the performance of DL models [
67]. Techniques such as data augmentation, transfer learning, and domain adaptation, which increase the effective variability of the training data, can help overcome this obstacle and improve the generalizability of the models [
27]. Practitioners must be able to explain and validate their decisions, yet it is hard to rely on AI models’ outputs because they frequently lack a consistent rationale. Enhancing transparency and user trust entails developing DL techniques with interpretable decision-making processes. Attention-based multiple instance learning is one such approach, as it can identify the tissue regions most relevant to a diagnosis [
68].
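As a minimal sketch of how attention-based multiple instance learning aggregates tile-level evidence, the snippet below scores each tile embedding, softmaxes the scores into attention weights, and forms a weighted slide embedding. In practice, the scoring vector is learned by a small neural network; here it is fixed, and all data are toy values.

```python
import math

def attention_mil_pool(tile_features, score_vector):
    """Score each tile embedding, convert scores to attention weights via
    softmax, and return the attention-weighted slide embedding together
    with the weights (which highlight diagnostically relevant tiles)."""
    scores = [sum(w * f for w, f in zip(score_vector, feat))
              for feat in tile_features]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    attn = [e / total for e in exps]
    dim = len(tile_features[0])
    slide = [sum(a * feat[d] for a, feat in zip(attn, tile_features))
             for d in range(dim)]
    return slide, attn

# Three tile embeddings; the second scores highest, dominates the pooled
# slide embedding, and is flagged as the most diagnosis-relevant region.
slide, attn = attention_mil_pool(
    [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]], score_vector=[2.0, 2.0])
```

The attention weights double as an interpretability signal: sorting tiles by weight yields a heat map of the regions driving the slide-level prediction.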
Developing an AI model requires collaboration among experts in clinical-grade computational pathology, statistics, AI, and clinical medicine. This process involves clinical data collection and preparation, annotation, model training, and validation. Each expert plays a crucial role, from guiding data analysis to building robust models and validating outcomes. However, assembling such a diverse team is both time-consuming and costly, adding to the complexity of development. The challenges in DP can be categorized into three stages: pre-acquisition, acquisition, and post-acquisition of WSI. During scanning, defects, unwanted tissue, and artifacts on the glass slide are transferred to the digitized image, significantly impacting AI performance [
69].
9.1. Current Technical and Operational Challenges in WSI Acquisition
Case selection is critical and should cover a broad range of disease entities and histologic subtypes, both neoplastic and non-neoplastic. Biopsy samples of the selected disease may be morphologically limited or insufficient for an AI study. For instance, in many nasopharyngeal carcinoma cases, a punch biopsy is performed in which the sampled tissue is small, yet almost all further treatment of the patient is planned and carried out based on this biopsy. Nasopharyngeal biopsies usually consist of small, fragmented tissues [
70,
71]. Preparation of tissue slides for WSI can be tedious, complicated, and time-consuming, particularly if a small or diffuse area of tumor or disease exists in the larger piece of tissue, often leading to challenges regarding data and annotation [
72]. The selected tissue slide may also contain areas of normal tissue, necrosis, cystic spaces, and hemorrhage, which necessitates a selection and quality control mechanism [
35]. Various artifacts can arise from cracks, scratches, and other physical damage when slides or specimens are stored and handled, necessitating re-sectioning and re-staining. Proper storage is of utmost importance, since slides can become contaminated, faded, or dehydrated from prolonged cover-matrix exposure. Additionally, slides with immunofluorescence (IF) or fluorescence in situ hybridization (FISH) staining should be scanned promptly because of their technical sensitivity and the specific expertise and protocols required to prepare them [
35,
66,
73].
9.2. Challenges in WSI Acquisition
Before WSI can be obtained, tissue slide quality needs to be assessed, slides must be appropriately retrieved from archive storage, and the correct slides selected. Image quality can be adversely affected by external factors, including cracks, breaks, and scratches, which usually result from physical handling or damage to the plastic coatings during storage and cleaning [
68].
Depending on the extent of damage to the specimen, slides may need to be cleaned, coated, stained, or re-sectioned. Poor storage conditions can lead to contamination, dirt accumulation, fading of hematoxylin and eosin (H&E), immunohistochemistry (IHC), and histochemistry (HC) stains, as well as dehydration of the cover matrix. Since immunofluorescence (IF) and fluorescence in situ hybridization (FISH) slides are more complex to digitize and demand advanced equipment and technical expertise, they should be digitized promptly [
5,
20,
72]. Device-dependent challenges also play a crucial role: the optical system forms the image, the sensor digitizes it, and the hardware converts it into a final file. Light sources and optical components such as lenses, prisms, and mirrors produce variability that can influence the quality of an image [
74].
Magnification of the objective lens determines the file size and microns per pixel, but large datasets such as TCGA may lack complete magnification metadata. Proper adjustment of the light source reduces color variability and minimizes the need to normalize input data before AI analysis [
75]. Sensor characteristics and pixel size also affect resolution and can introduce artifacts. Higher-resolution imaging generally requires advanced, expensive hardware with robust computing capabilities [
61,
76].
In digital pathology, WSI-anchored AI pipelines require rigorous standardization and domain adaptation, because variations in staining methods, scanner optics, image compression, color calibration, and slide storage or display conditions directly influence the performance and reliability of AI models. These variations can induce domain shift, reducing model performance and ultimately accuracy [
77]. Moreover, factors such as scanner illumination, calibration drift, and staining density can affect model predictions and their confidence. Failing to address these causes may lead to diminished generalizability, reduced robustness, and potential bias across institutions and patient subgroups [
78].
9.3. Post-WSI Acquisition Challenges
The storage and management of digital WSIs is a substantial challenge in DP. Storing WSIs raises considerable financial and technical issues: each slide can occupy between 1 and 8 GB, and as case volumes grow, the long-term upkeep, cost, and security of the data become expensive [
48,
79]. Moreover, efficient database search and case retrieval are essential, because the value and implications of histopathologic and molecular data depend on data usability [
59]. These issues are exacerbated by color variance arising from differences in tissue thickness, staining protocols, illumination, scanner type, and viewing device, adding further layers of complexity to standardization [
80,
81]. The displayed colors of any WSI therefore depend on both the capture and the display parameters [
82].
Generally, calibration corrects device-based color inaccuracies, alignment standardizes color across datasets, and normalization ensures consistency among images within a dataset; all three are critical for consistent and reliable AI training and performance [
83]. However, performing stain normalization in real time on high-resolution images is computationally challenging, and the input data must be of good quality, or variation can introduce artifacts into the algorithms. Variability in staining and slide preparation can alter image color and contrast, making high-quality datasets harder to assemble; preparing and annotating such datasets is labor-intensive and requires expert pathologists, all of which adds to the complexity of DP workflows [
80,
81,
82].
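The mean/standard-deviation matching at the heart of Reinhard-style stain normalization can be sketched per color channel as follows. A full implementation operates in the LAB color space across all channels; this single-channel version, with made-up pixel values, is illustrative only.

```python
def match_channel_stats(values, target_mean, target_std):
    """Shift and scale one color channel so its mean and standard deviation
    match those of a reference slide, reducing lab-to-lab color variance."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5 or 1.0
    return [(v - mean) / std * target_std + target_mean for v in values]

# Pixels from a darkly stained slide re-mapped to reference statistics.
normalized = match_channel_stats([10, 20, 30], target_mean=100, target_std=5)
```

Because the transform is a simple affine map per channel, it is cheap to apply, but that same simplicity is why more sophisticated stain-separation methods are preferred when staining varies non-uniformly across the tissue.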
9.4. Legal and Ethical Challenges
There are various issues with Digital AI Pathology (DAIP) arising from the limitations of AI. The type and quality of training data affect performance and are a source of bias that may change diagnostic accuracy across populations. Some studies have shown performance differences between Black and White patients in cancer subtyping and mutation prediction, raising data collection and model fairness concerns [
46,
84,
85].
Although human expertise remains essential in advanced diagnostic interpretation, multidisciplinary teamwork, and ethically sensitive conversations, there are doubts as to how AI will impact pathology.
Moreover, the lack of clear regulatory oversight raises legal and ethical questions about accountability in cases of AI error [
86]. Another limitation is DAIP’s dependence on image-based analysis, which leaves language-based reasoning largely untapped. Large language models (LLMs) such as ChatGPT 4.0 can assist with general diagnostic reasoning; however, LLMs lack patient-specific clinical knowledge, frequently return misleading or incorrect information, and therefore must be validated appropriately before clinical use [
87,
88,
89]. Ethical issues also arise around data sharing and privacy. Under the common law duty of confidentiality, patient data must be protected, and data-sharing processes must maintain that level of protection unless the data are de-identified, the patient consents to sharing, or there is a legal obligation to share [
20]. Privacy safeguards such as access controls, de-identification, k-anonymity, secure storage, audit trails, and stringent contracts minimize the risk of re-identification and form part of appropriate ethical governance of DP data [
90].
9.5. Regulatory Approval Pathways
An additional and often underestimated challenge in implementing AI-based DP is the regulatory approval process required before clinical deployment. In the United States, the Food and Drug Administration (FDA) regulates such systems through pathways like 510(k) clearance, De Novo classification, or Premarket Approval (PMA), depending on the device’s risk category [
91]. Several AI-powered DP platforms, such as the Philips IntelliSite Pathology Solution and Paige Prostate, have already obtained FDA clearance, highlighting the feasibility but also the complexity of these regulatory processes [
92]. Similarly, in Europe, AI diagnostic tools must comply with the In Vitro Diagnostic Regulation (IVDR) and obtain CE marking to demonstrate safety and performance [
93]. These frameworks ensure clinical reliability, transparency, and accountability but can also slow innovation, as developers must provide robust validation evidence demonstrating interpretability, reproducibility, and data security before approval [
94]. Addressing these pathways is essential to bridge the gap between research-grade AI and routine clinical use.
Currently, many DL models are considered “black boxes,” which reduces their trustworthiness in a clinical context. Future research must therefore implement explainable AI (XAI) methodologies that link outputs to human-interpretable histological features (e.g., saliency maps that pinpoint image regions containing nuclei or gland-like structures). Beyond technically explaining machine-generated outputs, a model of shared accountability is needed in which AI provides decision support while the pathologist remains ultimately responsible for the diagnosis, keeping the pathologist the key actor in the clinical and medicolegal setting [
95].
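One simple, model-agnostic way to produce such a saliency map is occlusion: replace each tile with a neutral patch and record how much the slide-level score drops. The sketch below uses a toy scoring function and hypothetical tile labels; in a real pipeline, `score` would be the trained model’s prediction on image tiles.

```python
def occlusion_saliency(score, tiles, neutral_tile):
    """For each tile, measure the drop in the slide-level score when that
    tile is replaced by a neutral patch; larger drops mark regions the
    prediction depends on, yielding a human-interpretable saliency map."""
    base = score(tiles)
    return [base - score(tiles[:i] + [neutral_tile] + tiles[i + 1:])
            for i in range(len(tiles))]

# Toy slide-level score: fraction of tiles a (hypothetical) tile classifier
# labels "tumor". Occluding a tumor tile lowers the score; a normal tile does not.
toy_score = lambda ts: ts.count("tumor") / len(ts)
saliency = occlusion_saliency(toy_score, ["tumor", "normal", "tumor"], "normal")
```

Occlusion requires no access to model internals, which makes it attractive for auditing vendor models, though it is slow for gigapixel slides because each tile demands a fresh forward pass.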
10. Future Development and Research
The future of WSI in pathology appears encouraging, thanks to technical advances that will improve diagnostic accuracy. Technologies such as high-resolution 3D imaging and multispectral imaging are poised to strengthen the link between pathology and radiology, facilitate advanced tissue characterization through color-based classification, and enable multi-label analysis [
68,
96]. The incorporation of WSI and AI will also completely transform research collaboration, particularly in pediatric pathology and with rare disease cases, by providing secure digital transfer of slides, speed and reproducibility, and a federated network of research centers [
52,
97].
Further, when combined with ML, WSI offers opportunities for better tumor risk stratification, the discovery of important prognostic markers, and the automation of complex diagnostic tasks that traditionally relied on visual assessment [
61,
62]. AI and ML systems can be deployed to improve standardization in oncologic and perinatal pathology, providing more reproducible and objective results [
52,
98]. Ultimately, interoperability through universally adopted DICOM standards will improve data transfer and sharing across vendors and institutions. The emergence of AI-optimized imaging and algorithms will allow pathologists to transition into strategic, data-driven providers in patient management. As a result, WSI stands as a major tenet of DP in the future practice of pathology [
26,
99].
WSI and AI research are anticipated to progress in ways that fundamentally revolutionize this field, which will go far beyond current axes of standardization and multi-center research. A major emerging emphasis will be on the construction of foundational and multimodal AI models that combine multiple data types—such as medical imaging (WSIs and MRIs), genomic (DNA sequencing), and clinical (demographics and lab) data—into a complete, data-rich portrait of a patient [
96].
These combined systems will advance precision oncology and enable the formation of “digital twins”: virtual representations of a patient, built from multiple linked data streams, that predict treatment response and disease progression. Addressing privacy and data sharing will also be important. Emerging AI techniques, including federated learning, differential privacy, and homomorphic encryption, will facilitate secure learning across institutions while sharing little or no patient data. These approaches could help overcome one of the most persistent barriers in DP, the limited availability of large, diverse, multi-institutional patient-centered datasets, and thereby support safe, generalizable AI systems in pathology [
99].
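The core of federated learning can be sketched as federated averaging (FedAvg): each institution trains locally and shares only model weights, which a coordinating server combines in proportion to local dataset size, so patient-level WSI data never leaves a site. This is a minimal sketch with flat weight vectors and invented site sizes; real systems iterate this aggregation over many training rounds.

```python
def federated_average(site_weights, site_sizes):
    """Combine per-institution model weight vectors into a global model,
    weighting each site's contribution by its local dataset size
    (the FedAvg aggregation step)."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [sum(w[d] * n for w, n in zip(site_weights, site_sizes)) / total
            for d in range(dim)]

# Two hospitals with 100 and 300 local slides contribute locally trained
# weights; the larger cohort carries proportionally more influence.
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 300])
```

Differential privacy and homomorphic encryption then harden this exchange further, by noising or encrypting the shared weight updates themselves.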
11. Conclusions
In contrast to earlier reviews that largely provide descriptive summaries of digital pathology or focus on isolated applications, this review presents a WSI-centered, end-to-end perspective linking slide acquisition, preprocessing, storage, and standardization with downstream AI modeling, clinical deployment, and regulatory oversight. By systematically relating pre-analytical, analytical, and post-analytical WSI characteristics to AI performance and generalizability, this work provides an integrated framework to guide future research and clinical translation in digital pathology.
The predominance of deep learning in WSI research reflects its suitability for complex tasks such as biomarker prediction, cancer subtyping, and prognostic modeling through end-to-end feature learning. Nevertheless, classical machine-learning approaches remain relevant for simpler tasks, resource-limited settings, and scenarios in which interpretability is prioritized. Across all methodological paradigms, challenges related to data quality, stain variability, domain shift, computational burden, and regulatory constraints continue to limit robust clinical deployment.
In the short term, challenges in WSI-based AI can be mitigated through standardized staining and scanning protocols, routine quality control, participation in multi-center consortia, adoption of DICOM-WSI and minimal metadata standards, and the use of stain normalization and data augmentation to reduce domain shift; over the longer term, progress will rely on the development of foundation and multimodal models, federated and privacy-preserving learning frameworks, and robust clinical validation through prospective trials, benchmarking, and post-deployment surveillance to ensure safe, generalizable, and clinically deployable AI systems.
Recent regulatory developments, including updated guidance from the U.S. Food and Drug Administration (FDA) for AI/ML-enabled medical devices and the implementation of the European Union AI Act (Regulation (EU) 2024/1689), have strengthened oversight of AI-enabled WSI systems by classifying AI-driven medical devices as high-risk and requiring compliance with both AI-specific and existing medical device regulations to ensure safe clinical deployment. These frameworks emphasize transparency, human oversight, and accountability as prerequisites for clinical adoption.
Overall, the integration of AI with WSI has the potential to fundamentally transform diagnostic pathology, precision oncology, and research into both common and rare diseases. However, this potential can only be realized through a coherent, end-to-end WSI-centered workflow that recognizes slide quality, standardization, data governance, interpretability, and clinical validation as foundational requirements. Only through such an integrated approach can AI in digital pathology transition from promising research tools to trustworthy systems capable of safely supporting routine clinical decision-making.
Author Contributions
Conceptualization, S.A.O., J.A.M.A. and N.K.T.E.-O.; methodology, S.A.O., J.A.M.A., N.K.T.E.-O. and A.J.A.A.; formal analysis, S.A.O., J.A.M.A. and N.K.T.E.-O.; investigation, S.A.O., J.A.M.A. and N.K.T.E.-O.; data curation, S.A.O., J.A.M.A. and N.K.T.E.-O.; writing—original draft preparation, S.A.O., J.A.M.A., N.K.T.E.-O. and A.J.A.A.; writing—review and editing, S.A.O., J.A.M.A., N.K.T.E.-O. and A.J.A.A.; project administration, S.A.O., J.A.M.A., N.K.T.E.-O. and A.J.A.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Acknowledgments
During the preparation of this manuscript, the authors used ChatGPT 5.2 (OpenAI, San Francisco, CA, USA) for language editing, rephrasing technical content, summarizing literature, and refining structure. The authors have reviewed and edited the output as needed and take full responsibility for the content of the publication.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| WSI | Whole Slide Imaging |
| CNN | Convolutional Neural Networks |
| AI | Artificial Intelligence |
| DL | Deep Learning |
| DP | Digital Pathology |
| ML | Machine Learning |
Appendix A
Table A1.
Full database-specific search strings used in PubMed, Scopus, and Web of Science, including Boolean operators, MeSH terms (where applicable), and filters applied.
| Database | Specific Search Strategies | Filters |
|---|---|---|
| 1. PubMed | ((‘Whole Slide Imaging’ [MeSH] OR ‘whole slide imaging’ [title/abstract] OR ‘whole slide image *’ [title/abstract] OR ‘WSI’ [title/abstract]) AND (‘Digital Pathology’ [MeSH] OR ‘digital patholog *’ [title/abstract] OR ‘computational patholog *’ [title/abstract]) AND (‘Artificial Intelligence’ [MeSH] OR ‘artificial | The English language, humans, and articles. |
| 2. Scopus | (TITLE-ABS-KEY (‘whole slide imaging’ OR ‘whole slide image *’ OR ‘WSI’) AND TITLE-ABS-KEY (‘digital pathology’ OR ‘computational pathology’) AND TITLE-ABS-KEY (‘artificial intelligence’ OR ‘deep learning’ OR ‘machine learning’ OR ‘foundation model *’ OR ‘explain | The English language and document types, article and review. |
| 3. Web of Science | TS = (‘whole slide image’ OR ‘whole slide image *’ OR WSI) AND TS = (‘digital pathology’ OR ‘computational pathology’) AND TS = (‘artificial intelligence’ OR ‘deep learning’ OR ‘machine learning’ OR ‘foundation model *’ OR ‘explainable AI’ OR XAI). | The English language and document types, article and review. |
References
- Kumar, N.; Gupta, R.; Gupta, S. Whole Slide Imaging (WSI) in Pathology: Current Perspectives and Future Directions. J. Digit. Imaging 2020, 33, 1034–1040. [Google Scholar] [CrossRef] [PubMed]
- Bera, K.; Schalper, K.A.; Rimm, D.L.; Velcheti, V.; Madabhushi, A. Artificial intelligence in digital pathology—New tools for diagnosis and precision oncology. Nat. Rev. Clin. Oncol. 2019, 16, 703–715. [Google Scholar] [CrossRef] [PubMed]
- Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson: London, UK, 2021. [Google Scholar]
- Aggarwal, A.; Bharadwaj, S.; Corredor, G.; Pathak, T.; Badve, S.; Madabhushi, A. Artificial intelligence in digital pathology—Time for a reality check. Nat. Rev. Clin. Oncol. 2025, 22, 283–291. [Google Scholar] [CrossRef] [PubMed]
- Cui, M.; Zhang, D.Y. Artificial intelligence and computational pathology. Lab. Investig. 2021, 101, 412–422. [Google Scholar] [CrossRef]
- Wong, A.N.N.; He, Z.; Leung, K.L.; To, C.C.K.; Wong, C.Y.; Wong, S.C.C.; Yoo, J.S.; Chan, C.K.R.; Chan, A.Z.; Lacambra, M.D.; et al. Current Developments of Artificial Intelligence in Digital Pathology and Its Future Clinical Applications in Gastrointestinal Cancers. Cancers 2022, 14, 3780. [Google Scholar] [CrossRef]
- Koefoed-Nielsen, H.; Kidholm, K.; Frederiksen, M.H.; Mikkelsen, M.L.N. Expectations and Experiences Among Clinical Staff Regarding Implementation of Digital Pathology: A Qualitative Study at Two Departments of Pathology. J. Imaging Inform. Med. 2024, 37, 2500–2512. [Google Scholar] [CrossRef]
- Xu, C.; Jackson, S.A. Machine learning and complex biological data. Genome Biol. 2019, 20, 76. [Google Scholar] [CrossRef]
- Hanna, M.G.; Reuter, V.E.; Ardon, O.; Kim, D.; Sirintrapun, S.J.; Schüffler, P.J.; Busam, K.J.; Sauter, J.L.; Brogi, E.; Tan, L.K.; et al. Validation of a digital pathology system including remote review during the COVID-19 pandemic. Mod. Pathol. 2020, 33, 2115–2127. [Google Scholar] [CrossRef]
- Farahani, N.; Parwani, A.V.; Pantanowitz, L. Whole slide imaging in pathology: Advantages, limitations, and emerging perspectives. Pathol. Lab. Med. Int. 2015, 7, 23–33. [Google Scholar] [CrossRef]
- Borowsky, A.D.; Glassy, E.F.; Wallace, W.D.; Kallichanda, N.S.; Behling, C.A.; Miller, D.V.; Oswal, H.N.; Feddersen, R.M.; Bakhtar, O.R.; Mendoza, A.E.; et al. Digital Whole Slide Imaging Compared with Light Microscopy for Primary Diagnosis in Surgical Pathology. Arch. Pathol. Lab. Med. 2020, 144, 1245–1253. [Google Scholar] [CrossRef]
- Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat. Med. 2018, 24, 1559–1567. [Google Scholar] [CrossRef]
- Mukhopadhyay, S.; Feldman, M.D.; Abels, E.; Ashfaq, R.; Beltaifa, S.; Cacciabeve, N.G.; Cathro, H.P.; Cheng, L.; Cooper, K.; Dickey, G.E.; et al. Whole Slide Imaging Versus Microscopy for Primary Diagnosis in Surgical Pathology: A Multicenter Blinded Randomized Noninferiority Study of 1992 Cases (Pivotal Study). Am. J. Surg. Pathol. 2018, 42, 39–52. [Google Scholar] [CrossRef]
- Blessin, N.C.; Yang, C.; Mandelkow, T.; Raedler, J.B.; Li, W.; Bady, E.; Simon, R.; Vettorazzi, E.; Lennartz, M.; Bernreuther, C.; et al. Automated Ki-67 labeling index assessment in prostate cancer using artificial intelligence and multiplex fluorescence immunohistochemistry. J. Pathol. 2023, 260, 5–16. [Google Scholar] [CrossRef]
- Bustos, A.; Payá, A.; Torrubia, A.; Jover, R.; Llor, X.; Bessa, X.; Castells, A.; Carracedo, Á.; Alenda, C. xDEEP-MSI: Explainable Bias-Rejecting Microsatellite Instability Deep Learning System in Colorectal Cancer. Biomolecules 2021, 11, 1786. [Google Scholar] [CrossRef]
- Basak, K.; Ozyoruk, K.B.; Demir, D. Whole Slide Images in Artificial Intelligence Applications in Digital Pathology: Challenges and Pitfalls. Turk. J. Pathol. 2023, 39, 101–108. [Google Scholar] [CrossRef] [PubMed]
- McGenity, C.; Clarke, E.L.; Jennings, C.; Matthews, G.; Cartlidge, C.; Freduah-Agyemang, H.; Stocken, D.D.; Treanor, D. Artificial intelligence in digital pathology: A systematic review and meta-analysis of diagnostic test accuracy. Npj Digit. Med. 2024, 7, 114. [Google Scholar] [CrossRef]
- Parvin, N.; Joo, S.W.; Jung, J.H.; Mandal, T.K. Multimodal AI in Biomedicine: Pioneering the Future of Biomaterials, Diagnostics, and Personalized Healthcare. Nanomaterials 2025, 15, 895. [Google Scholar] [PubMed]
- Xu, H.; Usuyama, N.; Bagga, J.; Zhang, S.; Rao, R.; Naumann, T.; Wong, C.; Gero, Z.; González, J.; Gu, Y.; et al. A whole-slide foundation model for digital pathology from real-world data. Nature 2024, 630, 181–188. [Google Scholar] [CrossRef]
- McKay, F.; Williams, B.J.; Prestwich, G.; Bansal, D.; Hallowell, N.; Treanor, D. The ethical challenges of artificial intelligence-driven digital pathology. J. Pathol. Clin. Res. 2022, 8, 209–216. [Google Scholar] [CrossRef] [PubMed]
- Smith, B.; Hermsen, M.; Lesser, E.; Ravichandar, D.; Kremers, W. Developing image analysis pipelines of whole-slide images: Pre- and post-processing. J. Clin. Transl. Sci. 2020, 5, e38. [Google Scholar] [CrossRef]
- Faa, G.; Castagnola, M.; Didaci, L.; Coghe, F.; Scartozzi, M.; Saba, L.; Fraschini, M. The quest for the application of artificial intelligence to whole slide imaging: Unique prospective from new advanced tools. Algorithms 2024, 17, 254. [Google Scholar] [CrossRef]
- Pantanowitz, L.; Sharma, A.; Carter, A.B.; Kurc, T.; Sussman, A.; Saltz, J. Twenty Years of Digital Pathology: An Overview of the Road Travelled, What is on the Horizon, and the Emergence of Vendor-Neutral Archives. J. Pathol. Inform. 2018, 9, 40. [Google Scholar]
- Dunn, B.E.; A Almagro, U.; Choi, H.; Sheth, N.K.; Arnold, J.S.; Recla, D.L.; A Krupinski, E.; Graham, A.R.; Weinstein, R.S. Dynamic-robotic telepathology: Department of Veterans Affairs feasibility study. Hum. Pathol. 1997, 28, 8–12. [Google Scholar] [CrossRef] [PubMed]
- Redlich, J.P.; Feuerhake, F.; Weis, J.; Schaadt, N.S.; Teuber-Hanselmann, S.; Buck, C.; Luttmann, S.; Eberle, A.; Nikolin, S.; Appenzeller, A.; et al. Applications of artificial intelligence in the analysis of histopathology images of gliomas: A review. Npj Imaging 2024, 2, 16. [Google Scholar] [CrossRef]
- Bahadir, C.D.; Omar, M.; Rosenthal, J.; Marchionni, L.; Liechty, B.; Pisapia, D.J.; Sabuncu, M.R. Artificial intelligence applications in histopathology. Nat. Rev. Electr. Eng. 2024, 1, 93–108. [Google Scholar] [CrossRef]
- Go, H. Digital Pathology and Artificial Intelligence Applications in Pathology. Brain Tumor Res. Treat. 2022, 10, 76–82. [Google Scholar] [CrossRef]
- Fraggetta, F.; Yagi, Y.; Garcia-Rojo, M.; Evans, A.J.; Tuthill, J.M.; Baidoshvili, A.; Hartman, D.J.; Fukuoka, J.; Pantanowitz, L. The Importance of eSlide Macro Images for Primary Diagnosis with Whole Slide Imaging. J. Pathol. Inform. 2018, 9, 46. [Google Scholar] [CrossRef] [PubMed]
- Girolami, I.; Pantanowitz, L.; Marletta, S.; Brunelli, M.; Mescoli, C.; Parisi, A.; Barresi, V.; Parwani, A.; Neil, D.; Scarpa, A.; et al. Diagnostic concordance between whole slide imaging and conventional light microscopy in cytopathology: A systematic review. Cancer Cytopathol. 2019, 128, 17–28. [Google Scholar] [CrossRef]
- El-Omari, N.K.T.; Alzaghal, M.H. The role of open big data within the public sector, case study: Jordan. In Proceedings of the 2017 8th International Conference on Information Technology (ICIT) 2017, Amman, Jordan, 17–18 May 2017; pp. 182–186. [Google Scholar]
- Forsch, S.; Klauschen, F.; Hufnagl, P.; Roth, W. Artificial Intelligence in Pathology. Dtsch. Ärzteblatt Int. 2021, 118, 194–204. [Google Scholar] [CrossRef] [PubMed]
- Zughoul, B.; El Omari, N.K.T.; Al Refai, M. Using deep learning methods in detecting the critical success factors on the implementation of cloud ERP. Int. J. Bus. Inf. Syst. 2023, 44, 219–248. [Google Scholar] [CrossRef]
- Jariyapan, P.; Pora, W.; Kasamsumran, N.; Lekawanvijit, S. Digital pathology and artificial intelligence in diagnostic pathology. Malays. J. Pathol. 2025, 47, 3–12. [Google Scholar]
- Ariotta, V.; Lehtonen, O.; Salloum, S.; Micoli, G.; Lavikka, K.; Rantanen, V.; Hynninen, J.; Virtanen, A.; Hautaniemi, S. H&E image analysis pipeline for quantifying morphological features. J. Pathol. Inform. 2023, 14, 100339. [Google Scholar]
- Manuel, C.; Zehnder, P.; Kaya, S.; Sullivan, R.; Hu, F. Impact of color augmentation and tissue type in deep learning for hematoxylin and eosin image super resolution. J. Pathol. Inform. 2022, 13, 100148. [Google Scholar]
- Bankhead, P. Developing image analysis methods for digital pathology. J. Pathol. 2022, 257, 391–402. [Google Scholar] [CrossRef] [PubMed]
- Chicco, D. Ten quick tips for machine learning in computational biology. BioData Min. 2017, 10, 35. [Google Scholar] [CrossRef] [PubMed]
- Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
- Vorontsov, E.; Bozkurt, A.; Casson, A.; Shaikovski, G.; Zelechowski, M.; Severson, K.; Zimmermann, E.; Hall, J.; Tenenholtz, N.; Fusi, N.; et al. A foundation model for clinical-grade computational pathology and rare cancers detection. Nat. Med. 2024, 30, 2924–2935. [Google Scholar] [CrossRef]
- Campanella, G.; Hanna, M.G.; Geneslaw, L.; Miraflor, A.; Silva, V.W.K.; Busam, K.J.; Brogi, E.; Reuter, V.E.; Klimstra, D.S.; Fuchs, T.J. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat. Med. 2019, 25, 1301–1309. [Google Scholar] [CrossRef]
- Bhatt, A.R.; Ganatra, A.; Kotecha, K. Cervical cancer detection in pap smear whole slide images using convNet with transfer learning and progressive resizing. PeerJ Comput. Sci. 2021, 7, e348. [Google Scholar] [CrossRef] [PubMed]
- Sambyal, D.; Sarwar, A. Recent developments in cervical cancer diagnosis using deep learning on whole slide images: An Overview of models, techniques, challenges and future directions. Micron 2023, 173, 103520. [Google Scholar] [CrossRef]
- Omar, M.; Alexanderani, M.K.; Valencia, I.; Loda, M.; Marchionni, L. Applications of digital pathology in cancer: A comprehensive review. Annu. Rev. Cancer Biol. 2024, 8, 245–268. [Google Scholar] [CrossRef]
- Kiran, N.; Sapna, F.; Kiran, F.; Kumar, D.; Raja, F.; Shiwlani, S.; Paladini, A.; Sonam, F.; Bendari, A.; Perkash, R.S.; et al. Digital Pathology: Transforming Diagnosis in the Digital Age. Cureus 2023, 15, e44620. [Google Scholar] [CrossRef]
- Naik, N.; Madani, A.; Esteva, A.; Keskar, N.S.; Press, M.F.; Ruderman, D.; Agus, D.B.; Socher, R. Deep learning-enabled breast cancer hormonal receptor status determination from base-level H&E stains. Nat. Commun. 2020, 11, 5727. [Google Scholar]
- Jain, E.; Patel, A.; Parwani, A.V.; Shafi, S.; Brar, Z.; Sharma, S.; Mohanty, S.K. Whole Slide Imaging Technology and Its Applications: Current and Emerging Perspectives. Int. J. Surg. Pathol. 2024, 32, 433–448. [Google Scholar] [CrossRef]
- Zia, S.; Yildiz-Aktas, I.Z.; Zia, F.; Parwani, A.V. An update on applications of digital pathology: Primary diagnosis; telepathology, education and research. Diagn. Pathol. 2025, 20, 17. [Google Scholar] [CrossRef]
- Ardon, O.; Labasin, M.; Friedlander, M.; Manzo, A.; Corsale, L.; Ntiamoah, P.; Wright, J.; Elenitoba-Johnson, K.; Reuter, V.E.; Hameed, M.R.; et al. Quality Management System in Clinical Digital Pathology Operations at a Tertiary Cancer Center. Lab. Investig. 2023, 103, 100246. [Google Scholar] [CrossRef]
- Schuffler, P.J.; Geneslaw, L.; Yarlagadda, D.V.K.; Hanna, M.G.; Samboy, J.; Stamelos, E.; Vanderbilt, C.; Philip, J.; Jean, M.-H.; Corsale, L.; et al. Integrated digital pathology at scale: A solution for clinical diagnostics and cancer research at a large academic medical center. J. Am. Med. Inform. Assoc. 2021, 28, 1874–1884. [Google Scholar] [CrossRef] [PubMed]
- Iwuajoku, V.; Ekici, K.; Haas, A.; Khan, M.Z.; Kazemi, A.; Kasajima, A.; Delbridge, C.; Muckenhuber, A.; Schmoeckel, E.; Stögbauer, F.; et al. An equivalency and efficiency study for one year digital pathology for clinical routine diagnostics in an accredited tertiary academic center. Virchows Arch. 2025, 487, 3–12. [Google Scholar] [CrossRef] [PubMed]
- Matias-Guiu, X.; Temprana-Salvador, J.; Lopez, P.G.; Kammerer-Jacquet, S.-F.; Rioux-Leclercq, N.; Clark, D.; Schürch, C.M.; Fend, F.; Mattern, S.; Snead, D.; et al. Implementing digital pathology: Qualitative and financial insights from eight leading European laboratories. Virchows Arch. 2025, 487, 815–826. [Google Scholar] [CrossRef] [PubMed]
- Hutchinson, J.C.; Picarsic, J.; McGenity, C.; Treanor, D.; Williams, B.; Sebire, N.J. Whole Slide Imaging, Artificial Intelligence, and Machine Learning in Pediatric and Perinatal Pathology: Current Status and Future Directions. Pediatr. Dev. Pathol. 2025, 28, 91–98. [Google Scholar]
- Evans, A.J.; Brown, R.W.; Bui, M.M.; Chlipala, E.A.; Lacchetti, C.; Milner, D.A.; Pantanowitz, L.; Parwani, A.V.; Reid, K.; Riben, M.W.; et al. Validating Whole Slide Imaging Systems for Diagnostic Purposes in Pathology. Arch. Pathol. Lab. Med. 2022, 146, 440–450. [Google Scholar]
- Mathew, T.; Niyas, S.; Johnpaul, C.I.; Kini, J.R.; Rajan, J. A novel deep classifier framework for automated molecular subtyping of breast carcinoma using immunohistochemistry image analysis. Biomed. Signal Process. Control. 2022, 76, 103657. [Google Scholar] [CrossRef]
- Niazi, M.K.K.; Yazgan, E.; Tavolara, T.E.; Li, W.; Lee, C.T.; Parwani, A.; Gurcan, M.N. Semantic segmentation to identify bladder layers from H&E Images. Diagn. Pathol. 2020, 15, 87. [Google Scholar] [CrossRef]
- van Diest, P.J.; Flach, R.N.; van Dooijeweert, C.; Makineli, S.; Breimer, G.E.; Stathonikos, N.; Pham, P.; Nguyen, T.Q.; Veta, M. Pros and cons of artificial intelligence implementation in diagnostic pathology. Histopathology 2024, 84, 924–934. [Google Scholar] [CrossRef]
- Raciti, P.; Sue, J.; Retamero, J.A.; Ceballos, R.; Godrich, R.; Kunz, J.D.; Casson, A.; Thiagarajan, D.; Ebrahimzadeh, Z.; Viret, J.; et al. Clinical Validation of Artificial Intelligence-Augmented Pathology Diagnosis Demonstrates Significant Gains in Diagnostic Accuracy in Prostate Cancer Detection. Arch. Pathol. Lab. Med. 2023, 147, 1178–1185. [Google Scholar]
- Marletta, S.; Eccher, A.; Martelli, F.M.; Santonicco, N.; Girolami, I.; Scarpa, A.; Pagni, F.; L’imperio, V.; Pantanowitz, L.; Gobbo, S.; et al. Artificial intelligence-based algorithms for the diagnosis of prostate cancer: A systematic review. Am. J. Clin. Pathol. 2024, 161, 526–534. [Google Scholar]
- Kurian, N.C.; Gann, P.H.; Kumar, N.; McGregor, S.M.; Verma, R.; Sethi, A. Deep Learning Predicts Subtype Heterogeneity and Outcomes in Luminal A Breast Cancer Using Routinely Stained Whole-Slide Images. Cancer Res. Commun. 2025, 5, 157–166. [Google Scholar] [CrossRef]
- Kusta, O.; Bearman, M.; Gorur, R.; Risør, T.; Brodersen, J.B.; Hoeyer, K. Speed, accuracy, and efficiency: The promises and practices of digitization in pathology. Soc. Sci. Med. 2024, 345, 116650. [Google Scholar] [CrossRef] [PubMed]
- Bossard, C.; Salhi, Y.; Khammari, A.; Brousseau, M.; Le Corre, Y.; Salhi, S.; Quéreux, G.; Chetritt, J.J. Risk score stratification of cutaneous melanoma patients based on whole slide images analysis by deep learning. J. Eur. Acad. Dermatol. Venereol. 2025, 39, 1500–1509. [Google Scholar] [CrossRef]
- Sun, C.; Li, B.; Wei, G.; Qiu, W.; Li, D.; Li, X.; Liu, X.; Wei, W.; Wang, S.; Liu, Z.; et al. Deep learning with whole slide images can improve the prognostic risk stratification with stage III colorectal cancer. Comput. Methods Programs Biomed. 2022, 221, 106914. [Google Scholar] [CrossRef] [PubMed]
- Wu, W.Q.; Wang, C.F.; Han, S.T.; Pan, C.F. Recent advances in imaging devices: Image sensors and neuromorphic vision sensors. Rare Met. 2024, 43, 5487–5515. [Google Scholar] [CrossRef]
- Saltz, J.; Gupta, R.; Hou, L.; Kurc, T.; Singh, P.; Nguyen, V.; Samaras, D.; Shroyer, K.R.; Zhao, T.; Batiste, R.; et al. Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images. Cell Rep. 2018, 23, 181–193.e7. [Google Scholar] [CrossRef]
- Patel, A.; Balis, U.G.; Cheng, J.; Li, Z.; Lujan, G.; McClintock, D.S.; Pantanowitz, L.; Parwani, A. Contemporary Whole Slide Imaging Devices and Their Applications within the Modern Pathology Department: A Selected Hardware Review. J. Pathol. Inform. 2021, 12, 50. [Google Scholar]
- Lotter, W.; Hassett, M.J.; Schultz, N.; Kehl, K.L.; Van Allen, E.M.; Cerami, E. Artificial Intelligence in Oncology: Current Landscape, Challenges, and Future Directions. Cancer Discov. 2024, 14, 711–726. [Google Scholar] [CrossRef]
- Baxi, V.; Edwards, R.; Montalto, M.; Saha, S. Digital pathology and artificial intelligence in translational medicine and clinical practice. Mod. Pathol. 2022, 35, 23–32. [Google Scholar]
- Zhang, D.Y.; Venkat, A.; Khasawneh, H.; Sali, R.; Zhang, V.; Pei, Z. Implementation of Digital Pathology and Artificial Intelligence in Routine Pathology Practice. Lab. Invest. 2024, 104, 102111. [Google Scholar] [CrossRef] [PubMed]
- Rodriguez, J.P.M.; Rodriguez, R.; Silva, V.W.K.; Kitamura, F.C.; Corradi, G.C.A.; de Marchi, A.C.B.; Rieder, R. Artificial intelligence as a tool for diagnosis in digital pathology whole slide images: A systematic review. J. Pathol. Inform. 2022, 13, 100138. [Google Scholar] [CrossRef] [PubMed]
- Moxley-Wyles, B.; Colling, R.; Verrill, C. Artificial intelligence in pathology: An overview. Diagn. Histopathol. 2020, 26, 513–520. [Google Scholar] [CrossRef]
- Yang, X.; Wu, J.; Chen, X. Application of Artificial Intelligence to the Diagnosis and Therapy of Nasopharyngeal Carcinoma. J. Clin. Med. 2023, 12, 3077. [Google Scholar] [CrossRef]
- Tizhoosh, H.R.; Pantanowitz, L. Artificial Intelligence and Digital Pathology: Challenges and Opportunities. J. Pathol. Inform. 2018, 9, 38. [Google Scholar]
- Rakha, E.A.; Toss, M.; Shiino, S.; Gamble, P.; Jaroensri, R.; Mermel, C.H.; Chen, P.-H.C. Current and future applications of artificial intelligence in pathology: A clinical perspective. J. Clin. Pathol. 2021, 74, 409–414. [Google Scholar]
- Shi, J.; Shu, T.; Wu, K.; Jiang, Z.; Zheng, L.; Wang, W.; Wu, H.; Zheng, Y. Masked hypergraph learning for weakly supervised histopathology whole slide image classification. Comput. Methods Programs Biomed. 2024, 253, 108237. [Google Scholar] [CrossRef] [PubMed]
- Tafavvoghi, M.; Bongo, L.A.; Shvetsov, N.; Busund, L.-T.R.; Møllersen, K. Publicly available datasets of breast histopathology H&E whole-slide images: A scoping review. J. Pathol. Inform. 2024, 15, 100363. [Google Scholar]
- Song, T.; Zhang, Q.; Cai, G.; Cai, M.; Qian, J. Development of machine learning and artificial intelligence in toxic pathology. Front. Comput. Intell. Syst. 2024, 6, 137–141. [Google Scholar] [CrossRef]
- Ettalibi, A.; Elouadi, A.; Mansour, A. AI and Computer Vision-based Real-time Quality Control: A Review of Industrial Applications. Procedia Comput. Sci. 2024, 231, 212–220. [Google Scholar]
- Stacke, K.; Eilertsen, G.; Unger, J.; Lundstrom, C. Measuring Domain Shift for Deep Learning in Histopathology. IEEE J. Biomed. Health Inform. 2021, 25, 325–336. [Google Scholar] [CrossRef] [PubMed]
- Deng, R.; Cui, C.; Liu, Q.; Yao, T.; Remedios, L.W.; Bao, S.; Landman, B.A.; Wheless, L.E.; Coburn, L.A.; Wilson, K.T.; et al. Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging. IST Int. Symp. Electron. Imaging 2025, 37, COIMG-132. [Google Scholar]
- Hoque, M.Z.; Keskinarkaus, A.; Nyberg, P.; Seppänen, T. Stain normalization methods for histopathology image analysis: A comprehensive review and experimental comparison. Inf. Fusion 2023, 102, 101997. [Google Scholar] [CrossRef]
- Ohnishi, C.; Ohnishi, T.; Ibrahim, K.; Ntiamoah, P.; Ross, D.; Yamaguchi, M.; Yagi, Y. Color Standardization and Stain Intensity Calibration for Whole Slide Image-Based Immunohistochemistry Assessment. Microsc. Microanal. 2024, 30, 118–132. [Google Scholar]
- Inoue, T.; Yagi, Y. Color standardization and optimization in whole slide imaging. Int. J. Clin. Diagn. Pathol. 2020, 4. [Google Scholar]
- Cay, N.; Mendi, B.A.R.; Batur, H.; Erdogan, F. Discrimination of lipoma from atypical lipomatous tumor/well-differentiated liposarcoma using magnetic resonance imaging radiomics combined with machine learning. Jpn. J. Radiol. 2022, 40, 951–960. [Google Scholar]
- Char, D.S.; Shah, N.H.; Magnus, D. Implementing Machine Learning in Health Care—Addressing Ethical Challenges. New Engl. J. Med. 2018, 378, 981–983. [Google Scholar] [CrossRef]
- Jin, R.; Xu, Z.; Zhong, Y.; Yao, Q.; Qi, D.; Zhou, S.K.; Li, X. FairMedFM: Fairness Benchmarking for Medical Imaging Foundation Models. Adv. Neural Inf. Process. Syst. 2024, 37, 111318–111357. [Google Scholar]
- Thurzo, A. Provable AI Ethics and Explainability in Medical and Educational AI Agents: Trustworthy Ethical Firewall. Electronics 2025, 14, 1294. [Google Scholar] [CrossRef]
- Ullah, E.; Parwani, A.; Baig, M.M.; Singh, R. Challenges and barriers of using large language models (LLM) such as ChatGPT for diagnostic medicine with a focus on digital pathology—A recent scoping review. Diagn. Pathol. 2024, 19, 43. [Google Scholar]
- Dolezal, J.M.; Kochanny, S.; Dyer, E.; Ramesh, S.; Srisuwananukorn, A.; Sacco, M.; Howard, F.M.; Li, A.; Mohan, P.; Pearson, A.T. Slideflow: Deep learning for digital histopathology with real-time whole-slide visualization. BMC Bioinform. 2024, 25, 134. [Google Scholar]
- Rizzo, P.C.; Girolami, I.; Marletta, S.; Pantanowitz, L.; Antonini, P.; Brunelli, M.; Santonicco, N.; Vacca, P.; Tumino, N.; Moretta, L.; et al. Technical and Diagnostic Issues in Whole Slide Imaging Published Validation Studies. Front. Oncol. 2022, 12, 918580. [Google Scholar] [CrossRef] [PubMed]
- Coulter, C.; McKay, F.; Hallowell, N.; Browning, L.; Colling, R.; Macklin, P.; Sorell, T.; Aslam, M.; Bryson, G.; Treanor, D.; et al. Understanding the ethical and legal considerations of Digital Pathology. J. Pathol. Clin. Res. 2022, 8, 101–115. [Google Scholar] [CrossRef]
- U.S. Food and Drug Administration (FDA). Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. 2024. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device (accessed on 11 October 2025).
- Food and Drug Administration. FDA News Release—FDA Authorizes Marketing of First Whole Slide Imaging System for Digital Pathology; Food and Drug Administration: Silver Spring, MD, USA, 2017.
- Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU. Off. J. Eur. Union 2017. Available online: https://eur-lex.europa.eu/eli/reg/2017/746 (accessed on 24 December 2025).
- Salto-Tellez, M.; Maxwell, P.; Hamilton, P. Artificial intelligence: The third revolution in pathology. Histopathology 2019, 74, 372–376. [Google Scholar] [CrossRef] [PubMed]
- Band, S.S.; Yarahmadi, A.; Hsu, C.C.; Biyari, M.; Sookhak, M.; Ameri, R.; Dehzangi, I.; Chronopoulos, A.T.; Liang, H.W. Application of explainable artificial intelligence in medical health: A systematic review of interpretability methods. Inform. Med. Unlocked 2023, 40, 101286. [Google Scholar] [CrossRef]
- Hanna, M.G.; Pantanowitz, L.; Dash, R.; Harrison, J.H.; Deebajah, M.; Pantanowitz, J.; Rashidi, H.H. Future of Artificial Intelligence-Machine Learning Trends in Pathology and Medicine. Mod. Pathol. 2025, 38, 100705. [Google Scholar]
- Shafi, S.; Parwani, A.V. Artificial intelligence in diagnostic pathology. Diagn. Pathol. 2023, 18, 109. [Google Scholar] [CrossRef] [PubMed]
- Marletta, S.; Pantanowitz, L.; Santonicco, N.; Caputo, A.; Bragantini, E.; Brunelli, M.; Girolami, I.; Eccher, A. Application of Digital Imaging and Artificial Intelligence to Pathology of the Placenta. Pediatr. Dev. Pathol. 2023, 26, 5–12. [Google Scholar] [CrossRef] [PubMed]
- Clunie, D.A. DICOM Format and Protocol Standardization-A Core Requirement for Digital Pathology Success. Toxicol. Pathol. 2021, 49, 738–749. [Google Scholar]