Review

From Slide to Insight: The Emerging Alliance of Digital Pathology and AI in Melanoma Diagnostics

1 Department of Medical and Surgical Sciences (DIMEC), Alma Mater Studiorum, University of Bologna, 40138 Bologna, Italy
2 Oncologic Dermatology Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
3 Dermatology Unit, Istituto Nazionale di Riposo e Cura per Anziani, INRCA-IRCCS Hospital, 60124 Ancona, Italy
4 Dermatology & Venereology Department, University Hospital Center “Mother Theresa”, 11942 Tirana, Albania
5 Plastic Surgery, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
6 Pathology Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
* Author to whom correspondence should be addressed.
Cancers 2025, 17(22), 3696; https://doi.org/10.3390/cancers17223696
Submission received: 29 October 2025 / Revised: 16 November 2025 / Accepted: 17 November 2025 / Published: 18 November 2025
(This article belongs to the Special Issue Novel Research on the Diagnosis and Treatment of Melanoma)

Simple Summary

Cutaneous melanoma is a potentially lethal skin cancer that can be difficult to diagnose accurately, especially in early or ambiguous cases. Traditional histopathology relies on expert evaluation of tissue slides, but this process is subjective and prone to variability. With the rise of digital pathology and artificial intelligence (AI), there is growing interest in using computational tools to assist melanoma diagnosis. This review explores how AI—particularly deep learning and interpretable models—can analyze digital slides, extract diagnostic features, and even predict genetic mutations from routine images. By summarizing recent advances across classification, spatial modeling, and explainable AI, this work highlights how these tools can improve diagnostic accuracy, reduce workload, and support decision-making. Our goal is to inform researchers, clinicians, and pathologists of the current state of AI-assisted melanoma diagnostics and guide future studies toward more robust, clinically integrated solutions.

Abstract

Background: Cutaneous melanoma (CM) poses significant diagnostic challenges due to its biological heterogeneity and the subjective interpretation of histopathologic criteria. While early and accurate diagnosis remains critical for patient outcomes, conventional pathology is limited by interobserver variability and diagnostic ambiguity, especially in borderline lesions. Objective: This narrative review explores the integration of digital pathology (DP) and artificial intelligence (AI)—including deep learning (DL), machine learning (ML), and interpretable models—into the histopathologic workflow for CM diagnosis. Methods: We systematically searched PubMed, Scopus, and Web of Science (2013–2025) for studies using whole slide imaging (WSI) and AI to assist melanoma diagnosis. We categorized findings across five domains: WSI-based classification models, feature extraction (e.g., mitoses, ulceration), spatial modeling and TIL analysis, molecular prediction (e.g., BRAF mutation), and interpretable pipelines based on nuclei morphology. Results: We included 87 studies with diverse AI methodologies. Convolutional neural networks (CNNs) achieved diagnostic accuracy comparable to expert dermatopathologists. U-Net and Mask R-CNN models enabled robust detection of critical histologic features, while nuclei-level analyses offered explainable classification strategies. Spatial and morphometric modeling allowed quantification of tumor–immune interactions, and select models inferred molecular alterations directly from H&E slides. However, generalizability remains limited due to small, homogeneous datasets and lack of external validation. Conclusions: AI-enhanced digital pathology holds transformative potential in CM diagnosis, offering accuracy, reproducibility, and interpretability. Yet, clinical integration requires multicentric validation, standardized protocols, and attention to workflow, ethical, and medico-legal challenges. Future developments, including multimodal AI and integration into molecular tumor boards, may redefine diagnostic precision in melanoma.

1. Introduction

Cutaneous melanoma (CM) is one of the most aggressive and biologically complex skin malignancies, accounting for a disproportionate number of skin cancer-related deaths despite representing a minority of total cases [1,2]. The global incidence of CM has been steadily increasing over the past decades, particularly among fair-skinned populations, with substantial variation in clinical behavior depending on histological subtype, anatomic location, and stage at diagnosis [3,4]. Early detection remains critical, as prognosis dramatically improves when melanoma is identified and treated at thin, localized stages [5,6,7]. However, accurate histopathological diagnosis—especially in early or atypical lesions—remains a major challenge [8]. The gold standard for melanoma diagnosis is the microscopic examination of hematoxylin and eosin (H&E)-stained tissue by dermatopathologists [9]. This assessment requires expert integration of multiple criteria, including cytological atypia, architectural disorder, mitotic rate, and the nature of tumor–stromal interactions [10,11,12,13]. However, the complexity of melanocytic lesions and the inherently subjective nature of histologic interpretation contribute to significant inter- and intra-observer variability [14]. Studies have reported diagnostic discordance rates ranging from 10% to 25% among expert pathologists, especially in difficult categories such as spitzoid lesions, nevoid melanoma, and severely dysplastic nevi [15,16]. The transition from glass slides to high-resolution whole slide images (WSIs) enables computational approaches to perform detailed quantitative assessments of tissue architecture and cellular morphology [17,18]. In parallel, advances in machine learning (ML)—particularly deep learning models such as convolutional neural networks (CNNs) and U-Net architectures—have demonstrated strong performance in cancer diagnostics, including melanoma [19,20,21,22]. These artificial intelligence (AI) systems offer multiple advantages: they can standardize diagnostic evaluation, provide real-time decision support, extract novel spatial and morphometric biomarkers, and facilitate scalable pathology services in under-resourced settings [23,24,25]. Moreover, AI is increasingly being explored not only for primary diagnosis but also for predicting genomic alterations (e.g., BRAF mutations), tumor-infiltrating lymphocytes (TIL) burden, and even treatment response [26,27,28]. Despite this promise, real-world adoption of AI in dermatopathology remains limited. Challenges include model interpretability, dataset variability, regulatory approval, integration into clinical workflows, and acceptance by pathologists. The field must also grapple with ethical and medico-legal considerations around AI-assisted diagnostics [29,30,31]. This review aims to provide a comprehensive and critical synthesis of the current role of digital pathology (DP) and AI in the histopathologic diagnosis of cutaneous melanoma. We discuss the evolution of AI applications from simple classification tasks to advanced spatial modeling and prognostic prediction. Emphasis is placed on recent developments in explainable machine learning, such as nuclei-level morphometric pipelines, and the translational implications of integrating AI into melanoma care. By highlighting current capabilities, limitations, and future directions, we aim to contextualize the growing synergy between computational tools and expert pathology in the era of precision dermatology.

2. Materials and Methods

This study was conducted as a structured narrative review aiming to synthesize and critically appraise the current literature on the integration of DP and AI techniques in the histopathological diagnosis of cutaneous melanoma. Emphasis was placed on studies employing WSI, machine learning (ML), and deep learning (DL) methods—including both black-box and interpretable models—targeting melanoma classification, feature extraction, molecular prediction, and interpretable AI approaches, including nuclei-level models.
A comprehensive literature search was performed across three major databases:
  • PubMed/MEDLINE
  • Scopus
  • Web of Science
The search covered the period from January 2013 to August 2025, using combinations of controlled vocabulary (MeSH terms) and free-text terms. The following Boolean search string was applied: (“melanoma” OR “cutaneous melanoma”) AND (“digital pathology” OR “whole slide imaging” OR “WSI”) AND (“artificial intelligence” OR “machine learning” OR “deep learning” OR “convolutional neural networks” OR “AI” OR “neural network” OR “U-Net” OR “pathomics” OR “computational pathology” OR “morphology” OR “spatial organization”). Studies were included if they met the following criteria: (1) original research applying AI or machine learning techniques to the histopathological diagnosis of cutaneous melanoma using digital pathology inputs; (2) used whole slide images (WSIs), image tiles, or nuclei-level segmentation derived from H&E-stained tissue; (3) reported diagnostic or predictive performance metrics (e.g., accuracy, AUC, sensitivity, specificity, F1 score); (4) were published in English in peer-reviewed journals. Reviews, editorials, case reports, and studies lacking technical or clinical validation were excluded. Following deduplication, titles and abstracts were screened for eligibility. Full texts of potentially relevant articles were then assessed independently by three reviewers (FV, GV and AG). Discrepancies were resolved by consensus. Data were extracted on study design, dataset characteristics, AI methodology (e.g., CNN, U-Net, MIL, LDA), diagnostic task (e.g., classification, segmentation, mutation prediction), performance metrics, model explainability, and external validation. Due to methodological heterogeneity, a meta-analysis was not feasible. Instead, findings were synthesized qualitatively across five conceptual domains: (1) WSI-based AI classification of melanoma; (2) feature extraction (mitoses, ulceration, Breslow thickness); (3) spatial modeling and TIL analysis; (4) molecular prediction using AI; (5) interpretable and nuclei-level AI approaches. Emphasis was placed on clinical relevance, reproducibility, generalizability, and translational potential. Limitations including dataset bias, lack of standardization, and integration challenges were also noted. Where applicable, recent systematic reviews and meta-analyses were used to support pooled sensitivity/specificity data. No ethics approval was required for this literature-based study (Figure 1).
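For transparency, a search of this kind can also be issued programmatically; the following is a minimal sketch, assuming Biopython's Entrez interface to NCBI E-utilities with a placeholder contact address and retrieval limit, rather than a record of how the search was actually executed.

```python
from Bio import Entrez  # Biopython client for NCBI E-utilities

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

# Boolean string reported in the Methods, restricted to the January 2013-August 2025 window.
query = (
    '("melanoma" OR "cutaneous melanoma") AND '
    '("digital pathology" OR "whole slide imaging" OR "WSI") AND '
    '("artificial intelligence" OR "machine learning" OR "deep learning" OR '
    '"convolutional neural networks" OR "AI" OR "neural network" OR "U-Net" OR '
    '"pathomics" OR "computational pathology" OR "morphology" OR "spatial organization")'
)

handle = Entrez.esearch(
    db="pubmed", term=query, datetype="pdat",
    mindate="2013/01", maxdate="2025/08", retmax=5000,
)
record = Entrez.read(handle)
handle.close()
print(f'{record["Count"]} records matched; {len(record["IdList"])} PMIDs retrieved')
```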

3. Results

A total of 87 studies published between January 2013 and August 2025 met the inclusion criteria and were analyzed across five key domains: WSI-based classification models, histologic feature extraction, spatial modeling including TIL quantification, molecular prediction, and interpretable AI approaches focused on nuclei-level analysis. Collectively, these studies illustrate a rapidly evolving field at the intersection of computational pathology and melanoma diagnostics. The integration of DP and AI in the histopathological diagnosis of cutaneous melanoma has advanced rapidly, with a particular emphasis on WSI, ML, and DL approaches. WSI enables high-resolution digitization of entire histopathology slides, facilitating computational analysis and remote review. DL, especially CNNs, has demonstrated high diagnostic accuracy for melanoma classification, often matching or exceeding human pathologists in sensitivity and specificity, though variability in image acquisition and annotation remains a challenge [17,31,32,33,34,35,36]. Studies employing both “black-box” and interpretable models have shown that automated algorithms can reliably distinguish melanoma from benign melanocytic lesions, with pooled sensitivity and specificity approaching 90% and 92%, respectively, in meta-analyses [17,33,35]. Feature extraction using DL models has enabled identification of histopathological patterns and prognostic markers, while emerging work in molecular prediction leverages image-based surrogates for genetic and mutational status, though these applications remain investigational [34,35,37,38].

3.1. WSI-Based AI Classification of Melanoma

WSI has enabled large-scale digitalization of histopathological slides, providing a foundation for the application of AI-based classification systems [39]. In particular, WSI-based AI for melanoma classification has advanced rapidly [34,35,37,38], with multiple systematic reviews and meta-analyses demonstrating high diagnostic accuracy. Recent meta-analyses report pooled sensitivities and specificities for automated image analysis algorithms applied to melanoma histology in the range of 89–92% and 90–94%, respectively, with area under the curve (AUC) values up to 0.96–0.98 for hybrid and deep learning models [33,40,41]. Hekler et al. developed a deep CNN that classified melanoma vs. nevus with an AUC > 0.94 [42]. Large reader studies confirm that CNNs outperform the majority of dermatologists in both sensitivity and specificity, with CNNs achieving sensitivities of 82–87% and specificities of 77–86%, compared to dermatologists' sensitivities of 67–89% and specificities of 60–75% [22,43]. For histopathological melanoma diagnosis, CNNs have demonstrated higher accuracy than panels of expert pathologists, with CNNs achieving 68% accuracy versus 59% for pathologists in challenging image-based tasks [44]. However, real-world reproducibility and accuracy among pathologists remain variable, with consensus panel accuracy for invasive melanoma at 72–82% and lower for early-stage or ambiguous lesions [45]. Attention-based pooling strategies and multi-instance learning (MIL) frameworks further improved diagnostic localization, enabling the models to focus on the most histologically relevant regions [18,46]. These results suggest that AI models, particularly CNNs and hybrid approaches, can match or exceed the diagnostic performance of clinicians in controlled settings.

WSI-based AI models have shown particular promise in distinguishing melanoma from benign melanocytic lesions, with some studies reporting superior or at least equivalent performance compared to experienced dermatopathologists [33,36,47]. However, most studies are limited by small, homogeneous datasets, lack of external validation, and artificial test settings that do not fully represent real-world clinical diversity or workflow [17,32,47]. Recent multicenter initiatives have begun to address this issue by applying federated learning strategies and cross-institutional datasets that preserve patient privacy while enhancing model generalizability. Consensus is emerging that external validation should include: (i) at least one independent dataset from a different institution; (ii) inclusion of rare subtypes and variable histologic patterns; and (iii) evaluation across diverse patient populations, including different skin phototypes and age groups. Studies failing to meet these criteria may risk overfitting and limited translational relevance.
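To make the attention-based MIL pooling mentioned above concrete, the following is a minimal PyTorch sketch of a slide-level aggregation head over pre-computed tile embeddings; the embedding dimension, the upstream tile encoder, and the toy inputs are illustrative assumptions, not a reproduction of any cited model.

```python
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    """Aggregates tile-level embeddings from one WSI into a slide-level
    melanoma-vs-nevus score via learned attention weights."""
    def __init__(self, emb_dim: int = 512, attn_dim: int = 128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(emb_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),          # one attention logit per tile
        )
        self.classifier = nn.Linear(emb_dim, 1)  # slide-level logit

    def forward(self, tile_embeddings: torch.Tensor):
        # tile_embeddings: (n_tiles, emb_dim) for a single slide
        attn_logits = self.attention(tile_embeddings)          # (n_tiles, 1)
        attn = torch.softmax(attn_logits, dim=0)               # weights sum to 1 over tiles
        slide_embedding = (attn * tile_embeddings).sum(dim=0)  # (emb_dim,)
        return self.classifier(slide_embedding), attn.squeeze(-1)

# Toy usage: 200 tiles encoded by a hypothetical upstream CNN into 512-d features.
tiles = torch.randn(200, 512)
logit, weights = AttentionMILHead()(tiles)
prob_melanoma = torch.sigmoid(logit)
```

Because the attention weights sum to one across tiles, they can be projected back onto the slide as a heatmap, which is the mechanism behind the improved diagnostic localization reported for these frameworks.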
Variability in image acquisition, pre-processing, and annotation standards further limits generalizability and reproducibility [17,32]. For instance, Maron et al. demonstrated that CNN performance dropped substantially when exposed to out-of-distribution (OOD) data, such as images with artificial corruptions (e.g., blur, noise, brightness changes) or minor perturbations (e.g., small rotations, zooms). In their benchmark, the mean accuracy of four CNN architectures decreased from 85–88% on unmodified images to as low as 65–70% on corrupted or perturbed images, representing a performance drop of up to 23 percentage points due to image quality variability [48]. Similarly, Maron et al. found that even minor, clinically irrelevant changes in image acquisition (e.g., different angles, lighting, or zoom) led to inconsistent predictions, with CNNs showing “brittleness”—the probability of a model changing its diagnosis for the same lesion across different images ranged from 10% to 30% [49]. Schmitt et al. quantified the impact of hidden batch effects in digital pathology: CNNs could learn non-biological variables such as slide origin or scanner type, achieving up to 100% accuracy in distinguishing slides by origin, which can confound diagnostic predictions and reduce generalizability [50]. Cho et al. further showed that CNNs trained on high-quality, standardized images performed well, but their accuracy dropped markedly on unstandardized, out-of-focus, or poorly lit clinical photographs, with performance reductions of 10–20 percentage points in AUC or sensitivity [51]. These findings underscore that CNN diagnostic accuracy for melanoma can decrease by 10–23 percentage points or more when exposed to variability in image quality, acquisition parameters, or hidden batch effects, highlighting the need for robust model development and diverse, well-curated datasets [48,49,50,51].

Despite these limitations, AI-assisted WSI analysis is recognized as a valuable adjunct to pathologists, improving workflow efficiency and providing a reliable second opinion, especially for less experienced clinicians. However, current consensus is that these tools should not replace expert histopathological assessment but rather serve as decision support, pending further validation in diverse, prospective clinical cohorts and standardization of methodologies [17,32,33]. Key challenges remain in model explainability, integration into clinical workflows, and ensuring robust performance across populations and melanoma subtypes.

Commonly reported limitations and biases in studies using WSI for melanoma classification include interobserver variability and reference-standard bias. There is substantial discordance among pathologists in classifying melanocytic lesions, with reported rates up to 25%, which complicates the establishment of a reliable ground truth for AI training and validation. This variability can introduce bias in both the development and assessment of WSI-based AI models [52]. Moreover, WSI-based interpretation is less accurate for certain lesion classes, especially intermediate or ambiguous lesions (e.g., class III lesions), leading to higher rates of discordance and potential misclassification between benign and malignant categories [53]. A further critical limitation is the lack of sufficient, high-quality annotated datasets for rare or diagnostically challenging melanoma subtypes, such as desmoplastic melanoma, Spitzoid lesions, and other tumors within the biologic gray zone.
These subtypes are both uncommon and marked by substantial interobserver variability, making consistent ground truth labeling difficult. As a result, most existing AI models are trained predominantly on common melanocytic lesions and may perform suboptimally on rare or ambiguous cases—precisely those that most require diagnostic support. This underrepresentation substantially limits robustness, external validity, and real-world applicability, highlighting the urgent need for larger, multicenter, pathologist-curated datasets that adequately capture the full histopathological diversity of melanoma. Results are displayed and integrated in Table 1.
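Robustness of the kind probed by the corruption benchmarks discussed above can be checked with a few lines of code; the sketch below assumes a trained tile classifier and a held-out data loader, and the corruption parameters are arbitrary illustrative values.

```python
import torch
import torchvision.transforms as T

# Corruptions of the kind used in robustness benchmarks: blur, brightness shift, additive noise.
corruptions = {
    "blur":       T.GaussianBlur(kernel_size=9, sigma=3.0),
    "brightness": T.ColorJitter(brightness=0.5),
    "noise":      lambda x: (x + 0.1 * torch.randn_like(x)).clamp(0.0, 1.0),
}

@torch.no_grad()
def accuracy(model, loader, corruption=None):
    """Top-1 accuracy of a tile classifier, optionally under a corruption transform."""
    model.eval()
    correct = total = 0
    for images, labels in loader:            # images: (B, 3, H, W) scaled to [0, 1]
        if corruption is not None:
            images = corruption(images)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Hypothetical usage with a trained classifier `model` and a held-out DataLoader `loader`:
# baseline = accuracy(model, loader)
# for name, corrupt in corruptions.items():
#     print(f"{name}: accuracy drop = {baseline - accuracy(model, loader, corrupt):.3f}")
```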

3.2. Feature Extraction: Mitoses, Ulceration, and Tumor Thickness

Several studies used semantic segmentation and object detection techniques to extract key histopathological features such as mitotic figures, ulceration, and Breslow thickness—parameters critical for melanoma staging and prognosis [54,55]. U-Net and Mask R-CNN architectures were commonly applied, demonstrating strong performance in identifying mitoses, delineating the epidermal–dermal junction, and quantifying ulcerated regions [56]. Automated Breslow thickness estimation showed excellent correlation with pathologist assessments in multiple studies, suggesting its potential utility in quality control and standardization [54,57]. These features are critical for diagnosis and prognostication, as emphasized in the clinical literature [58]. Automated detection of mitotic figures remains challenging due to their rarity and morphological variability. Deep learning models, particularly convolutional neural networks (CNNs), have shown promise in identifying mitotic figures, but performance is still inferior to expert pathologists in complex cases. Studies highlight the need for larger annotated datasets and improved interpretability to enhance reliability [34,35]. Feature extraction pipelines often use nuclei segmentation and morphological analysis to approximate mitotic activity, but validation against manual counts is essential [34,59]. AI-based approaches for ulceration detection typically rely on segmentation algorithms to delineate the epidermal surface and identify areas of tissue disruption. While some models achieve high sensitivity and specificity, ulceration remains a less frequently targeted feature in current AI literature compared to tumor thickness and mitoses. Integration of clinical metadata and multimodal imaging may improve accuracy [34,35,60]. Automated measurement of Breslow thickness is a major focus, given its prognostic significance. Deep learning models can segment tumor boundaries and estimate thickness with high concordance to manual assessment, though challenges persist with poorly demarcated lesions and artifacts [33,34,35]. Recent systematic reviews report pooled sensitivities and specificities above 90% for automated image analysis in melanoma histology, but emphasize heterogeneity and the need for further validation before clinical adoption [33,35]. Overall, AI and digital pathology offer high accuracy and reproducibility for feature extraction in melanoma, but limitations include interobserver variability, data heterogeneity, and lack of standardized protocols. Continued collaboration between pathologists and computer scientists is essential to address these challenges and facilitate clinical integration [31,35,61]. Results are displayed and integrated in Table 1.
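As a purely illustrative example of how segmentation outputs could be turned into a depth estimate, the following sketch assumes a binary invasive-tumor mask and a per-column reference row for the granular layer (both hypothetical upstream U-Net outputs), plus the scanner's micron-per-pixel spacing; it is not the method of any specific study cited here.

```python
import numpy as np

def breslow_depth_microns(tumor_mask: np.ndarray,
                          granular_layer_row: np.ndarray,
                          microns_per_pixel: float) -> float:
    """Estimate a Breslow-like depth from a binary tumor mask.

    tumor_mask: (H, W) boolean array, True where invasive tumor was segmented.
    granular_layer_row: (W,) row index of the granular layer per image column
                        (e.g., derived from an epidermis segmentation).
    microns_per_pixel: vertical pixel spacing from the scanner metadata.
    """
    depths = []
    rows, cols = np.nonzero(tumor_mask)
    for col in np.unique(cols):
        deepest_tumor_row = rows[cols == col].max()          # deepest tumor pixel in this column
        depth_px = deepest_tumor_row - granular_layer_row[col]
        if depth_px > 0:                                      # ignore pixels above the reference layer
            depths.append(depth_px)
    return float(max(depths) * microns_per_pixel) if depths else 0.0

# Toy usage with a synthetic mask and 0.25 um/px spacing; a real pipeline would take
# both the tumor mask and the granular-layer reference from U-Net outputs.
mask = np.zeros((1000, 800), dtype=bool); mask[300:720, 200:500] = True
granular = np.full(800, 280)
print(f"Estimated thickness: {breslow_depth_microns(mask, granular, 0.25):.0f} um")
```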

3.3. Spatial Modeling and Tumor-Infiltrating Lymphocyte (TIL) Analysis

The integration of AI with spatial histopathology enables quantification of tumor architecture and the immune microenvironment [59]. Several studies leveraged deep learning to detect and spatially map TILs across melanoma sections with improved reproducibility, objectivity, and prognostic value compared to traditional manual assessment. Manual TIL scoring, typically performed by pathologists using H&E-stained slides, suffers from significant interobserver variability and limited consensus on grading systems, which restricts its prognostic utility in melanoma [62,63,64,65]. Models using CNNs and graph-based representations were able to identify immune cell clusters and their proximity to tumor nests—features increasingly recognized as predictive of immunotherapy response. Moore et al. demonstrated that AI-derived TIL spatial patterns correlated with transcriptomic immune signatures, while other groups showed associations with progression-free survival and overall survival in treated patients [27,64]. Spatial modeling frameworks, including those integrating histopathology with transcriptomics, enable quantitative characterization of the tumor microenvironment (TME), revealing spatial cellular architectures and immune infiltration patterns that are not apparent from molecular data alone. These spatial features can distinguish microenvironment subtypes and predict patient prognosis, enhancing the interpretability and clinical relevance of computational histopathology [66]. Additionally, machine learning models leveraging nuclei morphology and spatial organization have shown promise in automated melanoma detection, providing interpretable results that align with established histopathological criteria and supporting clinical decision-making [59]. Systematic reviews confirm that deep learning and image analysis algorithms achieve high diagnostic accuracy (mean sensitivity and specificity >90%) in melanoma histopathology, though further validation and standardization are needed for clinical integration [17,33,34,35]. Multiplexed immunohistochemistry and digital pathology preserve spatial context and enable in situ single-cell profiling, facilitating detailed study of cell–cell interactions and tissue architecture relevant to immunotherapy response [67]. Emerging 3D histology models may further enhance spatial assessment, though technical limitations remain [68]. Results are displayed and integrated in Table 1.
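As a simple illustration of the spatial metrics such models compute, the sketch below, assuming cell centroids and cell-type labels from an upstream nuclei segmentation and classification step, measures how close each lymphocyte lies to its nearest tumor cell; the pixel spacing and the 20 µm cut-off are arbitrary illustrative values.

```python
import numpy as np
from scipy.spatial import cKDTree

def til_proximity_stats(tumor_xy: np.ndarray, lymphocyte_xy: np.ndarray,
                        microns_per_pixel: float = 0.25) -> dict:
    """Distance from each lymphocyte centroid to its nearest tumor-cell centroid.

    tumor_xy, lymphocyte_xy: (N, 2) centroid coordinates in pixels, typically
    produced by an upstream nuclei segmentation and cell-type classifier.
    """
    tree = cKDTree(tumor_xy)                      # spatial index over tumor-cell centroids
    dist_px, _ = tree.query(lymphocyte_xy, k=1)   # nearest tumor cell for each lymphocyte
    dist_um = dist_px * microns_per_pixel
    return {
        "median_distance_um": float(np.median(dist_um)),
        # fraction of TILs within 20 um of a tumor cell, a crude intratumoral-TIL proxy
        "fraction_within_20um": float(np.mean(dist_um < 20.0)),
    }

# Toy usage with random centroids standing in for segmented nuclei.
rng = np.random.default_rng(0)
stats = til_proximity_stats(rng.uniform(0, 4000, (500, 2)), rng.uniform(0, 4000, (300, 2)))
print(stats)
```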
Table 1. Diagnostic performance of AI models across key histopathological tasks in melanoma.
Task | Model Types | Accuracy (Range) | AUC (Range) | No. of Studies
Melanoma Classification | CNN, MIL | 0.85–0.96 | 0.91–0.98 | 18
Mitosis Detection | U-Net, Mask R-CNN | 0.78–0.88 | 0.83–0.89 | 11
Breslow Thickness Estimation | CNN, Regression | 0.86–0.92 | 0.90–0.95 | 9
Ulceration Detection | Segmentation | 0.78–0.85 | 0.80–0.88 | 7
TIL Analysis | Graph CNN, Spatial Maps | 0.84–0.90 | 0.88–0.93 | 10
Molecular Prediction | CNN, Multimodal Fusion | 0.75–0.82 | 0.80–0.86 | 6
This table synthesizes representative performance ranges reported for six key tasks: melanoma classification, mitosis detection, Breslow thickness estimation, ulceration detection, tumor-infiltrating lymphocyte (TIL) analysis, and molecular prediction. Data were synthesized from 61 studies published between 2013 and 2025. Convolutional neural networks (CNNs), U-Net, Mask R-CNN, and hybrid deep learning approaches were the most frequently used architectures. Performance metrics reflect retrospective, mostly single-institution cohorts.

3.4. Molecular Prediction from Histopathology

Emerging studies have explored the use of AI to predict genomic alterations directly from digitized histologic slides, showing that DL models can infer molecular features such as BRAF mutation status and other actionable alterations from routine hematoxylin and eosin (H&E) slides, although these applications remain investigational and require further validation before clinical integration [69,70]. Multiplexed immunohistochemistry combined with digital pathology enables spatial mapping of the tumor microenvironment, supporting personalized therapy decisions and molecular characterization [67,71]. Coudray et al. first demonstrated genomic prediction across cancers; subsequent melanoma-specific works achieved 75–85% accuracy for BRAF status [72,73,74,75,76]. Morphological surrogates (e.g., pagetoid spread, cytologic atypia, and architectural patterns) were hypothesized as mediating features. More recent models trained on pan-cancer datasets have expanded their scope to predict transcriptomic programs such as immune evasion, angiogenesis, or MAPK pathway activation [77]. These findings support the growing interest in “molecular histopathology”—the use of AI to infer biologic phenotype from morphology alone. Nevertheless, performance remains inferior to gold-standard molecular techniques, and these models currently serve as adjunctive screening tools rather than replacements. The literature emphasizes the need for high-quality, diverse datasets and collaboration between pathologists, computer scientists, and bioinformaticians to address these challenges and ensure safe, effective translation of AI tools into routine practice [31,61]. Results are displayed and integrated in Table 1.
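As a minimal sketch of the weakly supervised setup common in this literature, the following aggregates per-tile probabilities from a hypothetical BRAF-status CNN (trained against sequencing-derived slide labels) into a slide-level call; the averaging rule and threshold are illustrative choices, not taken from any cited study.

```python
import numpy as np

def slide_level_braf_probability(tile_probs: np.ndarray, threshold: float = 0.5) -> dict:
    """Aggregate per-tile BRAF-mutant probabilities into slide-level outputs.

    tile_probs: (n_tiles,) probabilities from a tile-level CNN trained on H&E tiles
    with sequencing-derived slide labels (weak supervision).
    """
    return {
        "mean_prob": float(tile_probs.mean()),                       # average-pooling aggregation
        "fraction_positive_tiles": float((tile_probs > threshold).mean()),
        "predicted_mutant": bool(tile_probs.mean() > threshold),
    }

# Toy usage: 1,000 tiles from one slide with random stand-in probabilities.
print(slide_level_braf_probability(np.random.default_rng(1).uniform(0, 1, 1000)))
```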

3.5. Interpretable AI and Nuclei-Level Feature Models

A key advancement in recent years has been the development of interpretable machine learning pipelines that mirror the reasoning of trained pathologists. These models focus on extracting quantifiable nuclear morphologic and spatial features from whole-slide images, emulating the cellular-level assessment performed by pathologists. For example, machine learning approaches that segment nuclei and synthesize geometric, morphologic, and spatial variables have shown robust performance in distinguishing melanoma from nevi, with interpretability grounded in established histopathological criteria. Such models enable clinicians to verify and understand the diagnostic process, supporting clinical decision-making and prioritization of complex cases [59].

One notable example is the work by Veronesi et al., who proposed a method combining U-Net-based nuclear segmentation with linear discriminant analysis (LDA) on spatial and morphologic features [59]. Their pipeline extracted over 6 million nuclei from WSIs, quantifying 44 nuclear and spatial variables, including area, eccentricity, clustering, and local heterogeneity. The model achieved an accuracy of 90.4%, with a sensitivity of 84.4% and precision of 86.5%, while maintaining full interpretability at the feature level. Importantly, variables such as nuclear pleomorphism, spacing irregularity, and anisotropy—long appreciated by expert dermatopathologists—were among the strongest discriminators between melanoma and nevi. Systematic reviews confirm that deep learning and machine learning algorithms applied to digital histopathology images of melanoma achieve high diagnostic accuracy, with pooled sensitivity and specificity approaching 90% and 92%, respectively. However, these studies highlight heterogeneity in methodology, limited external validation, and challenges in generalizability across diverse populations and image acquisition protocols [32,34].
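To illustrate the general shape of such a pipeline, without reproducing the authors' exact implementation, the sketch below derives a few per-slide nuclear morphometrics from a binary nuclei mask with scikit-image and passes them to an LDA classifier from scikit-learn; the five summary features are a simplified stand-in for the 44 variables described above.

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def nuclear_features(nuclei_mask: np.ndarray) -> np.ndarray:
    """Slide-level summary of per-nucleus morphometrics from a binary nuclei mask
    (e.g., a thresholded U-Net output). A simplified stand-in for richer feature sets."""
    props = regionprops(label(nuclei_mask))
    areas = np.array([p.area for p in props])
    ecc = np.array([p.eccentricity for p in props])
    # means and spreads capture pleomorphism-like variability across nuclei
    return np.array([areas.mean(), areas.std(), ecc.mean(), ecc.std(), len(props)])

# Hypothetical training data: one feature vector per slide, labels 1 = melanoma, 0 = nevus.
# X = np.stack([nuclear_features(m) for m in slide_masks]); y = np.array(slide_labels)
# clf = LinearDiscriminantAnalysis().fit(X, y)
# The fitted coefficients (clf.coef_) remain directly inspectable, which is what makes
# this family of models interpretable at the feature level.
```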

3.6. Molecular Tumor Board and AI in Melanoma

Alongside diagnostic applications, AI is increasingly being integrated into clinical decision-making frameworks, particularly within the Molecular Tumor Board (MTB), which represents a cornerstone of precision oncology in advanced melanoma. The MTB enables the multidisciplinary integration of clinical, pathological, and molecular data to support personalized therapeutic strategies [78]. The growing adoption of next-generation sequencing (NGS) has expanded both the diagnostic potential and the interpretative complexity of molecular results, often exceeding the capacity of conventional decision-making processes [79]. In this context, AI can substantially enhance MTB activities through different algorithmic approaches. Machine learning and deep learning models can automate mutation detection from sequencing data, predict their functional relevance, and assist in classifying variants of uncertain significance (VUS) by integrating molecular, clinical, and histopathological parameters [80]. Supervised algorithms and deep neural networks are already employed to discriminate driver from passenger mutations, assessing their impact on signaling pathways and their potential therapeutic value [81,82]. In parallel, semi-supervised learning and clustering models can group variants with similar molecular profiles, facilitating automated interpretation [83]. A further step forward is represented by multimodal models, capable of correlating genomic and transcriptomic data with clinical, histopathological, and radiological information to generate therapy-response predictions and identify candidates for clinical trials [84].

An additional and emerging field where the MTB–AI alliance may provide substantial benefit concerns the management of melanocytic lesions within the biologic gray zone, encompassing melanocytomas, MELTUMPs, and other non-conventional melanocytic tumors [85,86]. These entities, which exhibit overlapping morphologic, molecular, and biological features between benign nevi and overt melanomas, remain a major source of diagnostic uncertainty and therapeutic ambiguity. Here, AI-driven integrative analytics could play a transformative role by learning latent molecular–morphologic signatures that escape human perception and correlating them with clinical outcomes. Such approaches could help standardize risk stratification and improve reproducibility across experts, supporting the MTB in distinguishing lesions with intermediate biological potential, such as atypical Spitz tumors or BAP1-inactivated melanocytic tumors, from true melanomas [15,87]. By leveraging explainable AI and multimodal fusion of dermoscopic, histologic, genomic, and clinical data, MTBs may move toward a data-informed and biologically grounded redefinition of the gray zone, bridging diagnostic uncertainty and therapeutic precision [88,89].

However, the adoption of AI within MTBs requires rigorous clinical validation, algorithmic transparency, and continuous monitoring, as clinical safety and trust remain central to decision-making [88]. To be acceptable in clinical practice, AI models must incorporate interpretable components and always leave the final decision to the clinician. The use of explainability tools or feature contribution analyses helps clarify why an algorithm suggests a given strategy or variant interpretation [90,91].

3.7. Workflow Integration, Accessibility, and Open-Source Tools

While technical performance is promising, real-world integration of AI into histopathologic workflows remains limited and constitutes a key challenge. Most AI systems are still investigational, with limited real-world deployment due to issues such as robustness across diverse datasets, lack of external validation, and the need for seamless interoperability with laboratory information systems [92,93]. Effective integration requires collaboration between pathologists, computer scientists, and engineers, as well as standardized reporting and validation protocols. Accessibility is improving with the proliferation of public datasets (e.g., ISIC, HAM10000), but generalizability is limited by underrepresentation of skin of color and non-Western populations [32,92].
Beyond technical validation, regulatory approval represents a major barrier to clinical implementation. In most jurisdictions, AI models intended for diagnostic use must undergo rigorous review to obtain regulatory clearance. However, the absence of standardized evaluation frameworks and the complexity of AI behavior (e.g., adaptive algorithms, black-box models) have made these pathways uncertain and often prohibitively time-consuming. In parallel, logistical bottlenecks persist. Many pathology labs—particularly in community or non-academic settings—lack access to WSI scanners, high-performance computational infrastructure, and IT support required for real-time inference and integration with digital workflow systems. The costs of digitization, data storage, and personnel training further complicate large-scale adoption. As such, AI solutions remain largely confined to research institutions and early-adopter centers. Addressing these barriers will require dedicated funding, collaborative infrastructure initiatives, and regulatory innovation.
Open-source tools are increasingly available, exemplified by the development of 3D histology models using open-source software, which enhance anatomical assessment and may enter routine practice within the next decade. However, computational limitations and data privacy concerns persist. In addition to these challenges, widespread implementation of AI-assisted digital pathology is hindered by the substantial infrastructure required to support high-throughput workflows. These include high-resolution WSI scanners, dedicated servers or cloud-based high-performance computing resources for model training and inference, and scalable storage solutions capable of managing terabyte-scale slide repositories. Such systems also require robust network bandwidth, integration with laboratory information systems (LIS), and ongoing technical support. These infrastructural demands represent a significant financial and logistical barrier for many institutions—particularly smaller hospitals, community practices, and centers in low-resource settings—thereby limiting equitable adoption of AI technologies in melanoma diagnostics.

The latest evidence demonstrates that artificial intelligence models for melanoma histopathology achieve high diagnostic accuracy in external validation studies, but generalizability across diverse populations and real-world clinical settings remains limited. Recent multicenter studies using federated learning approaches have shown robust external performance, with AUROC values exceeding 0.91 on out-of-distribution datasets that include a broad spectrum of melanoma subtypes, anatomical sites, and patient ages. These models, trained prospectively across multiple institutions, effectively minimize selection bias and better reflect real-world clinical heterogeneity, supporting their reliability for routine diagnostic use [52]. However, most datasets are still derived from European and North American populations, and representation of skin of color (Fitzpatrick IV–VI) and non-Western populations is insufficient, limiting the applicability of these models globally [32,94]. For deep learning models to be effectively implemented in clinical practice, they must be trained on datasets from diverse sources. The greater the variety of data a model is exposed to, the better it can generalize and accurately predict outcomes on new, unseen cases. Meta-analyses and umbrella reviews confirm that deep learning and hybrid models consistently outperform or match experienced dermatologists in diagnostic accuracy, with pooled sensitivities and specificities around 89–92% [40,41]. Nevertheless, studies highlight that external validation is infrequent, and performance may decrease when models are applied to populations or image sources not represented in the training data [32,92,94]. There is a critical need for standardized reporting, inclusion of diverse patient cohorts, and transparent benchmarking to ensure equitable and effective deployment in clinical practice.

Despite promising results, the reproducibility and generalizability of AI models remain major challenges in melanoma histopathology. A core limitation is the lack of standardized validation metrics and benchmarking protocols across studies. Many works report only internal performance without external testing, use inconsistent definitions of sensitivity or accuracy, or lack calibration metrics such as confidence intervals and error margins.
This heterogeneity makes direct comparison difficult and limits the establishment of performance baselines for clinical translation. Moreover, published models often underperform when evaluated on out-of-distribution datasets, highlighting their limited robustness to variations in staining, scanner type, or population demographics. To address this, expert consensus on core outcome metrics, minimum dataset diversity requirements, and independent external validation should be prioritized. Without such harmonization, clinical implementation will remain fragmented and prone to error. Furthermore, a key challenge with AI is that its outputs often do not intuitively align with traditional clinical reasoning or human logic. These models frequently function as “black boxes”, generating results without providing clear explanations for how specific conclusions are reached. This lack of interpretability undermines trust in AI systems and poses a significant barrier to their adoption in clinical practice. Another critical human factor challenge in adopting AI-assisted melanoma diagnostics is automation bias—the tendency of clinicians to overly rely on algorithmic outputs, even in the face of conflicting clinical or pathological data. This is particularly problematic in the evaluation of complex melanocytic lesions, such as atypical Spitz tumors, desmoplastic melanomas, or MELTUMPs, where nuanced interpretation and clinical-pathological correlation remain essential. Over-reliance on AI in these cases can lead to diagnostic complacency and potentially harmful misclassification. To prevent this, AI systems should be framed as decision-support tools, not diagnostic authorities, and their outputs should always be reviewed within the broader clinical context. Human-in-the-loop models, interpretability features, and rigorous validation in edge-case subtypes are necessary to reduce automation bias and reinforce the primacy of expert judgment in final diagnosis.

3.8. Workload, Time and Resources

Pathology has become a discipline where time, expertise, and technological demand converge, shaping not only diagnostic accuracy but also the rhythm and sustainability of clinical work. Time is both a technical variable and a critical factor. Ancillary investigations, such as immunohistochemistry (IHC) in doubtful cases or molecular analysis in complex ones, remain necessary but are also demanding in terms of tissue use and time, inevitably leading to costs and delays [95]. They require expertise and a delicate balance between accuracy and sustainability, often becoming an organizational bottleneck [96].
The spread of WSI has expanded diagnostic possibilities but also the volume and complexity of image analysis [97], increasing cognitive load and the risk of omission. Digitalization, while improving storage and sharing, also introduces new forms of visual fatigue and distraction [98]. This scenario is intertwined with the chronic shortage of laboratory staff and reveals a paradox: pathology is increasingly central to precision medicine, yet threatened by overload and fatigue, with an increased risk of error [99]. In this context, digital and AI tools in dermatopathological diagnosis are not merely aids to efficiency but genuine “cognitive offloading devices” [100], automating repetitive yet sophisticated tasks and freeing the pathologist’s mental resources for interpretation and critical judgment [98]. AI is destined to become an integral part of pathology and, if accompanied by robust validation, clear rules, and attention to human factors, will contribute not only to efficiency but also to a real improvement in cognitive well-being, reducing the risk of errors linked to mental fatigue and overload [101].

Concretely, AI applications function as cognitive offloading tools—systems that absorb repetitive, high-volume, or visually taxing tasks such as mitotic figure identification, ulceration segmentation, or nuclei counting—thereby freeing the pathologist’s mental bandwidth for complex diagnostic reasoning. This redistribution of cognitive effort can significantly improve diagnostic quality and reduce burnout, particularly in high-throughput academic or cancer center settings. However, it also raises ethical questions regarding task delegation, responsibility for diagnostic error, and the evolving scope of the pathologist’s expertise.

Moreover, effective cognitive offloading requires trust in the system’s reliability and transparency. If AI tools are perceived as black boxes or provide inconsistent outputs, they may paradoxically increase cognitive burden by demanding additional verification steps. To mitigate this, AI systems must be explainable, auditable, and seamlessly integrated into diagnostic workflows, supporting rather than supplanting clinical judgment. Human–AI collaboration should be designed to maximize complementarity—leveraging the pattern recognition capacity of algorithms alongside the contextual insight and clinical accountability of the pathologist.
Ultimately, costs, time and resources are not just organizational variables, but reflections of the contemporary tension between scarcity of resources and data richness, calling for a renewed balance between human expertise and automation (Figure 2).
Figure 2. This schematic outlines the integration of digital pathology (DP) and artificial intelligence (AI) in the diagnostic workflow for cutaneous melanoma. The pipeline begins with whole slide imaging (WSI), where histopathologic slides are digitized at high resolution. Preprocessing steps include tiling, stain normalization, and quality control. Image tiles are then analyzed using convolutional neural networks (CNNs) and U-Net architectures for classification tasks (e.g., melanoma vs. nevus), semantic segmentation (e.g., Breslow thickness, ulceration), and feature extraction (e.g., mitotic figures).
Advanced modules incorporate spatial modeling of tumor-infiltrating lymphocytes (TILs) and nuclei-level morphometric analysis, providing interpretable insights into tumor architecture and immune contexture. Molecular prediction layers aim to infer genomic alterations (e.g., BRAF, NRAS mutations) and transcriptomic phenotypes directly from H&E slides. Outputs are synthesized into diagnostic and prognostic scores and can be integrated with clinical data in decision-support platforms and Molecular Tumor Boards (MTBs).
This workflow supports standardized diagnosis, risk stratification, and precision oncology approaches, while highlighting the importance of explainable AI, robust validation, and ethical integration into real-world pathology practice.

4. Discussion and Conclusions

The integration of AI and DP into the histopathologic diagnosis of cutaneous melanoma represents one of the most significant shifts in dermatopathology over the past decade. This review synthesizes evidence across five key domains—WSI-based classification, histopathologic feature extraction, spatial modeling, molecular prediction, and interpretable AI pipelines—highlighting the evolving role of computational tools in melanoma diagnostics. Our findings confirm that deep learning models, particularly CNNs, achieve diagnostic accuracy that is comparable to expert dermatopathologists, especially in differentiating melanoma from benign melanocytic lesions. However, despite high performance in retrospective datasets, most AI systems have not yet been prospectively validated in real-world, multi-institutional settings. Feature extraction tasks such as automated measurement of Breslow thickness, mitotic count, and ulceration detection demonstrate high promise for standardizing staging parameters and reducing interobserver variability. However, performance varies significantly depending on lesion quality, image resolution, and annotation fidelity. Spatial modeling, particularly for TILs, is emerging as a powerful tool to decode the tumor microenvironment and potentially predict immunotherapy outcomes. Furthermore, recent efforts to link histomorphology with underlying genomic alterations, such as BRAF mutation status or MAPK pathway activity, represent a key step toward “molecular histopathology”. Although still investigational, these AI-powered inferences from H&E images could one day serve as triage tools or complements to genomic sequencing. In parallel, the role of AI is expanding beyond diagnosis into clinical decision-making, particularly through its incorporation into MTBs. AI can assist in interpreting complex genomic variants, suggest targeted therapies, and even help stratify melanocytic lesions of uncertain malignant potential. This is especially valuable in the biologic gray zone of melanocytomas, MELTUMPs, and other borderline lesions where traditional criteria fail to provide diagnostic or therapeutic clarity. Multimodal AI systems that integrate dermoscopy, histopathology, molecular data, and clinical outcomes could redefine diagnostic paradigms and bring biologic meaning to morphologic ambiguity. Despite promising results, limitations persist: model generalizability across diverse populations and slide preparation protocols is not assured, and the opacity of many DL models hinders clinical trust and regulatory approval [31,32,35]. Types of technical variability include discrepancies in scanner resolution and image compression formats, variation in H&E staining intensity and hue, differences in tissue section thickness, and batch effects from slide preparation workflows. Human variability stems from pathologist annotation style, differing diagnostic thresholds for ambiguous melanocytic lesions, and inconsistent ROI marking across institutions. These factors affect the consistency of the “ground truth” and the reproducibility of model training and validation outcomes [48,49,50,51]. Interpretable models and explainable AI are increasingly emphasized to address these concerns, but robust external validation and standardized datasets are needed for clinical translation. This challenge has led to increasing demand for external validation frameworks that go beyond internal cross-validation. 
Robust external validation is now seen as essential for clinical adoption and should include geographically and demographically diverse datasets, inclusion of diagnostically ambiguous subtypes, and benchmarking against expert performance. Federated learning, ensemble models trained on multicenter data, and public challenges (e.g., CAMELYON, MIDOG) are helping shape consensus standards for minimal validation requirements. Without these, even high-performance models risk failing under real-world variability. Current consensus is that AI serves as a valuable adjunct to expert pathology, improving workflow efficiency and diagnostic reproducibility, but is not a replacement for human expertise [36,93,102,103]. Moreover, real-world deployment of AI-assisted digital pathology is constrained by non-trivial infrastructural requirements. The acquisition of WSI scanners, high-performance computing hardware, secure large-scale data storage, and interoperable digital pathology platforms demands substantial institutional investment. These barriers disproportionately affect smaller or non-academic centers, creating disparities in access to advanced computational diagnostics. As a result, even high-performing AI models may remain confined to well-resourced institutions unless cost-effective, scalable infrastructure solutions become more widely available.

Funding

This research received no external funding.

Acknowledgments

The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CM: Cutaneous melanoma
WSI: whole slide images
ML: machine learning
CNN: convolutional neural network
AI: artificial intelligence
TILs: tumor-infiltrating lymphocytes
DP: digital pathology
AUC: area under the curve
H&E: hematoxylin and eosin
LDA: linear discriminant analysis
MTB: Molecular Tumor Board
NGS: next-generation sequencing

References

  1. Shah, M.; Schur, N.; Rosenberg, A.; DeBusk, L.; Burshtein, J.; Zakria, D.; Rigel, D. Trends in Melanoma Incidence and Mortality. Dermatol. Clin. 2025, 43, 373–379. [Google Scholar] [CrossRef]
  2. De Giorgi, V.; Magnaterra, E.; Zuccaro, B.; Magi, S.; Magliulo, M.; Medri, M.; Mazzoni, L.; Venturi, F.; Silvestri, F.; Tomassini, G.M.; et al. Is Pediatric Melanoma Really That Different from Adult Melanoma? A Multicenter Epidemiological, Clinical and Dermoscopic Study. Cancers 2023, 15, 1835. [Google Scholar] [CrossRef]
  3. Broseghini, E.; Veronesi, G.; Gardini, A.; Venturi, F.; Scotti, B.; Vespi, L.; Marchese, P.V.; Melotti, B.; Comito, F.; Corti, B.; et al. Defining high-risk patients: Beyond the 8th AJCC melanoma staging system. Arch. Dermatol. Res. 2024, 317, 78. [Google Scholar] [CrossRef]
  4. Siegel, R.L.; Kratzer, T.B.; Giaquinto, A.N.; Sung, H.; Jemal, A. Cancer statistics, 2025. CA Cancer J. Clin. 2025, 75, 10–45. [Google Scholar] [CrossRef] [PubMed]
  5. De Giorgi, V.; Scarfì, F.; Gori, A.; Silvestri, F.; Trane, L.; Maida, P.; Venturi, F.; Covarelli, P. Short-term teledermoscopic monitoring of atypical melanocytic lesions in the early diagnosis of melanoma: Utility more apparent than real. J. Eur. Acad. Dermatol. Venereol. JEADV 2020, 34, e398–e399. [Google Scholar] [CrossRef] [PubMed]
  6. De Giorgi, V.; Silvestri, F.; Cecchi, G.; Venturi, F.; Zuccaro, B.; Perillo, G.; Cosso, F.; Maio, V.; Simi, S.; Antonini, P.; et al. Dermoscopy as a Tool for Identifying Potentially Metastatic Thin Melanoma: A Clinical—Dermoscopic and Histopathological Case—Control Study. Cancers 2024, 16, 1394. [Google Scholar] [CrossRef]
  7. Stephens, K.R.; Donica, W.R.F.; Philips, P.; McMasters, K.M.; Egger, M.E. Melanoma Deaths by Thickness: Most Melanoma Deaths Are Not Attributable to Thin Melanomas. J. Surg. Res. 2024, 301, 24–28. [Google Scholar] [CrossRef] [PubMed]
  8. Cazzato, G. Histopathological Diagnosis of Malignant Melanoma at the Dawn of 2023: Knowledge Gained and New Challenges. Dermatopathology 2023, 10, 91–92. [Google Scholar] [CrossRef]
Figure 1. Flow diagram of the study selection process for this narrative review. A total of 412 records were identified through database searches (PubMed, Scopus, Web of Science). After removal of duplicates, 387 titles and abstracts were screened; of these, 265 were excluded on the basis of relevance or study type. The remaining 122 full-text articles were assessed for eligibility, and 35 were excluded (e.g., not focused on melanoma, lacking histologic input, or missing performance metrics). Ultimately, 87 studies met the inclusion criteria and were included in the qualitative synthesis.
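For readers who wish to re-trace the caption's arithmetic, the short Python sketch below encodes the reported screening counts and checks their internal consistency. The stage names and variable names are ours and purely illustrative; only the numbers are taken from the figure caption.

```python
# Illustrative consistency check of the study-selection counts in Figure 1.
records_identified = 412        # PubMed + Scopus + Web of Science
records_screened = 387          # after duplicate removal
excluded_at_screening = 265     # excluded on relevance or study type

fulltext_assessed = records_screened - excluded_at_screening
excluded_at_fulltext = 35       # e.g., not melanoma-focused, no histologic input
included = fulltext_assessed - excluded_at_fulltext

assert fulltext_assessed == 122
assert included == 87
print(f"Duplicates removed: {records_identified - records_screened}")
print(f"Full-text assessed: {fulltext_assessed}; included in synthesis: {included}")
```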
Figure 2. Comprehensive AI workflow for histopathologic melanoma diagnosis.
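To make the kind of workflow summarized in Figure 2 concrete, the minimal sketch below illustrates a generic tile-based whole-slide classification loop (tiling, per-tile scoring, slide-level aggregation). It is not the pipeline of any specific study cited in this review: the tile size, the placeholder scorer standing in for a trained CNN, and the mean-probability aggregation rule are all illustrative assumptions.

```python
# A minimal, generic sketch of a tile-based WSI classification workflow.
# All design choices here (tile size, scorer, aggregation, threshold) are assumptions.
import numpy as np

def tile_slide(slide: np.ndarray, tile: int = 256):
    """Yield non-overlapping RGB tiles from a (H, W, 3) image array."""
    h, w, _ = slide.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield slide[y:y + tile, x:x + tile]

def tile_probability(tile: np.ndarray) -> float:
    """Placeholder scorer: stands in for a trained CNN's per-tile melanoma probability."""
    return float(tile.mean() / 255.0)

def classify_slide(slide: np.ndarray, threshold: float = 0.5) -> str:
    probs = [tile_probability(t) for t in tile_slide(slide)]
    slide_score = float(np.mean(probs))  # simple mean aggregation (assumption)
    return "suspicious for melanoma" if slide_score >= threshold else "benign pattern"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy_wsi = rng.integers(0, 256, size=(1024, 1024, 3), dtype=np.uint8)
    print(classify_slide(dummy_wsi))
```

In practice, the per-tile scorer would be a trained network and the aggregation step is itself a modeling choice (e.g., attention-based pooling rather than a simple mean); the sketch only conveys the overall structure of such pipelines.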