Article

HistoMoCo: Momentum Contrastive Learning Pre-Training on Unlabeled Histopathological Images for Oral Squamous Cell Carcinoma Detection

1 School of Computer Science, Peking University, Beijing 100871, China
2 Key Laboratory of High Confidence Software Technologies, Ministry of Education, Peking University, Beijing 100871, China
3 School and Hospital of Stomatology, Peking University, Beijing 100871, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2025, 14(7), 1252; https://doi.org/10.3390/electronics14071252
Submission received: 5 February 2025 / Revised: 12 March 2025 / Accepted: 13 March 2025 / Published: 22 March 2025

Abstract

The early detection and intervention of oral squamous cell carcinoma (OSCC) using histopathological images are crucial for improving patient outcomes. The current literature for identifying OSCC predominantly relies on models pre-trained on ImageNet to minimize the need for manual data annotations in model fine-tuning. However, a significant data divergence exists between visual domains of natural images in ImageNet and histopathological images, potentially limiting the representation and transferability of these models. Inspired by recent self-supervised research, in this work, we propose HistoMoCo, an adaptation of Momentum Contrastive Learning (MoCo), designed to generate models with enhanced image representations and initializations for OSCC detection in histopathological images. Specifically, HistoMoCo aggregates 102,228 histopathological images and leverages the structure and features unique to histological data, allowing for more robust feature extraction and subsequent downstream fine-tuning. We perform OSCC detection tasks to evaluate HistoMoCo on two real-world histopathological image datasets, including NDB-UFES and Oral Histopathology datasets. Experimental results demonstrate that HistoMoCo consistently outperforms traditional ImageNet-based pre-training, yielding more stable and accurate performance in OSCC detection, achieving AUROC results up to 99.4% on the NDB-UFES dataset and 94.8% on the Oral Histopathology dataset. Furthermore, on the NDB-UFES dataset, the ImageNet-based pre-training solution achieves an AUROC of 89.32% using 40% of the training data, whereas HistoMoCo reaches an AUROC of 89.58% using only 10% of the training data. HistoMoCo addresses the issue of domain divergence between natural images and histopathological images, achieving state-of-the-art performance in two OSCC detection datasets. More importantly, HistoMoCo significantly reduces the reliance on manual annotations in the training dataset. We release our code and pre-trained parameters for further research in histopathology or OSCC detection tasks.

1. Introduction

The increasing incidence of cancer globally underscores the urgent need for effective screening and diagnostic strategies to enable timely detection and treatment. Among various types of cancer, oral squamous cell carcinoma (OSCC) has emerged as a significant health concern [1], particularly affecting regions with high prevalence rates, such as Asia. According to recent reports, OSCC ranks 16th in terms of global incidence and mortality, with Asia accounting for approximately 65.8% of cases and 74.0% of deaths [1]. Early detection and intervention are crucial for improving outcomes, as they substantially enhance the chances of complete remission. Despite advancements in awareness programs and clinical protocols, the effective screening of OSCC remains challenging, emphasizing the importance of developing robust diagnostic tools. These tools are particularly needed to address the significant variations in clinical presentations and the impact of risk factors like tobacco use [2]. Therefore, enhancing OSCC detection methodologies is vital to reduce the disease burden and improve patient survival rates.
In recent years, deep learning-based solutions [3,4] have exhibited state-of-the-art performance in extracting valuable insights from histopathological images [5,6], offering promising avenues to address these challenges. An AI-assisted workflow could potentially serve as a “second reader”, pre-screening normal samples to reduce pathologists’ workload, automatically highlighting suspicious regions to guide attention, and providing remote preliminary screening for underserved areas. Such systems could significantly reduce analysis time while improving diagnostic consistency and accessibility. However, developing effective AI systems requires an extremely large number of images with fine annotations to train deep neural networks (DNNs) and deliver decent performance in a supervised learning manner [7,8,9]. Unfortunately, large-scale annotated OSCC histopathological datasets remain scarce due to multiple factors: (1) stringent patient privacy regulations and ethical constraints limiting data sharing, (2) the prohibitive cost of expert annotation, with each image requiring meticulous labeling by specialists, and (3) heterogeneity in sample processing and imaging protocols across institutions, resulting in technical variability that complicates dataset standardization.
Most state-of-the-art models [10,11,12] used for histological image analysis are pre-trained on ImageNet due to the lack of large-scale annotated histopathological datasets. Although this approach has proven effective, the domain divergence between natural images in ImageNet and histopathological images may hinder the model’s ability to generalize effectively to medical image tasks. As shown in Figure 1, there is significant distributional divergence in Contrast, Homogeneity, Energy, and Correlation, between natural and histopathological images, posing challenges for model fine-tuning. Contrast measures local variations in pixel intensity within an image, mathematically represented as intensity differences between adjacent pixels in the Gray Level Co-occurrence Matrix (GLCM). High contrast values indicate significant intensity variations in tissue structures, typically reflecting abnormal cell density and tissue structure disruption in OSCC. Homogeneity measures the uniformity of image texture by evaluating the similarity of pixel pairs. Normal tissues typically exhibit higher homogeneity, while tumor tissues show lower homogeneity due to cellular atypia and irregular arrangements. Energy measures the orderliness and uniformity of texture, calculated as the sum of squared GLCM elements. High energy values indicate uniform texture or numerous repetitive patterns, typically higher in normal tissues than in tumor tissues. Correlation is a measure of linear dependencies in the GLCM, representing the degree of correlation between adjacent pixels. Correlation can help distinguish between normal organized structures (high correlation) and the disordered arrangement of tumor tissues (low correlation). As a result, there is a growing demand for pre-training strategies that are specifically designed to address the unique characteristics of histopathological images.
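For reference, the four statistics above can be computed directly from the GLCM; the short sketch below uses scikit-image (graycomatrix/graycoprops). The file name, gray-level quantization, and pixel offsets are illustrative assumptions rather than the exact settings used to produce Figure 1.
```python
import numpy as np
from skimage import io, color
from skimage.feature import graycomatrix, graycoprops

def glcm_stats(image_path, levels=64, distances=(1,), angles=(0, np.pi / 2)):
    """Return the Contrast, Homogeneity, Energy, and Correlation of one RGB image."""
    gray = color.rgb2gray(io.imread(image_path)[..., :3])   # drop alpha channel if present
    quantized = (gray * (levels - 1)).astype(np.uint8)      # quantize to `levels` gray levels
    glcm = graycomatrix(quantized, distances=distances, angles=angles,
                        levels=levels, symmetric=True, normed=True)
    # Average each property over the chosen distances and angles.
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Example with a hypothetical patch file:
# print(glcm_stats("oscc_patch_0001.png"))
```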
To reduce the required sample size and enable end-to-end adaptation to the target domain, recent approaches have proposed the self-supervised learning (SSL) paradigm, which enhances the performance of DNN models by directly learning visual features from target domain images, without the need for labels. Among the different self-supervised learning methods, contrastive learning algorithms [13,14,15] utilize a similarity-based metric to measure the distance between embeddings derived from different views of the same image. These views are generated through data augmentation techniques such as rotation, cropping, and shifting, while the embeddings are extracted from a DNN with learnable parameters. In computer vision tasks, the contrastive loss is computed using feature representations from the encoder network, which aids in clustering similar samples while separating dissimilar ones. Recent methods such as SwAV [16], SimCLR [13], MoCo [17], and PIRL [18] have shown superior performance over traditional supervised learning methods on natural images.
Contrastive learning was initially proposed for natural images and has demonstrated promising results. In recent years, it has been gradually applied to various modalities of medical images, such as X-rays [19,20,21], MRI [22,23,24], and cellular images [25,26,27], proving its effectiveness.
In recent years, self-supervised learning methods have made significant progress in histopathological image analysis, with numerous innovative studies emerging [28,29,30,31,32,33]. These methods encompass various paradigms including contrastive learning, masked image modeling, and self-distillation, providing effective approaches to address the scarcity of annotated data. Our study aims to explore the applicability and optimization strategies of the MoCo framework in the specific domain of oral squamous cell carcinoma, complementing existing research.
In this work, we aim to extract more domain-specific features from histopathological images in an unsupervised manner, enhancing the model’s performance on downstream OSCC detection tasks. Our method involves pre-training a deep learning model using a self-supervised objective on a large set of histopathological images before fine-tuning on a smaller, labeled OSCC dataset.
The main contributions of this work are threefold:
1. To address the distributional divergence between histopathological images and natural images, we developed HistoMoCo, a custom pre-training framework tailored for histopathological images. HistoMoCo aims to generate enhanced image representations and initialize models for OSCC detection in histopathological images.
2. We provide comprehensive evaluations and analyses highlighting the benefits of self-supervised learning for histopathological image analysis, demonstrating that the proposed HistoMoCo provides high-quality representations and transferable initializations for histopathological image interpretation and OSCC detection.
3. We release our code and pre-trained parameters for further research in histopathology or OSCC detection tasks.
The remainder of this manuscript is organized as follows: Section 2 reviews related work on self-supervised learning and the application of MoCo in medical imaging. Section 3 provides the necessary background on MoCo. Section 4 presents detailed specifications of our method. Section 5 outlines our experimental methodology and detailed experimental results, while Section 6 concludes the work.

2. Related Work

2.1. Detection of Oral Squamous Cell Carcinoma

Oral squamous cell carcinoma (OSCC) detection increasingly utilizes deep learning, particularly models pre-trained on large datasets such as ImageNet. Several studies have demonstrated the effectiveness of convolutional neural networks (CNNs) in analyzing histopathology images for OSCC diagnosis. Redie et al. [12] evaluated ten pre-trained CNN models, finding that VGG-19 achieved a classification accuracy of 96.26% when combined with data augmentation. Kavyashree et al. [11] compared DenseNet architectures, reporting that DenseNet-201 reached an accuracy of 85.00%, while DenseNet-169 achieved a training accuracy of 98.96%. Mohan et al. [1] proposed OralNet, which utilized a four-stage approach for OSCC detection, achieving over 99.50% accuracy with histopathology images. These findings highlight the potential of pre-trained models for feature extraction and classification. However, most existing work [6,11,12] relies on models pre-trained on ImageNet, which may not fully capture the unique characteristics of histopathological images. Thus, there is a need for specialized pre-training methods tailored to histopathology to improve OSCC detection accuracy and enhance clinical outcomes.

2.2. Application of MoCo in Medical Image Analysis

Contrastive learning is one of the mainstream paradigms of self-supervised learning (SSL), aiming to learn consistent representations through the comparison of positive and negative sample pairs without requiring additional annotations. Given the tremendous potential of label-efficient learning [17,34], SSL has garnered significant attention in the field of medical image analysis [19,20,21,22,23,25,26]. Specifically, Liao et al. [20,21] integrated multi-task learning into the self-supervised paradigm of MoCo to enhance the model’s representational ability on X-ray images. Chaitanya et al. [22] proposed a strategy to extend the contrastive learning framework for segmenting 3D MRI images in semi-supervised settings by leveraging domain-specific and problem-specific prompts. Cao et al. [35] applied MoCo to enhance the performance of deep learning models in detecting rib fractures on CT images.
Recent advances have seen the emergence of foundation models for computational pathology. Chen et al. [30] introduced UNI, a general-purpose self-supervised model trained on over 100 million images from 100,000+ whole-slide images across 20 tissue types, demonstrating superior performance on 34 computational pathology tasks. Vorontsov et al. [32] presented Virchow, a 632-million parameter vision transformer model trained using DINOv2 on 1.5 million H&E stained slides, achieving state-of-the-art performance on pan-cancer detection and various benchmarks. Lu et al. [31] developed CONCH, a visual-language foundation model trained on diverse histopathology images and over 1.17 million image-caption pairs, excelling in tasks including classification, segmentation, and retrieval. Xiang et al. [33] introduced MUSK, a vision-language foundation model pretrained on 50 million pathology images and one billion text tokens using unified masked modeling, demonstrating strong performance across 23 benchmarks and various outcome prediction tasks. Our study aims to explore the applicability and optimization strategies of the MoCo framework in the specific domain of oral squamous cell carcinoma, complementing existing research.

3. Preliminaries

3.1. Contrastive Learning

Contrastive learning enables the self-supervised learning of image representations. Given a dataset $\mathcal{D}$, the objective is to find a mapping function $\mathcal{F}: \mathbb{R}^{d} \rightarrow \mathbb{R}^{a}$, where $a \ll d$, that satisfies the following condition:
$$s\left(\mathcal{F}(x), \mathcal{F}(x^{+})\right) \gg s\left(\mathcal{F}(x), \mathcal{F}(x^{-})\right) \quad (1)$$
where $s(\cdot, \cdot)$ measures image similarity, while $\mathcal{F}$ is responsible for both representation learning and dimensionality reduction. In this context, positive and negative samples are denoted as $x^{+}$ and $x^{-}$, respectively, where $x^{+}$ is similar to $x$ and $x^{-}$ is dissimilar. The model learns representations by maximizing the agreement between different augmented views $x_q$ and $x_k$ of the same example $x$, utilizing a contrastive loss in the latent space. These augmented views $x_q$ and $x_k$ are generated through data augmentation techniques denoted as $\mathrm{DA}$.

3.2. Momentum Contrastive (MoCo) Learning

Momentum Contrastive (MoCo) is a state-of-the-art contrastive learning method that surpasses supervised pre-training on various downstream tasks. Unlike other contrastive learning approaches, MoCo does not require a large batch size or a memory bank. Instead, it utilizes a memory queue to store a set of previously computed representations. Furthermore, MoCo introduces a momentum encoder, which will be elaborated on shortly.

3.2.1. Dictionary as a Queue

MoCo trains an encoder to perform a dictionary lookup task, where a query $q$ is matched against encoded samples $\{x_1, \ldots, x_N\}$ that act as the keys in the dictionary. A match occurs when the query $q$ is similar to the positive sample $x^{+}$, while there is no match for the negative samples $x^{-}$. Specifically, MoCo utilizes two visual encoders, labeled $f_q$ and $f_k$, to learn query representations $q = f_q(x_q)$ and key representations $k = f_k(x_k)$. Here, $x_q$ denotes the query sample, and $x_k$ refers to the key sample. To allow the encoder to reuse previously encoded samples, MoCo maintains the dictionary as a queue. The model is pre-trained using the following loss function:
$$\mathcal{L} = -\log \frac{\exp\left(\langle q, k^{+} \rangle / \tau\right)}{\exp\left(\langle q, k^{+} \rangle / \tau\right) + \sum_{k^{-}} \exp\left(\langle q, k^{-} \rangle / \tau\right)} \quad (2)$$
where $\tau$ is a temperature hyperparameter, and $k^{+}$ and $k^{-}$ denote representations of positive and negative samples, respectively.
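For concreteness, the loss in Equation (2) reduces to a cross-entropy over one positive logit and K queued negative logits. The PyTorch sketch below follows this common formulation and is only an illustration: the tensor shapes, the queue layout (feature dimension × queue length), and the default temperature are assumptions.
```python
import torch
import torch.nn.functional as F

def moco_contrastive_loss(q, k, queue, tau=0.07):
    """q, k: (N, C) L2-normalized query/key features; queue: (C, K) stored negative keys."""
    l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)   # <q, k+>  -> (N, 1)
    l_neg = torch.einsum("nc,ck->nk", q, queue)            # <q, k->  -> (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau        # positives occupy column 0
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)                 # Equation (2), averaged over the batch
```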

3.2.2. Momentum Update

MoCo employs a momentum update strategy for the parameters of the visual encoders. Let $\theta_q$ represent the parameters of $f_q$ and $\theta_k$ those of $f_k$. The parameters $\theta_q$ are updated through back-propagation using the contrastive loss $\mathcal{L}$ from Equation (2), while the parameters $\theta_k$ are updated using the following equation:
$$\theta_k \leftarrow m\,\theta_k + (1 - m)\,\theta_q \quad (3)$$
where $m \in [0, 1)$ is a momentum coefficient. This updating strategy ensures a smoother evolution of $\theta_k$ compared to $\theta_q$ [17].
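In code, Equation (3) amounts to an exponential moving average of the query encoder's parameters; the sketch below assumes two torch modules with identical architectures and is for illustration only.
```python
import torch

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    """theta_k <- m * theta_k + (1 - m) * theta_q; the key encoder receives no gradients."""
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
```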

4. Methods

4.1. Histopathological Datasets and OSCC Detection Tasks

In this study, we utilize four publicly available histopathological image datasets, including NCT-CRC-HE-100K [36], EBHI-Seg [37], NDB-UFES [38], and Oral Histopathology [39].
1. NCT-CRC-HE-100K [36] is a large pathology dataset comprising 100,000 H&E-stained histological images of human colorectal cancer and healthy tissues, extracted from 86 patients. It covers nine tissue types, with each image sized at 224 × 224 pixels.
2. EBHI-Seg [37] is a dataset for segmentation tasks, containing 2228 original H&E images and corresponding ground truth annotations, with each original H&E image also having a resolution of 224 × 224 pixels.
3. The NDB-UFES [38] dataset (OSCC detection dataset) presents a total of 237 samples with histopathological images and sociodemographic and clinical data. Its subset comprises 3763 image patches for downstream classification tasks, with each patch classified as either OSCC or Normal and sized at 512 × 512 pixels.
4. Oral Histopathology [39] (OSCC detection dataset) is composed of histopathological images of the normal epithelium of the oral cavity and images of OSCC. It includes two different magnifications, 100× and 400×, with a total of 290 samples classified as normal and 934 samples classified as OSCC, all with image sizes of 2048 × 1536.

4.2. HistoMoCo Pre-Training for Histopathological Interpretation

We applied MoCo pre-training to histopathological images. Figure 2a shows how data augmentation is used to generate views of a specific image, followed by contrastive learning to learn embeddings in an unsupervised manner.
We chose MoCo because large-scale computational resources are often a limiting factor in medical imaging AI. In comparison to other self-supervised frameworks such as SimCLR [13], MoCo requires significantly smaller batch sizes during pre-training [40]. The MoCo implementation used a batch size of 256 and achieved performance similar to SimCLR on ImageNet, while SimCLR struggled with a batch size of 256 [40]. MoCo mitigates the reliance on large batch sizes by utilizing a momentum-updated queue of previous samples to generate contrastive pair encodings.
We used 80% of the images from NCT-CRC-HE-100K [36] and the entire EBHI-Seg [37] dataset for MoCo pre-training. We chose to apply MoCo pre-training to models initialized with ImageNet weights to leverage potential convergence advantages [41]. Due to the widespread availability of ImageNet pre-trained weights, initializing models with ImageNet weights before MoCo pre-training incurs no additional cost.
We customize MoCo for histopathological images. Specifically, the data augmentation techniques commonly used in self-supervised learning for natural images may not be suitable for histopathological images. For example, color jittering and random grayscale can significantly alter the staining variations in histopathological images. Therefore, we disable these augmentations. Instead, we introduce multi-scale cropping to simulate tissue views at different magnification levels. In addition, the queue length was set to 2^16 (larger than the standard MoCo setting), with a momentum update coefficient of 0.999 to better maintain representation consistency across different batches, which is particularly important for subtle feature differences in histopathological images. The detailed experimental parameter settings for HistoMoCo include a batch size of 256, a learning rate of 0.03, a feature dimension of 128, a queue size of 2^16, and a MoCo momentum of 0.999.
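A minimal torchvision sketch of an augmentation pipeline consistent with this description is given below; the crop scale range, flips, and normalization statistics are assumptions for illustration, not the released HistoMoCo configuration.
```python
from torchvision import transforms

# Color jitter and random grayscale are omitted to preserve staining information;
# RandomResizedCrop with a wide scale range approximates multi-scale "magnification" crops.
histo_augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),      # tissue patches have no canonical orientation
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Two independently augmented views of the same patch form a positive pair:
# x_q, x_k = histo_augment(img), histo_augment(img)
```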
We observed that using higher-resolution images significantly increases GPU memory consumption during model training. For example, when using ResNet-50 as the backbone with a batch size of 256, even when fine-tuning only the linear layer, images with a resolution of 512 × 512 require 6040 MB of memory, while those with a resolution of 1024 × 1024 demand 18,904 MB. This imposes a substantial computational burden [42]. Therefore, we downsampled the images to a resolution of 224 × 224 in both the pre-training and fine-tuning stages to account for computational resource constraints and ensure compatibility with the standard MoCo implementation.
Two encoder networks, ResNet-18 and ResNet-50 [43], were used to evaluate the consistency of our approach across model architectures. In downstream tasks, we evaluated the model performance in both Linear Tuning and End-to-End Tuning settings.

4.3. HistoMoCo Fine-Tuning

We perform fine-tuning on NCT-CRC-HE-100K [36] and two OSCC detection datasets, including NDB-UFES [38] and Oral Histopathology [39]. Specifically, we partition these three datasets, with the detailed division ratios presented in Figure 3. Notably, the 80% of NCT-CRC-HE-100K [36] used for fine-tuning is identical to the 80% previously used for pre-training. This setup ensures that the validation and test sets of NCT-CRC-HE-100K [36] remain unseen during the training process. Figure 3 shows the overall training process using HistoMoCo pre-training and subsequent fine-tuning.
We retained the model checkpoint with the best performance over 100 training epochs based on the AUROC metric (using macro-AUROC for NCT-CRC-HE-100K, trained for 10 epochs) on the validation set for evaluation on the test set. To evaluate the transferability of the representations, as shown in Figure 2b, we froze the encoder model and trained a linear classifier on top of it using labeled data (Linear Tuning). Additionally, we unfroze all layers and fine-tuned the entire model end-to-end using labeled data to assess overall transfer performance (End-to-End Tuning). For Linear Tuning, a learning rate of 30 and a batch size of 256 were used, whereas for End-to-End Tuning, the learning rate was decreased to 10^{-2}.
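The sketch below illustrates how the two protocols could be set up in PyTorch for a ResNet-50 backbone. The learning rates mirror the text; the optimizer choice, momentum value, and checkpoint key handling are assumptions for illustration.
```python
import torch
import torchvision

def build_finetune_setup(num_classes, linear_only, pretrained_state_dict=None):
    model = torchvision.models.resnet50()
    if pretrained_state_dict is not None:
        # Pre-trained encoder weights (key names may need remapping from the MoCo checkpoint).
        model.load_state_dict(pretrained_state_dict, strict=False)
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)  # fresh classifier head
    if linear_only:                                    # Linear Tuning: freeze everything but fc
        for name, p in model.named_parameters():
            p.requires_grad = name.startswith("fc.")
    lr = 30.0 if linear_only else 1e-2                 # lr = 30 (Linear), 1e-2 (End-to-End)
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9)
    return model, optimizer
```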

5. Experiments

5.1. Statistics of the Datasets

We use the NCT-CRC-HE-100K [36], EBHI-Seg [37], NDB-UFES [38], and Oral Histopathology [39] datasets for the pre-training, fine-tuning, and evaluation of HistoMoCo. The statistical information of these datasets is shown in Table 1, along with their classification/segmentation targets as follows:
  • NCT-CRC-HE-100K [36]: Nine tissue types: adipose, background, debris, lymphocytes, mucus, smooth muscle, normal colon mucosa, cancer-associated stroma, colorectal adenocarcinoma epithelium.
  • EBHI-Seg [37]: Six tumor differentiation stages: normal, polyp, low-grade intraepithelial neoplasia, high-grade intraepithelial neoplasia, serrated adenoma, and adenocarcinoma.
  • NDB-UFES [38]: Two types: oral squamous cell carcinoma, normal epithelium.
  • Oral Histopathology [39]: Two types: oral squamous cell carcinoma, normal epithelium.

5.2. Experimental Setups

5.2.1. Baseline Models

We propose the following baselines for comparison:
  • Scratch: models are initialized using Kaiming’s random initialization [44] and then fine-tuned on the target datasets.
  • ImageNet: models are initialized with the officially released weights pre-trained on the ImageNet dataset and fine-tuned on the target datasets.
  • MoCo: models are initialized with the officially released weights (https://github.com/facebookresearch/moco (accessed on 12 March 2025)) from [17] and fine-tuned on the target datasets.

5.2.2. Evaluation Metrics

To evaluate these algorithms, we compare their performance using Macro-AUROC (MaROC.), Micro-AUROC (MiROC.), Macro-AUPRC (MaPRC.), Micro-AUPRC (MiPRC.), and Accuracy (Acc.) for the multi-class classification task (colorectal tissue classification on NCT-CRC-HE-100K [36]), and AUROC (ROC.), AUPRC (PRC.), Accuracy (Acc.), Precision (Pre.), Sensitivity (Sen.), Specificity (Spe.), and F1 Score (F1) for the binary classification tasks (OSCC detection on NDB-UFES [38] and Oral Histopathology [39]).
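As an illustration, the macro/micro variants for the nine-class task can be computed with scikit-learn as sketched below; `y_true` and `y_prob` are placeholder names for integer labels and per-class probabilities.
```python
import numpy as np
from sklearn.metrics import accuracy_score, average_precision_score, roc_auc_score
from sklearn.preprocessing import label_binarize

def multiclass_metrics(y_true, y_prob, n_classes):
    """y_true: (N,) integer labels; y_prob: (N, n_classes) predicted probabilities."""
    y_bin = label_binarize(y_true, classes=np.arange(n_classes))  # one-hot targets
    return {
        "MaROC.": roc_auc_score(y_bin, y_prob, average="macro"),
        "MiROC.": roc_auc_score(y_bin, y_prob, average="micro"),
        "MaPRC.": average_precision_score(y_bin, y_prob, average="macro"),
        "MiPRC.": average_precision_score(y_bin, y_prob, average="micro"),
        "Acc.":   accuracy_score(y_true, y_prob.argmax(axis=1)),
    }
```
For the binary OSCC tasks, AUROC and AUPRC follow directly from roc_auc_score and average_precision_score applied to the positive-class probability.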

5.3. Experimental Results

5.3.1. Transfer Performance of HistoMoCo Representations

We investigated whether representations acquired through HistoMoCo pre-training are of higher quality than those transferred from Scratch, ImageNet, and MoCo [40]. To evaluate these representations, we used the Linear Tuning protocol [45,46], where a linear classifier is trained on a frozen base model, and test performance is used as a proxy for representation quality. We also explored whether HistoMoCo pre-training translates to higher model performance by conducting End-to-End Tuning. The results are shown in Table 2. It can be observed that with ResNet-50 as the architecture, although the performance of ResNet-50 models on the NCT-CRC-HE-100K dataset is close to saturation, HistoMoCo still shows significant improvements compared to ImageNet and MoCo, especially in terms of AUPRC and accuracy metrics. This indirectly demonstrates the stronger robustness of HistoMoCo pre-training parameters for imbalanced positive and negative samples. ResNet-18 also reflects similar patterns.

5.3.2. Transfer Benefit of HistoMoCo on OSCC Detection Task

We conducted experiments to test whether the HistoMoCo pre-trained histopathological image representations obtained from the colorectal dataset could be transferred to two OSCC detection datasets (NDB-UFES and Oral Histopathology datasets). Consistent with the previous section, the experiments were based on both the Linear Tuning and End-to-End Tuning settings, with results shown in Table 3 and Table 4. It can be observed that HistoMoCo achieved the overall best performance on both external datasets. Notably, Linear Tuning showed significant improvements in key metrics such as AUROC and AUPRC compared to ImageNet pre-training. Under End-to-End Tuning, even ImageNet pre-training can achieve over 99% ROC and PRC on the OSCC detection task of the NDB-UFES dataset when using ResNet-50. This suggests that the simplicity of this dataset makes it easy to attain high performance, making it challenging for HistoMoCo to demonstrate further gains. However, HistoMoCo still outperformed other baselines overall. Given the small data volume, we believe that the occasional deviation in individual metrics is acceptable. The results on external datasets confirm the generalizability of HistoMoCo to out-of-domain data from different tissue sites, suggesting that it can serve as a more robust pre-training foundation for subsequent histopathological image research. Figure 4 presents the ROC curves of ResNet-50 using various pre-training algorithms on the two OSCC detection tasks; 95% confidence intervals were estimated using bootstrapping with 100 independent trials on the testing sets.
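A minimal sketch of the bootstrap procedure behind these confidence intervals is shown below, assuming binary labels and positive-class probabilities; apart from the 100 resamples stated above, the details (seeding, percentile method) are assumptions.
```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc_ci(y_true, y_score, n_boot=100, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aurocs = []
    while len(aurocs) < n_boot:
        idx = rng.integers(0, len(y_true), size=len(y_true))   # resample test set with replacement
        if len(np.unique(y_true[idx])) < 2:                    # skip resamples with a single class
            continue
        aurocs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aurocs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(np.mean(aurocs)), (float(lo), float(hi))
```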
Notably, initializing the backbone model from scratch and performing Linear Tuning resulted in highly biased outcomes, with both precision and sensitivity being 0. This indicates that, under this setting, the model exhibited an extreme tendency to classify all images as normal. The primary reason for this is that the scratch initialization did not provide the backbone model with any features relevant to histopathological images. Consequently, the model failed to generate meaningful feature representations for the subsequent linear classification layer, leading to such extreme results.

5.3.3. Performance Comparison with Similar Literature

We further introduce UNI [30], CONCH [31], DINO [47], and SwAV [16] as baseline models to evaluate the effectiveness of HistoMoCo. Specifically, we utilized UNI [30] and CONCH [31] as feature extractors and performed Linear Tuning experiments on the OSCC detection task of the NDB-UFES dataset (the larger dataset among the two OSCC detection datasets we used to ensure the reliability of the results). The experimental results are shown in Table 5. The experimental results demonstrate that both UNI2-h and CONCH achieved outstanding performance after extensive pre-training on large-scale histopathological image datasets. This further validates our hypothesis that pre-training should be conducted on data distributions consistent with the downstream task rather than on distributionally mismatched data, which could introduce biases. In comparison with UNI2-h, we openly acknowledge the performance limitations of HistoMoCo. UNI2-h achieved the best results in the majority of cases, including ROC, PRC, accuracy, and sensitivity. However, HistoMoCo remains noteworthy, as it achieved the best F1-score and maintained the second-best performance in most cases. More importantly, both UNI2-h and CONCH adopt the Vision Transformer [48] as model backbone, which, while effective, introduces a significant number of trainable parameters (630.76M for UNI2-h and 86.57M for CONCH). In contrast, HistoMoCo employs ResNet-50 [43] with only 25.56M parameters. Notably, when compared to CONCH, HistoMoCo achieved superior performance with fewer parameters. We believe that HistoMoCo strikes a balance between performance and computational efficiency, addressing an existing gap in current research.
We implemented DINO [47] and SwAV [16] methods and compared them under the same experimental settings as MoCo, performing Linear Tuning and End-to-End Tuning experiments on the OSCC detection task of the NDB-UFES dataset. The experimental results are shown in Table 5. The experimental results indicate that HistoMoCo outperforms these state-of-the-art (SOTA) models, which is consistent with our hypothesis. Since both DINO and SwAV use natural images for pre-training, the divergence between natural images and histopathological images hinders the generalization of these models to downstream tasks.

5.3.4. Ablation Study

We note that the pre-training dataset, NCT-CRC-HE-100K, consists of histopathological images of colorectal cancer, whereas our target task focuses on OSCC detection. However, OSCC and colorectal cancer exhibit significant anatomical and morphological differences. Therefore, we conduct an ablation study on NCT-CRC-HE-100K to evaluate its effectiveness during the pretraining stage. The experimental results, as presented in Table 6, indicate that the removal of the NCT-CRC-HE-100K dataset leads to a significant performance drop in both Linear Tuning and End-to-End Tuning. We attribute this to HistoMoCo learning relevant histopathological representations from the NCT-CRC-HE-100K dataset, which are subsequently utilized for OSCC detection. Ideally, we believe that using only OSCC images for pre-training would yield the best results. However, due to the scarcity of such data, we resorted to colorectal cancer histopathological images as a related alternative to minimize the distribution gap between pre-training and fine-tuning datasets.

5.3.5. Sensitivity Analysis

We further analyze several hyperparameters in HistoMoCo, including the momentum update rate (moco-m), the queue size (moco-k), and the projection head size (moco-dim), with the experimental results presented in Table 7, Table 8 and Table 9. In this experiment, we adhere to the controlled variable method, analyzing only a single hyperparameter in each experiment. When varying a specific hyperparameter, all other hyperparameters are set to their default values as specified in the methodology, i.e., moco-m = 0.999, moco-k = 2^16, and moco-dim = 128 (see Section 4.2). The results indicate that the hyperparameters queue size and projection head size have minimal impact on the final model performance, particularly in the case of End-to-End Tuning. However, the momentum update rate proves to be a crucial parameter. Our experiments suggest that selecting a relatively large momentum update rate (e.g., 0.999) is essential for maintaining a robust MoCo pre-training process.

5.3.6. Robustness of HistoMoCo Against Data Insufficiency

We evaluate the robustness of HistoMoCo against insufficient training samples using the NDB-UFES dataset with ResNet-18. To simulate data insufficiency, we reduce the training set from 80% of the entire dataset to 50%/40%/30%/20%/10%, while keeping the test set fixed for a fair comparison. Experiments are conducted across these different settings, and the performances of AUROC and AUPRC are plotted in Figure 5. As shown in Figure 5, HistoMoCo consistently outperforms the ImageNet pre-training solution in all settings. More importantly, with Linear Tuning, ImageNet achieved an AUROC of 89.32% using 40% of the training data, whereas HistoMoCo achieved an AUROC of 89.58% using only 10% of the training data. Similarly, with End-to-End Tuning, ImageNet reached an AUROC of 97.08% using 50% of the training data, while HistoMoCo attained an AUROC of 97.16% using only 40% of the training data. This further demonstrates that HistoMoCo enables deep learning models to achieve higher performance with less training data, reducing reliance on manually labeled data.

6. Conclusions

We introduce HistoMoCo, a method that provides high-quality representations and transferable initializations for interpreting histopathological images, tackling the challenge of distributional divergence between histopathological and natural images. Despite the substantial differences in data and task properties between natural image classification and histopathological interpretation, HistoMoCo successfully adapts the MoCo pre-training approach to histopathological images. This highlights the potential for self-supervised methods to extend beyond natural image classification contexts.
We validated the effectiveness of HistoMoCo across various histopathological images, using the pre-training dataset NCT-CRC-HE-100K and two OSCC datasets: Oral Histopathology and NDB-UFES. Extensive experiments demonstrated the model's generalization capability. Notably, the images in the external OSCC datasets were not included in the unsupervised pre-training of HistoMoCo and differed in anatomical sites, as the pre-training images were sourced from the rectum and colon.

Author Contributions

Methodology, W.L. and J.Z.; Formal analysis, Y.H. and B.J.; Writing—original draft, W.L.; Writing—review & editing, W.L.; Visualization, W.L.; Project administration, M.G. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. U23A20468).

Data Availability Statement

These data were derived from the following resources available in the public domain: [A histopathological image repository of normal epithelium of Oral Cavity and Oral Squamous Cell Carcinoma, https://data.mendeley.com/datasets/ftmp4cvtmb/1 (accessed on 12 March 2025); NDB-UFES: An oral cancer and leukoplakia dataset composed of histopathological images and patient data, https://data.mendeley.com/datasets/bbmmm4wgr8/4 (accessed on 12 March 2025); NCT-CRC-HE-100K, https://zenodo.org/records/1214456 (accessed on 12 March 2025); EBHI-Seg, https://figshare.com/articles/dataset/EBHI-SEG/21540159/1?file=38179080 (accessed on 12 March 2025)]. The complete datasets, code and pre-trained parameters are publicly accessible and documented publicly on Github: https://github.com/Heyffff/HistoMoCo (accessed on 12 March 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mohan, R.; Rama, A.; Raja, R.K.; Shaik, M.R.; Khan, M.; Shaik, B.; Rajinikanth, V. OralNet: Fused optimal deep features framework for oral squamous cell carcinoma detection. Biomolecules 2023, 13, 1090. [Google Scholar] [CrossRef] [PubMed]
  2. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2020, 70, 313. [Google Scholar]
  3. Van der Laak, J.; Litjens, G.; Ciompi, F. Deep learning in histopathology: The path to the clinic. Nat. Med. 2021, 27, 775–784. [Google Scholar] [CrossRef] [PubMed]
  4. Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nat. Med. 2018, 24, 1559–1567. [Google Scholar] [CrossRef]
  5. Fu, Q.; Chen, Y.; Li, Z.; Jing, Q.; Hu, C.; Liu, H.; Bao, J.; Hong, Y.; Shi, T.; Li, K.; et al. A deep learning algorithm for detection of oral cavity squamous cell carcinoma from photographic images: A retrospective study. EClinicalMedicine 2020, 27, 100558. [Google Scholar] [CrossRef]
  6. Albalawi, E.; Thakur, A.; Ramakrishna, M.T.; Bhatia Khan, S.; SankaraNarayanan, S.; Almarri, B.; Hadi, T.H. Oral squamous cell carcinoma detection using EfficientNet on histopathological images. Front. Med. 2024, 10, 1349336. [Google Scholar] [CrossRef]
  7. Gao, J.; Zhu, Y.; Wang, W.; Wang, Z.; Dong, G.; Tang, W.; Wang, H.; Wang, Y.; Harrison, E.M.; Ma, L. A comprehensive benchmark for COVID-19 predictive modeling using electronic health records in intensive care. Patterns 2024, 5, 100951. [Google Scholar] [CrossRef]
  8. Ma, L.; Zhang, C.; Gao, J.; Jiao, X.; Yu, Z.; Zhu, Y.; Wang, T.; Ma, X.; Wang, Y.; Tang, W.; et al. Mortality prediction with adaptive feature importance recalibration for peritoneal dialysis patients. Patterns 2023, 4, 100892. [Google Scholar] [CrossRef]
  9. Claudio Quiros, A.; Coudray, N.; Yeaton, A.; Yang, X.; Liu, B.; Le, H.; Chiriboga, L.; Karimkhan, A.; Narula, N.; Moore, D.A.; et al. Mapping the landscape of histomorphological cancer phenotypes using self-supervised learning on unannotated pathology slides. Nat. Commun. 2024, 15, 4596. [Google Scholar] [CrossRef]
  10. Ananthakrishnan, B.; Shaik, A.; Kumar, S.; Narendran, S.; Mattu, K.; Kavitha, M.S. Automated detection and classification of oral squamous cell carcinoma using deep neural networks. Diagnostics 2023, 13, 918. [Google Scholar] [CrossRef]
  11. Kavyashree, C.; Vimala, H.; Shreyas, J. Improving oral cancer detection using pretrained model. In Proceedings of the 2022 IEEE 6th Conference on Information and Communication Technology (CICT), Gwalior, India, 18–20 November 2022; pp. 1–5. [Google Scholar]
  12. Redie, D.K.; Bilgaiyan, S.; Sagnika, S. Oral cancer detection using transfer learning-based framework from histopathology images. J. Electron. Imaging 2023, 32, 053004. [Google Scholar] [CrossRef]
  13. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 13–18 July 2020; pp. 1597–1607. [Google Scholar]
  14. Li, Z.; Tang, H.; Peng, Z.; Qi, G.J.; Tang, J. Knowledge-guided semantic transfer network for few-shot image recognition. IEEE Trans. Neural Netw. Learn. Syst. 2023. [Google Scholar] [CrossRef]
  15. Liu, C.; Fu, Y.; Xu, C.; Yang, S.; Li, J.; Wang, C.; Zhang, L. Learning a few-shot embedding model with contrastive learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 2–9 February 2021; Volume 35, pp. 8635–8643. [Google Scholar]
  16. Caron, M.; Misra, I.; Mairal, J.; Goyal, P.; Bojanowski, P.; Joulin, A. Unsupervised learning of visual features by contrasting cluster assignments. Adv. Neural Inf. Process. Syst. 2020, 33, 9912–9924. [Google Scholar]
  17. He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 9729–9738. [Google Scholar]
  18. Misra, I.; Maaten, L.V.d. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 6707–6717. [Google Scholar]
  19. Sowrirajan, H.; Yang, J.; Ng, A.Y.; Rajpurkar, P. Moco pretraining improves representation and transferability of chest X-ray models. In Proceedings of the Medical Imaging with Deep Learning, Lübeck, Germany, 7–9 July 2021; pp. 728–744. [Google Scholar]
  20. Liao, W.; Xiong, H.; Wang, Q.; Mo, Y.; Li, X.; Liu, Y.; Chen, Z.; Huang, S.; Dou, D. Muscle: Multi-task self-supervised continual learning to pre-train deep models for X-ray images of multiple body parts. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; Springer: Berlin/Heidelberg, Germany; pp. 151–161. [Google Scholar]
  21. Liao, W.; Wang, Q.; Li, X.; Liu, Y.; Chen, Z.; Huang, S.; Dou, D.; Xu, Y.; Xiong, H. MTPret: Improving X-ray Image Analytics with Multi-Task Pre-training. IEEE Trans. Artif. Intell. 2024, 5, 4799–4812. [Google Scholar] [CrossRef]
  22. Chaitanya, K.; Erdil, E.; Karani, N.; Konukoglu, E. Contrastive learning of global and local features for medical image segmentation with limited annotations. Adv. Neural Inf. Process. Syst. 2020, 33, 12546–12558. [Google Scholar]
  23. Dufumier, B.; Gori, P.; Victor, J.; Grigis, A.; Wessa, M.; Brambilla, P.; Favre, P.; Polosan, M.; Mcdonald, C.; Piguet, C.M.; et al. Contrastive learning with continuous proxy meta-data for 3D MRI classification. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part II 24. Springer: Berlin/Heidelberg, Germany, 2021; pp. 58–68. [Google Scholar]
  24. Hollon, T.; Jiang, C.; Chowdury, A.; Nasir-Moin, M.; Kondepudi, A.; Aabedi, A.; Adapa, A.; Al-Holou, W.; Heth, J.; Sagher, O.; et al. Artificial-intelligence-based molecular classification of diffuse gliomas using rapid, label-free optical imaging. Nat. Med. 2023, 29, 828–832. [Google Scholar] [CrossRef]
  25. Nakhli, R.; Darbandsari, A.; Farahani, H.; Bashashati, A. Ccrl: Contrastive cell representation learning. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 397–407. [Google Scholar]
  26. Wu, H.; Wang, Z.; Song, Y.; Yang, L.; Qin, J. Cross-patch dense contrastive learning for semi-supervised segmentation of cellular nuclei in histopathologic images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 21–24 June 2022; pp. 11666–11675. [Google Scholar]
  27. Yang, M.; Yang, Y.; Xie, C.; Ni, M.; Liu, J.; Yang, H.; Mu, F.; Wang, J. Contrastive learning enables rapid mapping to multimodal single-cell atlas of multimillion scale. Nat. Mach. Intell. 2022, 4, 696–709. [Google Scholar] [CrossRef]
  28. Li, J.; Zheng, Y.; Wu, K.; Shi, J.; Xie, F.; Jiang, Z. Lesion-aware contrastive representation learning for histopathology whole slide images analysis. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 273–282. [Google Scholar]
  29. Wang, X.; Yang, S.; Zhang, J.; Wang, M.; Zhang, J.; Yang, W.; Huang, J.; Han, X. Transformer-based unsupervised contrastive learning for histopathological image classification. Med Image Anal. 2022, 81, 102559. [Google Scholar] [CrossRef]
  30. Chen, R.J.; Ding, T.; Lu, M.Y.; Williamson, D.F.; Jaume, G.; Song, A.H.; Chen, B.; Zhang, A.; Shao, D.; Shaban, M.; et al. Towards a general-purpose foundation model for computational pathology. Nat. Med. 2024, 30, 850–862. [Google Scholar] [CrossRef]
  31. Lu, M.Y.; Chen, B.; Williamson, D.F.; Chen, R.J.; Liang, I.; Ding, T.; Jaume, G.; Odintsov, I.; Le, L.P.; Gerber, G.; et al. A visual-language foundation model for computational pathology. Nat. Med. 2024, 30, 863–874. [Google Scholar] [CrossRef]
  32. Vorontsov, E.; Bozkurt, A.; Casson, A.; Shaikovski, G.; Zelechowski, M.; Liu, S.; Severson, K.; Zimmermann, E.; Hall, J.; Tenenholtz, N.; et al. Virchow: A million-slide digital pathology foundation model. arXiv 2023, arXiv:2309.07778. [Google Scholar]
  33. Xiang, J.; Wang, X.; Zhang, X.; Xi, Y.; Eweje, F.; Chen, Y.; Li, Y.; Bergstrom, C.; Gopaulchan, M.; Kim, T.; et al. A vision–language foundation model for precision oncology. Nature 2025, 638, 769–778. [Google Scholar] [CrossRef]
  34. Wu, L.; Fang, L.; He, X.; He, M.; Ma, J.; Zhong, Z. Querying labeled for unlabeled: Cross-image semantic consistency guided semi-supervised semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 8827–8844. [Google Scholar] [CrossRef] [PubMed]
  35. Cao, Z.; Xu, L.; Chen, D.Z.; Gao, H.; Wu, J. A robust shape-aware rib fracture detection and segmentation framework with contrastive learning. IEEE Trans. Multimed. 2023, 25, 1584–1591. [Google Scholar] [CrossRef]
  36. Kather, J.N.; Krisam, J.; Charoentong, P.; Luedde, T.; Herpel, E.; Weis, C.A.; Gaiser, T.; Marx, A.; Valous, N.A.; Ferber, D.; et al. Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study. PLoS Med. 2019, 16, e1002730. [Google Scholar] [CrossRef]
  37. Shi, L.; Li, X.; Hu, W.; Chen, H.; Chen, J.; Fan, Z.; Gao, M.; Jing, Y.; Lu, G.; Ma, D.; et al. EBHI-Seg: A Novel Enteroscope Biopsy Histopathological Haematoxylin and Eosin Image Dataset for Image Segmentation Tasks. arXiv 2022, arXiv:2212.00532. [Google Scholar] [CrossRef]
  38. Ribeiro-de Assis, M.C.F.; Soares, J.P.; de Lima, L.M.; de Barros, L.A.P.; Grão-Velloso, T.R.; Krohling, R.A.; Camisasca, D.R. NDB-UFES: An oral cancer and leukoplakia dataset composed of histopathological images and patient data. Data Brief 2023, 48, 109128. [Google Scholar] [CrossRef]
  39. Rahman, T.Y. A histopathological image repository of normal epithelium of oral cavity and oral squamous cell carcinoma. Mendeley Data 2019, 1. [Google Scholar] [CrossRef]
  40. Chen, X.; Fan, H.; Girshick, R.; He, K. Improved baselines with momentum contrastive learning. arXiv 2020, arXiv:2003.04297. [Google Scholar]
  41. Raghu, M.; Zhang, C.; Kleinberg, J.; Bengio, S. Transfusion: Understanding transfer learning for medical imaging. In Advances in Neural Information Processing Systems; Springer: Berlin/Heidelberg, Germany, 2019; Volume 32. [Google Scholar]
  42. Lu, S.; Chen, Y.; Chen, Y.; Li, P.; Sun, J.; Zheng, C.; Zou, Y.; Liang, B.; Li, M.; Jin, Q.; et al. General lightweight framework for vision foundation model supporting multi-task and multi-center medical image analysis. Nat. Commun. 2025, 16, 2097. [Google Scholar] [CrossRef]
  43. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  44. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
  45. Bachman, P.; Hjelm, R.D.; Buchwalter, W. Learning representations by maximizing mutual information across views. Adv. Neural Inf. Process. Syst. 2019, 32. Available online: https://dl.acm.org/doi/abs/10.5555/3454287.3455679 (accessed on 12 March 2025).
  46. Kornblith, S.; Shlens, J.; Le, Q.V. Do better imagenet models transfer better? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2661–2671. [Google Scholar]
  47. Caron, M.; Touvron, H.; Misra, I.; Jégou, H.; Mairal, J.; Bojanowski, P.; Joulin, A. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 9650–9660. [Google Scholar]
  48. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In Proceedings of the International Conference on Learning Representations, Addis Ababa, Ethiopia, 30 April 2020. [Google Scholar]
Figure 1. (a) Examples of natural images (from the ImageNet dataset) and histopathological images (from three real-world datasets). (b) The divergence between the visual domains of natural images and histopathological images.
Figure 2. The framework of HistoMoCo. (a) HistoMoCo maximizes agreement of embeddings generated by different augmentations of the same histological image. (b) Two fine-tuning solutions used in HistoMoCo. HistoMoCo employs ResNet-18 and ResNet-50 as encoders, using a linear classifier as the decoder.
Figure 3. HistoMoCo training pipeline. MoCo acts as a self-supervised training agent. The model is subsequently fine-tuned using the NCT dataset and two other external OSCC datasets.
Figure 4. Receiver operating characteristic (ROC) curves with AUC values (95% CI) for two OSCC detection tasks on the NDB-UFES and Oral Histopathology datasets.
Figure 5. AUROC on the NDB-UFES dataset for ResNet-18 with ImageNet and HistoMoCo pre-training using various sizes of fine-tuning datasets. HistoMoCo achieves higher performance than ImageNet using a smaller dataset.
Table 1. Statistics of the histopathological datasets used in HistoMoCo.
Dataset | Resolution | Task | Patients | Images | Targets
Used for pre-training of HistoMoCo
NCT-CRC-HE-100K [36] | 224 × 224 | Classification | 86 | 100,000 | 9
EBHI-Seg [37] | 224 × 224 | Segmentation | N/A | 2228 | 6
Used for fine-tuning of HistoMoCo
NDB-UFES [38] | 512 × 512 | Classification | 237 | 3763 | 2
Oral Histopathology [39] | 2048 × 1536 | Classification | N/A | 1224 | 2
Table 2. Model performance on the colorectal tissue classification task of the NCT-CRC-HE-100K dataset. In each experimental setup, the best-performing results are shown in bold, and the second-best results are underlined. When using ResNet-50 as the backbone with End-to-End Tuning, HistoMoCo achieves a 100% Macro-AUROC and Micro-AUROC. This is due to rounding to three decimal places. In fact, under this setting, HistoMoCo achieves a Macro-AUROC of 0.999746 and a Micro-AUROC of 0.999557.
Encoder | Fine-Tuning | Pre-Training | MaROC. | MiROC. | MaPRC. | MiPRC. | Acc.
ResNet-18 | Linear | Scratch | 0.888 | 0.887 | 0.508 | 0.453 | 0.562
ResNet-18 | Linear | ImageNet | 0.986 | 0.982 | 0.920 | 0.895 | 0.919
ResNet-18 | Linear | HistoMoCo | 0.982 | 0.984 | 0.890 | 0.898 | 0.985
ResNet-18 | End-to-End | Scratch | 0.953 | 0.961 | 0.733 | 0.755 | 0.887
ResNet-18 | End-to-End | ImageNet | 0.997 | 0.998 | 0.978 | 0.986 | 0.987
ResNet-18 | End-to-End | HistoMoCo | 0.998 | 0.998 | 0.989 | 0.986 | 0.991
ResNet-50 | Linear | Scratch | 0.794 | 0.572 | 0.379 | 0.142 | 0.105
ResNet-50 | Linear | ImageNet | 0.992 | 0.987 | 0.951 | 0.926 | 0.912
ResNet-50 | Linear | MoCo | 0.994 | 0.995 | 0.963 | 0.968 | 0.965
ResNet-50 | Linear | HistoMoCo | 0.998 | 0.999 | 0.990 | 0.990 | 0.992
ResNet-50 | End-to-End | Scratch | 0.970 | 0.967 | 0.824 | 0.817 | 0.879
ResNet-50 | End-to-End | ImageNet | 0.999 | 0.999 | 0.993 | 0.995 | 0.991
ResNet-50 | End-to-End | MoCo | 0.998 | 0.999 | 0.991 | 0.992 | 0.986
ResNet-50 | End-to-End | HistoMoCo | 1.000 | 1.000 | 0.998 | 0.997 | 0.994
Table 3. Model performance on the OSCC detection task of the NDB-UFES dataset. In each experimental setup, the highest-performing results are highlighted in bold, while the second-best results are marked with an underline.
Encoder | Fine-Tuning | Pre-Training | ROC. | PRC. | Acc. | Pre. | Sen. | Spe. | F1
ResNet-18 | Linear | Scratch | 0.737 | 0.869 | 0.298 | 0.000 | 0.000 | 1.000 | 0.000
ResNet-18 | Linear | ImageNet | 0.917 | 0.962 | 0.855 | 0.874 | 0.927 | 0.684 | 0.900
ResNet-18 | Linear | HistoMoCo | 0.949 | 0.978 | 0.852 | 0.937 | 0.845 | 0.866 | 0.889
ResNet-18 | End-to-End | Scratch | 0.864 | 0.936 | 0.812 | 0.843 | 0.900 | 0.604 | 0.870
ResNet-18 | End-to-End | ImageNet | 0.983 | 0.993 | 0.928 | 0.938 | 0.961 | 0.850 | 0.949
ResNet-18 | End-to-End | HistoMoCo | 0.991 | 0.996 | 0.947 | 0.953 | 0.973 | 0.888 | 0.963
ResNet-50 | Linear | Scratch | 0.713 | 0.840 | 0.298 | 0.000 | 0.000 | 1.000 | 0.000
ResNet-50 | Linear | ImageNet | 0.951 | 0.974 | 0.887 | 0.928 | 0.909 | 0.834 | 0.918
ResNet-50 | Linear | MoCo | 0.952 | 0.979 | 0.893 | 0.919 | 0.930 | 0.807 | 0.924
ResNet-50 | Linear | HistoMoCo | 0.979 | 0.990 | 0.935 | 0.935 | 0.948 | 0.904 | 0.953
ResNet-50 | End-to-End | Scratch | 0.870 | 0.941 | 0.802 | 0.850 | 0.873 | 0.636 | 0.861
ResNet-50 | End-to-End | ImageNet | 0.993 | 0.997 | 0.957 | 0.966 | 0.973 | 0.920 | 0.969
ResNet-50 | End-to-End | MoCo | 0.991 | 0.996 | 0.951 | 0.957 | 0.973 | 0.898 | 0.965
ResNet-50 | End-to-End | HistoMoCo | 0.994 | 0.997 | 0.957 | 0.964 | 0.975 | 0.914 | 0.969
Table 4. Model performance on the OSCC detection task of the Oral Histopathology dataset. In each experimental setup, the best-performing results are highlighted in bold, while the second-best results are underlined.
Encoder | Fine-Tuning | Pre-Training | ROC. | PRC. | Acc. | Pre. | Sen. | Spe. | F1
ResNet-18 | Linear | Scratch | 0.728 | 0.888 | 0.738 | 0.869 | 0.768 | 0.645 | 0.816
ResNet-18 | Linear | ImageNet | 0.834 | 0.939 | 0.770 | 0.811 | 0.905 | 0.355 | 0.856
ResNet-18 | Linear | HistoMoCo | 0.855 | 0.954 | 0.794 | 0.805 | 0.958 | 0.290 | 0.875
ResNet-18 | End-to-End | Scratch | 0.862 | 0.956 | 0.802 | 0.812 | 0.958 | 0.323 | 0.879
ResNet-18 | End-to-End | ImageNet | 0.924 | 0.977 | 0.857 | 0.889 | 0.926 | 0.645 | 0.907
ResNet-18 | End-to-End | HistoMoCo | 0.948 | 0.984 | 0.865 | 0.906 | 0.916 | 0.710 | 0.911
ResNet-50 | Linear | Scratch | 0.721 | 0.876 | 0.246 | 0.000 | 0.000 | 1.000 | 0.000
ResNet-50 | Linear | ImageNet | 0.848 | 0.945 | 0.770 | 0.875 | 0.811 | 0.645 | 0.842
ResNet-50 | Linear | MoCo | 0.921 | 0.973 | 0.778 | 0.772 | 1.000 | 0.097 | 0.872
ResNet-50 | Linear | HistoMoCo | 0.915 | 0.969 | 0.810 | 0.832 | 0.937 | 0.419 | 0.881
ResNet-50 | End-to-End | Scratch | 0.859 | 0.956 | 0.786 | 0.827 | 0.905 | 0.419 | 0.864
ResNet-50 | End-to-End | ImageNet | 0.882 | 0.958 | 0.825 | 0.876 | 0.895 | 0.613 | 0.885
ResNet-50 | End-to-End | MoCo | 0.939 | 0.980 | 0.881 | 0.892 | 0.958 | 0.645 | 0.924
ResNet-50 | End-to-End | HistoMoCo | 0.941 | 0.982 | 0.873 | 0.883 | 0.958 | 0.613 | 0.919
Table 5. Model performance on the OSCC detection task of the NDB-UFES dataset using HistoMoCo and SOTA models. In each experimental setup, the highest-performing results are highlighted in bold, while the second-best results are marked with an underline.
Methods | Encoder | Param. | ROC. | PRC. | Acc. | Pre. | Sen. | Spe. | F1
Linear Tuning
UNI2-h [30] | ViT-h/14-reg8 | 630.76M | 0.982 | 0.992 | 0.936 | 0.926 | 0.972 | 0.902 | 0.948
CONCH [31] | ViT-b/16 | 86.57M | 0.976 | 0.984 | 0.930 | 0.925 | 0.963 | 0.902 | 0.944
DINO [47] | ResNet-50 | 25.56M | 0.970 | 0.985 | 0.917 | 0.960 | 0.920 | 0.909 | 0.940
SwAV [16] | ResNet-50 | 25.56M | 0.958 | 0.981 | 0.896 | 0.896 | 0.964 | 0.738 | 0.929
HistoMoCo | ResNet-50 | 25.56M | 0.979 | 0.990 | 0.935 | 0.935 | 0.948 | 0.904 | 0.953
End-to-End Tuning
DINO [47] | ResNet-50 | 25.56M | 0.975 | 0.988 | 0.925 | 0.970 | 0.930 | 0.920 | 0.940
SwAV [16] | ResNet-50 | 25.56M | 0.962 | 0.986 | 0.905 | 0.940 | 0.925 | 0.850 | 0.930
HistoMoCo | ResNet-50 | 25.56M | 0.994 | 0.997 | 0.957 | 0.964 | 0.975 | 0.914 | 0.969
Table 6. Ablation study of NCT-CRC-HE-100K in HistoMoCo. In each experimental setup, the better-performing results are highlighted in bold.
Methods | ROC. | PRC. | Acc. | Pre. | Sen. | Spe. | F1
Linear Tuning
HistoMoCo w/o NCT-CRC-HE-100K | 0.946 | 0.973 | 0.851 | 0.937 | 0.845 | 0.866 | 0.889
HistoMoCo | 0.979 | 0.990 | 0.935 | 0.935 | 0.948 | 0.904 | 0.953
End-to-End Tuning
HistoMoCo w/o NCT-CRC-HE-100K | 0.983 | 0.990 | 0.945 | 0.953 | 0.973 | 0.888 | 0.963
HistoMoCo | 0.994 | 0.997 | 0.957 | 0.964 | 0.975 | 0.914 | 0.969
Table 7. Sensitivity analysis of the momentum update rate (moco-m) in HistoMoCo.
Setting | ROC. | PRC. | Acc. | Pre. | Sen. | Spe. | F1
Linear Tuning
moco-m = 0.999 | 0.979 | 0.990 | 0.935 | 0.935 | 0.948 | 0.904 | 0.953
moco-m = 0.9 | 0.971 | 0.987 | 0.928 | 0.948 | 0.950 | 0.877 | 0.949
moco-m = 0.5 | 0.599 | 0.734 | 0.703 | 0.703 | 1.000 | 0.005 | 0.826
moco-m = 0.1 | 0.646 | 0.785 | 0.705 | 0.716 | 0.961 | 0.102 | 0.821
End-to-End Tuning
moco-m = 0.999 | 0.994 | 0.997 | 0.957 | 0.964 | 0.975 | 0.914 | 0.969
moco-m = 0.9 | 0.996 | 0.998 | 0.968 | 0.973 | 0.982 | 0.936 | 0.977
moco-m = 0.5 | 0.745 | 0.875 | 0.742 | 0.777 | 0.886 | 0.401 | 0.828
moco-m = 0.1 | 0.764 | 0.889 | 0.751 | 0.778 | 0.902 | 0.396 | 0.836
Table 8. Sensitivity analysis of the queue size (moco-k) in HistoMoCo.
Setting | ROC. | PRC. | Acc. | Pre. | Sen. | Spe. | F1
Linear Tuning
moco-k = 2^16 | 0.979 | 0.990 | 0.935 | 0.935 | 0.948 | 0.904 | 0.953
moco-k = 2^14 | 0.983 | 0.993 | 0.951 | 0.979 | 0.950 | 0.952 | 0.964
moco-k = 2^12 | 0.977 | 0.989 | 0.930 | 0.956 | 0.943 | 0.898 | 0.950
moco-k = 2^10 | 0.982 | 0.992 | 0.933 | 0.959 | 0.945 | 0.904 | 0.952
End-to-End Tuning
moco-k = 2^16 | 0.994 | 0.997 | 0.957 | 0.964 | 0.975 | 0.914 | 0.969
moco-k = 2^14 | 0.994 | 0.998 | 0.962 | 0.964 | 0.982 | 0.914 | 0.973
moco-k = 2^12 | 0.994 | 0.998 | 0.960 | 0.973 | 0.970 | 0.936 | 0.972
moco-k = 2^10 | 0.994 | 0.998 | 0.965 | 0.973 | 0.977 | 0.936 | 0.975
Table 9. Sensitivity analysis of the projection head size (moco-dim) in HistoMoCo.
Setting | ROC. | PRC. | Acc. | Pre. | Sen. | Spe. | F1
Linear Tuning
moco-dim = 64 | 0.981 | 0.992 | 0.936 | 0.957 | 0.952 | 0.898 | 0.954
moco-dim = 128 | 0.979 | 0.990 | 0.935 | 0.935 | 0.948 | 0.904 | 0.953
moco-dim = 256 | 0.981 | 0.992 | 0.936 | 0.959 | 0.950 | 0.904 | 0.954
End-to-End Tuning
moco-dim = 64 | 0.995 | 0.998 | 0.963 | 0.964 | 0.984 | 0.914 | 0.974
moco-dim = 128 | 0.994 | 0.997 | 0.957 | 0.964 | 0.975 | 0.914 | 0.969
moco-dim = 256 | 0.994 | 0.998 | 0.963 | 0.979 | 0.968 | 0.952 | 0.974
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Liao, W.; He, Y.; Jiang, B.; Zhao, J.; Gao, M.; Zhang, X. HistoMoCo: Momentum Contrastive Learning Pre-Training on Unlabeled Histopathological Images for Oral Squamous Cell Carcinoma Detection. Electronics 2025, 14, 1252. https://doi.org/10.3390/electronics14071252

