Article

Efficient Tissue Detection in Whole-Slide Images Using Classical and Hybrid Methods: Benchmark on TCGA Cancer Cohorts

1 Faculty of Automatic Control and Computers, National University of Science and Technology POLITEHNICA Bucharest, 060042 Bucharest, Romania
2 Victor Babeş National Institute of Research and Development in Pathology & Biomedical Sciences, Carol Davila University of Medicine and Pharmacy, 050474 Bucharest, Romania
* Author to whom correspondence should be addressed.
Cancers 2025, 17(17), 2918; https://doi.org/10.3390/cancers17172918
Submission received: 22 July 2025 / Revised: 28 August 2025 / Accepted: 1 September 2025 / Published: 5 September 2025
(This article belongs to the Collection Artificial Intelligence and Machine Learning in Cancer Research)

Simple Summary

A crucial quality-control step in digital pathology is to identify tissue regions within a whole-slide image before deciding where AI models should operate. In this study, we introduce Double-Pass, a new annotation-free method that runs on standard CPUs and nearly matches the performance of a fully supervised UNet++ on 3322 annotated TCGA slides (mIoU 0.826 vs. 0.871) while processing each slide in just 0.20 s compared with the UNet++ model’s 2.43 s CPU inference. We also compared Double-Pass against classical Otsu and K-means methods to demonstrate its advantages. By providing a fast, label-free quality-control step, Double-Pass ensures that subsequent AI models operate only on relevant tissue regions without the burden of manual annotation.

Abstract

Background: Whole-slide images (WSIs) are crucial in pathology for digitizing tissue slides, enabling pathologists and AI models to analyze cancer patterns at gigapixel scale. However, these gigapixel images often contain artifacts and non-tissue regions that slow AI processing, consume resources, and introduce errors such as false positives. Tissue detection is the essential first step in WSI pipelines to focus analysis on relevant areas, but deep learning detection methods require extensive manual annotations. Methods: This study benchmarks four thumbnail-level tissue detection methods (Otsu's thresholding, K-Means clustering, our novel annotation-free Double-Pass hybrid, and GrandQC's UNet++) on 3322 TCGA WSIs from nine cancer cohorts, evaluating accuracy, speed, and efficiency. Results: Double-Pass achieved an mIoU of 0.826, close to the deep learning GrandQC model's 0.871, while processing slides on a CPU in just 0.203 s per slide, markedly faster than GrandQC's 2.431 s per slide on the same hardware. As an annotation-free, CPU-optimized method, it therefore enables efficient, scalable thumbnail-level tissue detection on standard workstations. Conclusions: The scalable, annotation-free Double-Pass pipeline reduces computational bottlenecks and facilitates high-throughput WSI preprocessing, enabling faster and more cost-effective integration of AI into clinical pathology and research workflows. By comparing Double-Pass against established methods, this benchmark demonstrates it as a fast, robust, and annotation-free alternative to supervised approaches.

1. Introduction

Tissue detection is the critical first step in WSI pipelines, creating a mask to focus processing on relevant areas. In cancer research, this is especially vital due to heterogeneous staining (e.g., faint areas in necrotic tumors) and variability across scanners. Traditional methods are fast but fail on subtle tissues, while deep learning excels but demands annotated data—a challenge in digital pathology, where expert labeling is time-consuming and scarce for diverse cancers. Annotation burdens can delay projects, particularly in rare cancers where data is limited.
In this work, we introduce Double-Pass, a novel annotation-free hybrid method for tissue detection in WSIs, which combines two classical yet complementary strategies to enhance robustness while maintaining CPU-level efficiency. Unlike deep learning methods that require extensive annotations and GPU resources, Double-Pass is entirely unsupervised and achieves performance close to state-of-the-art models such as GrandQC’s UNet++. We benchmark our approach alongside three other methods: Otsu thresholding, K-Means clustering, and GrandQC, on 3322 annotated TCGA WSIs across nine cancer cohorts. Compared to prior tissue-detection benchmarks, our study evaluates a broader range of cancer types using publicly available tissue-versus-background masks released by GrandQC [1]. This design ensures that Double-Pass and other methods are evaluated on the same diverse dataset, highlighting their robustness and reproducibility across different cancers. By emphasizing accuracy, inference time, and resource efficiency, this study provides a comprehensive comparison and positions Double-Pass as a practical and scalable tool for preprocessing in digital pathology and cancer AI pipelines.

2. Literature Review

Tissue detection methods applied to WSIs are essential for digital pathology workflows. They filter out irrelevant regions to streamline downstream AI analyses, regardless of the target task or cancer type (e.g., breast or bladder cancer). Classical methods, such as Otsu's thresholding [2], are fast and annotation-free but struggle with cancer-specific challenges like variable staining in heterogeneous tumors. Song et al.'s EntropyMasker [3] improves on this for porous tissues, demonstrating higher sensitivity and Jaccard scores, but its performance on faint cancer edges remains limited. Chen and Yang's tissueloc [4] combines grayscale thresholding with morphological operations for rapid localization, which is ideal for resource-constrained digital pathology labs.
Deep learning models offer superior robustness. Bándi et al.’s resolution-agnostic CNN [5] segments tissue across stains and scanners, generalizing well to diverse cancer types. Weng et al.’s GrandQC UNet++ [1] excels in multi-class artifact detection across TCGA, providing high precision for preprocessing in AI-driven cancer analyses. Wang et al. [6] applied CNNs for metastatic breast cancer detection, underscoring the need for accurate tissue masks to avoid false positives in lymph nodes.
Cancer-focused tools like TIAToolbox [7] integrate detection into end-to-end pipelines, yet annotation costs limit scalability. Foundation models like Phikon-v2 [8] and Virchow [9] pre-filter non-tissue, but their deep approaches demand resources. Hybrid methods, like our Double-Pass method, address this challenge by combining classical speed with robustness, without annotations, making them suitable for TCGA-like cancer datasets.
Specific pathology applications further illustrate the reliance on tissue detection as a critical first step. For instance, Ceachi et al. [10] developed an AI-based method for automatic identification of lymphovascular invasion in urothelial carcinomas, where initial tissue detection is essential to isolate relevant regions (e.g., tumor nests and vessels) from background artifacts in WSIs, enabling focused segmentation and reducing false positives in invasion detection. Similarly, Zurac et al. [11] proposed an AI approach for detecting Mycobacterium tuberculosis in Ziehl–Neelsen-stained tissue slides, relying on tissue detection to filter out non-tissue areas and artifacts, thus concentrating computational efforts on stained regions for accurate pathogen identification in tuberculosis diagnostics. These studies highlight how tissue detection underpins AI workflows in diagnosing oncological and infectious diseases, ensuring efficiency and precision in the WSI’s downstream analyses.
Several studies have benchmarked tissue segmentation approaches, providing valuable comparisons. For instance, Bándi et al. [12] compared traditional methods like Foreground Extraction from Structure Information (FESI) with fully convolutional networks (FCNNs) and U-Net architectures on 54 WSIs from various tissues and stains, showing DL's superior accuracy (Jaccard index of approximately 0.93 vs. 0.68 for traditional methods). Similarly, Riasatian et al. [13] evaluated U-Net variants with different backbones for background removal in histopathology images, achieving high performance but requiring annotations and focusing on smaller patches rather than whole-slide benchmarks.
Post-processing techniques have been explored to enhance both classical and DL methods. Marczyk et al. [14] proposed morphological post-processing to refine initial segmentations from thresholding or DL on 197 annotated WSIs, significantly improving accuracy.
Large-scale datasets are crucial for robust benchmarking. Nechaev et al. [15] introduced HISTAI, an open-source dataset with over 60,000 WSIs across diverse tissues, stains, and clinical metadata, supporting tasks like diagnostic modeling. Unlike TCGA’s cancer focus, HISTAI covers broader pathology, but both enable large-scale evaluations.
Quality control (QC) pipelines are incorporating segmentation at an increasing rate. Patil et al. [16] developed a semantic segmentation model for QC, identifying tissue, artifacts, and regions in WSIs, similar to GrandQC but emphasizing artifact detection. This aligns with our use of GrandQC as a benchmark, though our study extends to efficiency comparisons across methods.
Emerging foundation models continue to advance the field. Nicke et al. [17] presented Tissue Concepts v2, a supervised foundation model for WSIs, which was pre-trained on histopathology for tasks including segmentation. Unlike self-supervised models like Phikon-v2, its supervised nature may improve task-specific performance, but it still requires labeled data, further underscoring the value of annotation-free tissue detection methods.
These works underscore the evolution from classical to DL and hybrid methods, yet gaps persist in scalable, annotation-free benchmarks on cancer cohorts like TCGA.

3. Methods

3.1. Dataset and Thumbnail Generation

Our study builds directly on the resources released with the GrandQC project by Weng et al. [1]. The authors curated a heterogeneous training set of H&E-stained whole-slide images (WSIs) drawn from TCGA, all scanned on Leica GT450/AT2/CS2 and Hamamatsu S60/S360 systems at 40× (approximately 0.25 μm/px). Tissue-versus-background masks for these slides were produced semi-automatically in QuPath v0.4.3.
Beyond the training material, GrandQC also open-sources quality-control (QC) masks for the entire TCGA archive under a permissive licence. Leveraging this resource, we compiled a benchmark spanning nine diagnostic cohorts: ACC (Adenomas and Adenocarcinomas), BRCA (nine cancer types: Adenomas and Adenocarcinomas; Adnexal and Skin Appendage Neoplasms; Basal Cell Neoplasms; Complex Epithelial Neoplasms; Cystic, Mucinous and Serous Neoplasms; Ductal and Lobular Neoplasms; Epithelial Neoplasms, NOS; Fibroepithelial Neoplasms; Squamous Cell Neoplasms), CESC (Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma), CHOL (Cholangiocarcinoma), DLBC (Lymphoid Neoplasm Diffuse Large B-cell Lymphoma), ESCA (Esophageal Carcinoma), GBM (Gliomas), HNSC (Head and Neck Squamous Cell Carcinoma), and LIHC (Adenomas and Adenocarcinomas for liver hepatocellular carcinoma). The exact number of slides per cohort is: ACC (206), BRCA (1100), CESC (279), CHOL (39), DLBC (42), ESCA (155), GBM (860), HNSC (274), and LIHC (367), totaling 3322 slides.
Table 1 details the cancer types and organs present in the dataset we used from TCGA.
For every slide, we generate an RGB thumbnail at 10 μm/px (1× objective), matching the native resolution at which the GrandQC tissue detector operates. Working at 10 μm/px trades single-cell resolution for faster processing by down-sampling the WSIs to a size sufficient for tissue-versus-background segmentation. At this scale, the thumbnails no longer capture individual cell morphology, but they still preserve tissue boundaries, structural patterns, and gross artefacts needed for reliable tissue detection. This reduction in resolution cuts data size and I/O by roughly two orders of magnitude, enabling efficient preprocessing while still capturing macro-level cancer features.
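As a concrete illustration, the following minimal Python sketch shows how such a 10 μm/px thumbnail could be generated with OpenSlide. It is not the exact script from our repository; the fallback microns-per-pixel value, the file name, and the downstream detector call are illustrative assumptions.

```python
# Minimal sketch (assumptions noted in the text): produce an RGB thumbnail
# at roughly TARGET_MPP microns per pixel from a whole-slide image.
import numpy as np
import openslide

TARGET_MPP = 10.0  # target resolution in µm/px for tissue detection

def wsi_to_thumbnail(slide_path: str) -> np.ndarray:
    """Return an RGB thumbnail of the slide downsampled to ~TARGET_MPP."""
    slide = openslide.OpenSlide(slide_path)
    # Native microns-per-pixel at level 0 (e.g., ~0.25 for a 40x scan);
    # the 0.25 fallback is an assumption for slides lacking the property.
    mpp = slide.properties.get(openslide.PROPERTY_NAME_MPP_X)
    mpp_x = float(mpp) if mpp else 0.25
    downsample = TARGET_MPP / mpp_x            # e.g., 10 / 0.25 = 40x reduction
    w, h = slide.dimensions
    thumb_size = (max(1, int(w / downsample)), max(1, int(h / downsample)))
    thumb = slide.get_thumbnail(thumb_size)    # PIL RGB image (approximate size)
    slide.close()
    return np.asarray(thumb)

# Hypothetical usage: mask = some_detector(wsi_to_thumbnail("TCGA-XX-XXXX.svs"))
```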

3.2. Tissue Detector Methods

Figure 1 provides a visual overview of the four tissue detection pipelines evaluated in this study. Each method processes low-resolution thumbnails (10 μm/px) of whole-slide images (WSIs) to generate binary tissue masks, where tissue pixels are marked as 255 (white) and background as 0 (black). These masks enable focused analysis on relevant regions, which is crucial for efficient processing in large-scale cancer cohorts like TCGA. The methods were selected to represent a spectrum from classical, annotation-free techniques to deep learning, allowing a balanced comparison in terms of accuracy, speed, and resource requirements. Pseudocode for each is detailed in Appendix A (Algorithms A1–A4), ensuring reproducibility. The figure highlights key steps: color pixel embedding and K-Means clustering (first row), Double-Pass hybrid fusing FilterGrays and downsampled K-Means (second row), Otsu thresholding on grayscale with intensity histogram (third row), and GrandQC's UNet++ deep learning pipeline (fourth row). All operate at thumbnail level to minimize computational load, as full-resolution processing would be infeasible for gigapixel WSIs in oncology workflows.
The pseudocode in the Appendix provides precise implementations, which we discuss here in the context of their operation and suitability for cancer WSIs. For instance, all methods are designed for efficiency on thumbnails, but they differ in how they handle the color and intensity variations common in H&E-stained cancer tissues.

3.2.1. Global Otsu Thresholding

As shown in the third row of Figure 1, Otsu thresholding [2] begins by converting the RGB thumbnail to grayscale, reducing it to a single intensity channel (Algorithm A1, Line 2). This simplifies the problem by focusing on brightness differences, where stained tissue typically appears darker than the background. Otsu's algorithm then computes an intensity histogram and identifies the optimal threshold T* that maximizes between-class variance (Line 3), separating the distribution into tissue and background classes. Pixels below T* are classified as tissue (set to 255 in the mask), while those above are background (set to 0; Line 4).
This method was chosen for its parameter-free nature and sub-second execution on CPUs, making it an ideal baseline for resource-limited digital pathology labs. It works by assuming a bimodal intensity distribution, which is often present in H&E-stained slides. Compared to clustering-based methods, Otsu is faster but relies solely on intensity, making it potentially less robust to variations in staining or artifacts.
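For reference, a minimal Python sketch of this baseline, using OpenCV, could look as follows; it mirrors Algorithm A1 but is not necessarily identical to the TIAToolbox implementation used in our benchmark.

```python
# Minimal sketch of the global Otsu baseline (Algorithm A1).
import cv2
import numpy as np

def otsu_tissue_mask(thumb_rgb: np.ndarray) -> np.ndarray:
    """Binary tissue mask (255 = tissue, 0 = background) from an RGB thumbnail."""
    gray = cv2.cvtColor(thumb_rgb, cv2.COLOR_RGB2GRAY)
    # Otsu selects the threshold T* that maximises between-class variance;
    # THRESH_BINARY_INV marks darker (stained) pixels as foreground.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask
```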

3.2.2. Colour-Statistics K-Means Clustering

Illustrated in the top branch of Figure 1, this method clusters pixels using color statistics to capture both intensity and variation (Algorithm A2). The thumbnail is flattened into a pixel list scaled to [0,1] (Line 2), and for each, mean brightness μ (Line 3) and standard deviation σ (Line 4) across RGB channels are computed, forming a 2D feature matrix (Line 5). K-Means partitions these into two clusters (Line 6), selecting the one with lower average μ as tissue (Line 7). The resulting labels form a mask (set to 255 for tissue; Line 8), refined by morphological closing and opening with a 5x5 kernel to smooth edges and remove noise (Line 9).
We selected this approach because it incorporates texture information (σ) in addition to colour intensity, potentially handling staining variations better than simple thresholding. It works by grouping similar pixels in feature space, adapting to the data distribution. However, processing all pixels can take seconds on large thumbnails. Compared to Otsu, it provides a more feature-rich segmentation but at higher computational cost.
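A minimal Python sketch of this detector, following Algorithm A2, is shown below; the K-Means initialisation settings (n_init, random seed) are assumptions rather than the exact configuration used in our benchmark.

```python
# Minimal sketch of the colour-statistics K-Means detector (Algorithm A2).
import cv2
import numpy as np
from sklearn.cluster import KMeans

def colour_kmeans_mask(thumb_rgb: np.ndarray) -> np.ndarray:
    h, w, _ = thumb_rgb.shape
    pixels = thumb_rgb.reshape(-1, 3).astype(np.float32) / 255.0
    mu = pixels.mean(axis=1)                    # per-pixel mean brightness
    sigma = pixels.std(axis=1)                  # per-pixel RGB variability
    features = np.stack([mu, sigma], axis=1)    # (H*W, 2) feature matrix
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    # Tissue = cluster with the lower average brightness (darker stain).
    tissue_cluster = np.argmin([mu[labels == k].mean() for k in (0, 1)])
    mask = (labels == tissue_cluster).reshape(h, w).astype(np.uint8) * 255
    kernel = np.ones((5, 5), np.uint8)          # 5x5 structuring element
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask
```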

3.2.3. Double-Pass Hybrid Method

The middle branch of Figure 1 depicts Double-Pass, our novel annotation-free hybrid that combines two complementary passes (Algorithm A3). The first pass (FilterGrays) sharpens the thumbnail to enhance edges (Line 2), then marks pixels as tissue if RGB differences exceed 15 levels (Line 3), rejecting uniform grays. Dilation, closing (Line 4), and small-object removal (<5000 pixels; Line 5) refine the mask. The second pass (DownKMeans) downsamples to 25% scale for speed (Line 8), applies K-Means on RGB vectors (Lines 9-11), resizes back (Line 13), and smooths (Line 14). The passes are merged via logical OR (Line 20), followed by final morphology (Line 21).
Double-Pass was developed to combine the strengths of classical methods without requiring annotations or GPUs. It works by fusing color-based artifact rejection with intensity-based detection, aiming for a balanced performance. Compared to single classical methods, it offers improved robustness through hybridization, while remaining efficient on CPUs.
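The sketch below illustrates the Double-Pass fusion in Python, following Algorithm A3. The colour-difference threshold (15 levels), minimum object size (5000 pixels), and 25% downsampling come from the description above, whereas the sharpening kernel and the morphology kernel sizes in the first pass are assumptions; the released repository remains the reference implementation.

```python
# Illustrative sketch of the two Double-Pass passes and their fusion.
import cv2
import numpy as np
from skimage.morphology import remove_small_objects
from sklearn.cluster import KMeans

KERNEL = np.ones((5, 5), np.uint8)  # assumed structuring element

def filter_grays(thumb_rgb: np.ndarray) -> np.ndarray:
    """First pass: keep coloured (non-grey) pixels, reject uniform background."""
    sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], np.float32)
    img = cv2.filter2D(thumb_rgb, -1, sharpen)
    r, g, b = (img[..., c].astype(int) for c in range(3))
    grey = (np.abs(r - g) <= 15) & (np.abs(r - b) <= 15) & (np.abs(g - b) <= 15)
    mask = (~grey).astype(np.uint8)
    mask = cv2.dilate(mask, KERNEL)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, KERNEL)
    return remove_small_objects(mask.astype(bool), min_size=5000)

def down_kmeans(thumb_rgb: np.ndarray) -> np.ndarray:
    """Second pass: K-Means on a 25%-scale copy, upsampled back."""
    h, w, _ = thumb_rgb.shape
    small = cv2.resize(thumb_rgb, (w // 4, h // 4), interpolation=cv2.INTER_AREA)
    pixels = small.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    darker = np.argmin([pixels[labels == k].mean() for k in (0, 1)])
    mask = (labels == darker).reshape(small.shape[:2]).astype(np.uint8)
    mask = cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, KERNEL)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, KERNEL)
    return mask.astype(bool)

def double_pass_mask(thumb_rgb: np.ndarray) -> np.ndarray:
    fused = filter_grays(thumb_rgb) | down_kmeans(thumb_rgb)   # logical OR of passes
    fused = cv2.morphologyEx(fused.astype(np.uint8), cv2.MORPH_CLOSE, KERNEL)
    fused = cv2.morphologyEx(fused, cv2.MORPH_OPEN, KERNEL)
    return fused * 255  # final 0/255 mask
```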

3.2.4. GrandQC UNet++ Model

The bottom-right branch of Figure 1 outlines GrandQC’s UNet++ [1], a deep learning model trained on labeled data for pixel-level prediction (Algorithm A4). The thumbnail is JPEG-compressed (Line 2), tiled into 512x512 patches (Line 3), normalized (Line 5), and fed to UNet++ (Line 6), which uses encoder–decoder architecture with skip connections to learn complex patterns. Probabilities are argmaxed to labels (tissue as 0; Line 7), stitched (Line 8), and converted to a mask (255 for tissue; Line 9).
This serves as a high-accuracy benchmark, chosen for its demonstrated generalization across TCGA data. It works through supervised learning of features from diverse slides. Compared to classical methods, it requires annotations and more resources but can capture intricate patterns.
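To make the tile-and-stitch inference loop concrete, the hedged Python sketch below uses the UNet++ implementation from segmentation_models_pytorch as a stand-in. The encoder, number of classes, and checkpoint path are illustrative assumptions and do not reflect GrandQC's released configuration, which should be obtained from its repository.

```python
# Hedged sketch of thumbnail tiling, ImageNet normalisation, per-tile UNet++
# inference, and stitching; model configuration below is a placeholder.
import numpy as np
import torch
import segmentation_models_pytorch as smp

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], np.float32)
TILE = 512

def unetpp_tissue_mask(thumb_rgb: np.ndarray, model: torch.nn.Module,
                       device: str = "cpu") -> np.ndarray:
    h, w, _ = thumb_rgb.shape
    # Pad with white so the thumbnail splits evenly into 512x512 tiles.
    ph, pw = (-h) % TILE, (-w) % TILE
    padded = np.pad(thumb_rgb, ((0, ph), (0, pw), (0, 0)),
                    mode="constant", constant_values=255)
    labels = np.zeros(padded.shape[:2], np.uint8)
    model.eval().to(device)
    with torch.no_grad():
        for y in range(0, padded.shape[0], TILE):
            for x in range(0, padded.shape[1], TILE):
                tile = padded[y:y + TILE, x:x + TILE].astype(np.float32) / 255.0
                tile = (tile - IMAGENET_MEAN) / IMAGENET_STD       # ImageNet normalisation
                inp = torch.from_numpy(tile.transpose(2, 0, 1))[None].to(device)
                pred = model(inp).argmax(dim=1)[0].cpu().numpy()   # per-pixel class labels
                labels[y:y + TILE, x:x + TILE] = pred              # stitch at tile offset
    # Convention used in the text: class 0 = tissue -> 255 in the binary mask.
    return (labels[:h, :w] == 0).astype(np.uint8) * 255

# Placeholder configuration (NOT GrandQC's released weights or encoder):
# model = smp.UnetPlusPlus(encoder_name="resnet34", classes=2)
# model.load_state_dict(torch.load("tissue_detector.pth", map_location="cpu"))
```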
In summary, classical methods (Otsu, K-Means) prioritize speed and simplicity, Double-Pass balances efficiency and robustness, and GrandQC offers advanced accuracy—tailored choices for oncology-related needs. The pseudocode highlights these differences: e.g., Otsu’s simplicity (few lines) vs. Double-Pass’s multi-pass fusion for better handling of cancer tissue variability.
The Python implementations of these methods, including the novel Double-Pass algorithm, are available in our GitHub repository. Key libraries included scikit-image (v0.21.0), OpenCV (v4.9.0), scikit-learn (v1.5.2, for K-Means), and PyTorch (v2.5.1+cu124, for the GrandQC model). For further details on data and code availability, see the Data Availability Statement.

3.3. Evaluation Metrics

To assess the performance of the tissue detection methods, we employ three key metrics: mean Intersection over Union (mIoU), Dice score (also referred to as mean Dice Class Consistency, mDCC), and inference time. These were selected for their relevance to segmentation tasks in imbalanced datasets like WSIs, where tissue regions often occupy a small fraction of the image, and for evaluating practical efficiency in oncology workflows.
The mIoU is defined as mIoU = TP / (TP + FP + FN), where TP, FP, and FN represent true positives, false positives, and false negatives, respectively. It measures the overlap between predicted and ground-truth masks, penalizing both over-segmentation (extra tissue detected) and under-segmentation (missed tissue). We chose mIoU because it provides a balanced evaluation of accuracy, which is crucial for avoiding missed tumor regions (false negatives) in cancer analysis.
The Dice score is calculated as Dice = 2·TP / (2·TP + FP + FN), emphasizing overlap twice as much as mIoU, making it robust to the class imbalances common in WSIs (e.g., vast background areas). In this study, we use the mean Dice across classes (mDCC) for comprehensive assessment. Dice is selected for its sensitivity to precise boundary detection, which is important in heterogeneous cancer tissues.
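Both metrics can be computed per slide from binary masks with a few lines of NumPy, as in the following sketch.

```python
# Per-slide IoU and Dice from binary (0/1) prediction and ground-truth masks.
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return float(iou), float(dice)
```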
Inference time measures the average processing duration per slide (in seconds), evaluated on CPU for classical methods and GPU for GrandQC. This metric is essential to gauge scalability for high-volume TCGA cohorts, ensuring methods are feasible in clinical settings without excessive computational resources.

4. Results and Discussions

4.1. TCGA Dataset Results

For each slide, the predicted tissue mask P is compared to the ground-truth mask G after binarizing both to {0, 1} and, if necessary, resizing the prediction to match the ground-truth resolution. Two overlap metrics are computed: mean Intersection over Union (mIoU), which penalizes both over- and under-segmentation, and mean Dice Class Consistency (mDCC), which assigns twice the weight to overlap regions to better handle class imbalance.
All slides in the evaluation set were down-sampled to 10 μm/px resolution, and masks were generated accordingly. We report mIoU, mDCC, and average per-slide inference time (in seconds). The classical methods (Otsu, K-Means, and Double-Pass) were executed on a 12-core CPU, while GrandQC was run on both CPU and an RTX 4090 GPU. GrandQC's performance serves as an in-domain upper bound, given its training on similar TCGA data.
For each slide, we computed IoU and Dice against the ground-truth mask. Study-level mIoU/mDCC in Table 2 are simple arithmetic means over all 3322 slides. Inference time is the per-slide runtime averaged over the same 3322 slides.
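A minimal sketch of this evaluation loop, reusing the iou_and_dice helper defined in Section 3.3 and with placeholder inputs (any detector function plus paired thumbnails and ground-truth masks), is shown below.

```python
# Sketch of the per-slide evaluation: time each detector call, resize the
# prediction to the ground-truth resolution, and average IoU/Dice over slides.
import time
import cv2
import numpy as np

def evaluate(detector, thumbnails, gt_masks):
    ious, dices, runtimes = [], [], []
    for thumb, gt in zip(thumbnails, gt_masks):
        start = time.perf_counter()
        pred = detector(thumb)                          # 0/255 mask from any method
        runtimes.append(time.perf_counter() - start)
        if pred.shape != gt.shape:                      # match ground-truth resolution
            pred = cv2.resize(pred, (gt.shape[1], gt.shape[0]),
                              interpolation=cv2.INTER_NEAREST)
        iou, dice = iou_and_dice(pred > 0, gt > 0)      # binarise to {0, 1}
        ious.append(iou)
        dices.append(dice)
    return np.mean(ious), np.mean(dices), np.mean(runtimes)
```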
Aggregate results, weighted by cohort slide counts, demonstrate GrandQC’s leadership in mIoU and mDCC with moderate GPU inference time. Our proposed Double-Pass, an annotation-free hybrid method, follows closely in accuracy while achieving the lowest CPU inference times among the high-performing methods, significantly outperforming K-Means in efficiency and Otsu in accuracy.
Table 3 provides a breakdown by cancer type, revealing performance variations across cohorts. GrandQC consistently achieves the highest mIoU and mDCC in all cohorts, attributable to its supervised training on annotated TCGA slides, which enables it to learn cohort-specific patterns like staining variations and tissue morphologies. In contrast, the annotation-free classical methods, particularly our Double-Pass, rely solely on unsupervised image statistics and heuristics, yet manage to attain competitive scores. For example, in LIHC, Double-Pass trails GrandQC closely in both mIoU and mDCC, while K-Means slightly edges it out in accuracy but at a much higher computational cost. Otsu, while generally the fastest among single classical methods, exhibits lower accuracy, particularly in complex cohorts like BRCA and GBM, where under-segmentation is more pronounced.
These per-cohort results underscore GrandQC’s superior performance due to its in-domain training, which equips it to handle diverse cancer-specific challenges, such as necrotic areas in GBM or sparse tissues in BRCA. However, the classical methods offer compelling trade-offs: they require no annotations or GPU resources, making them accessible for preliminary processing in computational resource-constrained settings. Notably, our Double-Pass stands out by striking an optimal balance, delivering mIoU and mDCC values very close to GrandQC in several cohorts, such as CHOL and HNSC, at consistently sub-0.3-second CPU inference times—faster than Otsu in many cases while surpassing it in accuracy across most cohorts, and dramatically quicker than K-Means without sacrificing much precision. This remarkable speed stems from its innovative hybrid design, which fuses complementary passes (color-based artifact rejection and efficient downsampled clustering) to mitigate the limitations of single classical approaches, avoiding the full-pixel clustering overhead seen in K-Means.
The efficiency of Double-Pass is particularly useful in digital pathology, where processing thousands of gigapixel WSIs from large cohorts like TCGA can become a bottleneck. Rapid inference on standard CPUs enables scalable preprocessing in high-volume labs, reducing overall analysis time, minimizing computational costs, and allowing pathologists and AI models to focus on clinically relevant regions without delays. This is especially important in resource-limited settings or during real-time applications, where GPU access may be unavailable, making Double-Pass a practical, high-impact solution for advancing cancer research and diagnostics.
Overall, the results highlight cohort-dependent variability, likely driven by differences in tissue density, staining intensity, and artifact prevalence. For instance, hepatic cohorts (CHOL and LIHC) yield higher scores across all methods due to denser, well-stained tissues, whereas central nervous system (GBM) and breast (BRCA) cohorts present greater challenges with sparse or heterogeneous regions. While GrandQC sets the accuracy benchmark, Double-Pass’s annotation-free nature, exceptional speed, and near-comparable segmentation quality position it as a standout alternative for scalability, effectively reducing preprocessing bottlenecks and enhancing workflow efficiency.

4.2. Qualitative Results

Qualitative comparisons (Figure 2, Figure 3 and Figure 4) illustrate Double-Pass’s smoother masks in cancer slides, minimizing missed regions.
These visualizations reveal that Double-Pass produces more contiguous masks with fewer false negatives compared to Otsu, particularly in heterogeneous tissues like ACC, where it achieves near-GrandQC quality without annotations. In non-cancer examples like sputum, methods show robustness to different stains, but cancer-specific findings indicate Double-Pass’s strength in avoiding over-inclusion of artifacts, enhancing downstream AI reliability. Limitations include potential misses in very faint areas; however, the hybrid approach mitigates this better than pure classical methods.
Double-Pass delivers deep learning–comparable accuracy while remaining lightweight and annotation-free, a key in pathology diagnosis where expert time is limited.
Across cancer cohorts, performance varied, with higher scores in cohorts such as LIHC and lower scores in others such as GBM; this highlights Double-Pass's potential across diverse tissues. In BRCA, Double-Pass achieved competitive performance. Limitations include the thumbnail resolution missing micro-details; future work could integrate multi-scale approaches.

5. Conclusions

In this study, we introduced Double-Pass, a novel, annotation-free hybrid method for efficient tissue detection in whole-slide images (WSIs). Benchmarked alongside classical Otsu and K-Means methods, as well as the deep learning-based GrandQC UNet++, Double-Pass was evaluated on over 3000 annotated TCGA slides spanning nine distinct cancer cohorts. It achieved a strong performance, with a mean Intersection over Union (mIoU) of 0.826 and an average CPU inference time of 0.203 s per slide, offering a compelling balance between segmentation accuracy and computational speed, without the need for GPU acceleration or manual annotations. In contrast to deep learning models, which require labor-intensive annotations and significant computing resources, Double-Pass combines two complementary classical strategies—gray pixel filtering and color-based clustering—to enhance robustness and detect tissue regions even in heterogeneous or faintly stained cancer slides. This design enables annotation-free segmentation that is both efficient and scalable, making it well suited for high-throughput digital pathology workflows, particularly in resource-limited environments. Furthermore, we demonstrated the broader applicability of Double-Pass through qualitative results on Ziehl–Neelsen-stained slides used in tuberculosis diagnosis. Its ability to generalize across both oncological and non-oncological tissue types highlights its flexibility as a preprocessing tool. By fusing the speed and accessibility of classical methods with segmentation quality approaching that of deep learning, Double-Pass addresses the limitations of existing approaches and emerges as a practical, lightweight solution for large-scale WSI analysis, annotation-free training pipelines, and foundational model development in computational pathology.
To contextualize our findings, we note that operating at 10 μm/px trades granularity for throughput and can miss thin slivers or detached micro-islands. Another limitation is that the pipeline outputs a binary tissue–background mask and does not subclass artifacts (e.g., pen marks, blur), so residual false positives may persist. Evaluation relied only on TCGA data from GrandQC masks, so broader external validation across scanners and stains would further test generalizability. These constraints motivate further research.
Looking ahead, future directions could leverage the strengths of annotation-free methods like Double-Pass to enhance training foundational models in digital pathology. Based on its proven efficiency (0.203 s/slide on CPU) and accuracy (mIoU 0.826) approaching supervised benchmarks across TCGA cohorts, integrating Double-Pass into self-supervised pre-training pipelines—such as those for models like pathology-adapted DINOv2—could enable automatic masking of non-tissue regions in large unlabelled WSI datasets, covering more cancer-relevant features without annotations. This promises improved model generalization and downstream performance on tasks like tumor segmentation and prognosis prediction, as empirical studies could confirm through comparisons on diverse cancer types, advancing scalable computational pathology workflows.

Author Contributions

Conceptualization: B.C. and A.M.F.; Methodology: B.C. and F.M.; Software: B.C.; Validation: F.M.; Formal analysis: B.C.; Investigation: B.C. and F.M.; Resources: A.M.F.; Data curation: B.C. and F.M.; Writing—original draft: B.C.; Writing—review and editing: B.C., F.M. and A.M.F.; Visualization: B.C.; Supervision: M.T. and A.M.F.; Project administration: A.M.F.; Funding acquisition: A.M.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Romania's Recovery and Resilience Plan under grant agreement 760009, project "Creation, Operationalization and Development of the National Center of Competence in the field of Cancer", PNRR-IIIC9-2022—I5, and by the project "Romanian Hub for Artificial Intelligence-HRIA", Smart Growth, Digitization and Financial Instruments Program, 2021–2027, MySMIS no. 334906.

Data Availability Statement

Data from GrandQC and TCGA are publicly available. The Python implementations of the algorithms described in this paper, including the Double-Pass method, are available at https://github.com/aimas-upb/efficient-tissue-detection-wsi (accessed on 31 August 2025).

Acknowledgments

We thank the GrandQC team for their open dataset and TCGA for the cohorts.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

    The following abbreviations are used in this manuscript:
WSI: Whole-Slide Image
TCGA: The Cancer Genome Atlas
mIoU: Mean Intersection over Union
mDCC: Mean Dice Class Consistency
SCC: Squamous Cell Carcinoma
CNN: Convolutional Neural Network

Appendix A. Pseudocode for Detection Methods

Here, we provide pseudocode for the four methods in a narrative style suitable for medical doctors and practitioners. These are annotation-free (except GrandQC) and run at thumbnail level for efficiency in cancer WSIs.
Algorithm A1 Global Otsu Tissue Mask
  • Input: Thumbnail I ∈ R^(H×W×3)
  • Output: Mask M ∈ {0, 255}^(H×W)
1: function GlobalOtsu(I)
2:   G ← RGB2Gray(I)                ▹ convert to grayscale
3:   T ← OtsuThreshold(G)           ▹ compute optimal threshold
4:   M ← 255 × 1{G < T}             ▹ M[i,j] = 255 if G[i,j] < T, else 0
5:   return M
6: end function
Algorithm A2 Colour-Statistics K-Means Tissue Mask
  • Input: RGB thumbnail I ∈ R^(H×W×3)
  • Output: Mask M ∈ {0, 255}^(H×W)
1: function ColourKMeans(I)
2:   X ← Reshape(I, (H·W, 3)); scale to [0, 1]     ▹ reshape to (H·W)×3 and normalise
3:   μ ← Mean(X, axis = 1)                          ▹ mean brightness
4:   σ ← StdDev(X, axis = 1)                        ▹ RGB variability
5:   F ← Stack([μ, σ])                              ▹ 2D feature matrix; N = H·W rows
6:   C ← KMeans(F, K = 2)                           ▹ cluster into 2 groups
7:   k* ← argmin_k mean_{i∈k} μ_i                   ▹ choose darker cluster
8:   M ← 255 × 1{C = k*}                            ▹ H×W mask: M[i,j] = 255 if pixel (i,j) is in cluster k*, else 0
9:   M ← Close(M, 5); M ← Open(M, 5)                ▹ morphological smoothing
10:  return M
11: end function
Algorithm A3 Double-Pass Tissue Detector
  • Input: RGB thumbnail I ∈ R^(H×W×3)
  • Output: Mask M ∈ {0, 255}^(H×W)
1: function FilterGrays(I)
2:   J ← Sharpen(I)                                     ▹ enhance edges
3:   M ← ¬(|R − G| ≤ 15 ∧ |R − B| ≤ 15 ∧ |G − B| ≤ 15)  ▹ non-grey pixel detection
4:   M ← Dilate(M, 5); M ← Close(M, 5)                  ▹ fill gaps and smooth
5:   M ← RemoveSmall(M, 5000)                           ▹ remove small objects
6:   return M_FG ← M
7: end function
8: function DownKMeans(I)
9:   I′ ← Resize(I, 0.25)                               ▹ downsample to 25%
10:  X ← Reshape(I′, (−1, 3))                           ▹ reshape downsampled image
11:  C ← KMeans(X, K = 2)                               ▹ cluster into 2 groups
12:  k* ← argmin_k mean_{i∈k} intensity_i               ▹ choose darker cluster
13:  M′ ← 1{C = k*}                                     ▹ binary mask: M′[i,j] = 1 if pixel in cluster k*, else 0
14:  M′ ← Resize(M′, (H, W))                            ▹ upsample to original size
15:  M′ ← Close(M′, 5); M′ ← Open(M′, 5)                ▹ smooth mask
16:  return M_KM ← M′
17: end function
18: function Double-Pass(I)
19:  M_FG ← FilterGrays(I)                              ▹ first pass: non-grey detection
20:  M_KM ← DownKMeans(I)                               ▹ second pass: colour clustering
21:  M ← M_FG ∨ M_KM                                    ▹ combine masks
22:  M ← Close(M, 5); M ← Open(M, 5)                    ▹ smooth final mask
23:  return 255 × M                                     ▹ convert to 0/255 mask
24: end function
Algorithm A4 GrandQC Inference Pipeline
  • Input: RGB thumbnail I ∈ R^(H×W×3)
  • Output: Mask M ∈ {0, 255}^(H×W)
1: function GrandQC(I)
2:   J ← JPEG(I, Q = 80)                      ▹ compress thumbnail
3:   {(P_j, o_j)} ← Tile(J, 512)              ▹ split into 512 × 512 patches
4:   for all tiles (P_j, o_j) do
5:     N_j ← Norm_ImageNet(P_j)               ▹ normalise patch
6:     P̂_j ← UNet++(N_j, amp = 1)             ▹ predict tissue probabilities
7:     L_j ← argmax(P̂_j, axis = class)        ▹ select most likely class
8:     Paste(L̂, L_j, o_j)                     ▹ insert L_j into L̂ at offset o_j
9:   end for
10:  M ← 255 × 1{L̂ = 0}                       ▹ set M[i,j] = 255 if L̂[i,j] = 0, else 0
11:  return M
12: end function

References

  1. Weng, Z.; Seper, A.; Pryalukhin, A.; Mairinger, F.; Wickenhauser, C.; Bauer, M.; Glamann, L.; Blaker, H.; Lingscheidt, T.; Hulla, W.; et al. GrandQC: A comprehensive solution to quality-control problem in digital pathology. Nat. Commun. 2024, 15, 10685. [Google Scholar] [CrossRef] [PubMed]
  2. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  3. Song, Y.; Cisternino, F.; Mekke, J.M.; de Borst, G.J.; de Kleijn, D.P.V.; Pasterkamp, G.; Vink, A.; Glastonbury, C.A.; van der Laan, S.W.; Miller, C.L. An automatic entropy method to efficiently mask histology whole-slide images. Sci. Rep. 2023, 13, 4321. [Google Scholar] [CrossRef] [PubMed]
  4. Chen, P.; Yang, L. Tissueloc: Whole-slide digital pathology image tissue localization. J. Open Source Softw. 2019, 4, 1148. [Google Scholar] [CrossRef] [PubMed]
  5. Bándi, P.; Balkenhol, M.; van Ginneken, B.; van der Laak, J.; Litjens, G. Resolution-agnostic tissue segmentation in whole-slide histopathology images with convolutional neural networks. PeerJ 2019, 7, e8242. [Google Scholar] [CrossRef] [PubMed]
  6. Wang, D.; Khosla, A.; Gargeya, R.; Irshad, H.; Beck, A.H. Deep learning for identifying metastatic breast cancer. arXiv 2016, arXiv:1606.05718. [Google Scholar] [CrossRef]
  7. Pocock, J.; Graham, S.; Vu, Q.D.; Jahanifar, M.; Deshpande, S.; Hadjigeorghiou, G.; Shephard, A.; Bashir, R.M.S.; Bilal, M.; Lu, W.; et al. TIAToolbox as an end-to-end library for advanced tissue image analytics. Commun. Med. 2022, 2, 120. [Google Scholar] [CrossRef] [PubMed]
  8. Filiot, A.; Jacob, P.; Kain, A.M.; Saillard, C. Phikon-v2, a large and public feature extractor for biomarker prediction. arXiv 2024, arXiv:2409.09173. [Google Scholar] [CrossRef]
  9. Vorontsov, E.; Bozkurt, A.; Casson, A.; Shaikovski, G.; Zelechowski, M.; Liu, S.; Severson, K.; Zimmermann, E.; Hall, J.; Tenenholtz, N.; et al. Virchow: A million-slide digital pathology foundation model. arXiv 2024, arXiv:2309.07778. [Google Scholar] [CrossRef]
  10. Ceachi, B.; Cioplea, M.; Mustatea, P.; Dcruz, J.G.; Zurac, S.; Cauni, V.; Popp, C.; Mogodici, C.; Sticlaru, L.; Cioroianu, A.; et al. A new method of artificial-intelligence-based automatic identification of lymphovascular invasion in urothelial carcinomas. Diagnostics 2024, 14, 432. [Google Scholar] [CrossRef] [PubMed]
  11. Zurac, S.; Mogodici, C.; Poncu, T.; Trăscău, M.; Popp, C.; Nichita, L.; Cioplea, M.; Ceachi, B.; Sticlaru, L.; Cioroianu, A.; et al. A new artificial-intelligence-based method for identifying Mycobacterium tuberculosis in Ziehl–Neelsen stain on tissue. Diagnostics 2022, 12, 1484. [Google Scholar] [CrossRef] [PubMed]
  12. Bándi, P.; van de Loo, R.; Intezar, M.; Geijs, D.; Ciompi, F.; van Ginneken, B.; van der Laak, J.; Litjens, G. Comparison of different methods for tissue segmentation in histopathological whole-slide images. arXiv 2017, arXiv:1703.05990. [Google Scholar] [CrossRef]
  13. Riasatian, A.; Rasoolijaberi, M.; Babaei, M.; Tizhoosh, H.R. A comparative study of U-Net topologies for background removal in histopathology images. arXiv 2020, arXiv:2006.06531. [Google Scholar] [CrossRef]
  14. Marczyk, M.; Wrobel, A.; Merta, J.; Polanska, J. Post-processing of thresholding or deep learning methods for enhanced tissue segmentation of whole-slide histopathological images. In Proceedings of the 12th International Conference on Bioimaging, Porto, Portugal, 20–22 February 2025; pp. 229–238. [Google Scholar] [CrossRef]
  15. Nechaev, D.; Pchelnikov, A.; Ivanova, E. HISTAI: An open-source, large-scale whole-slide image dataset for computational pathology. arXiv 2025, arXiv:2505.12120. [Google Scholar]
  16. Patil, A.; Jain, G.; Diwakar, H.; Sawant, J.; Bameta, T.; Rane, S.; Sethi, A. Semantic segmentation based quality control of histopathology whole-slide images. arXiv 2025, arXiv:2410.03289. [Google Scholar]
  17. Nicke, T.; Schacherer, D.; Schäfer, J.R.; Artysh, N.; Prasse, A.; Homeyer, A.; Schenk, A.; Höfener, H.; Lotz, J. Tissue Concepts v2: A supervised foundation model for whole-slide images. arXiv 2025, arXiv:2507.05742. [Google Scholar]
Figure 1. Overview of the tissue detection methods. The flowchart illustrates the key steps for each approach, from input thumbnail to output mask, highlighting preprocessing, segmentation, and post-processing operations. Branches correspond to K-Means (first row), Double-Pass (second row), Otsu (third row), and GrandQC (fourth row), with examples of intermediate outputs for clarity.
Figure 2. Visual comparison on an adrenocortical carcinoma (TCGA-ACC) thumbnail. Every panel shows the detector's mask overlaid in transparent green; pink areas denote disagreements with the manual annotation. Otsu and K-Means visibly returned more false-negative results than Double-Pass and GrandQC; the latter two produced similar masks. Dice scores were as follows: Otsu 0.84, K-Means 0.90, Double-Pass 0.96, GrandQC 0.97.
Figure 3. Detectors applied on Ziehl–Neelsen sputum smear (for tasks like detecting acid-fast bacilli). Starting from the original ZN image (blue counterstain background), we generated green overlays representing detection outputs from four approaches: Otsu global thresholding, K-Means color clustering, GrandQC quality-guided selection, and a Double-Pass refinement that re-screens uncertain margins. Although the first three methods mark many relevant regions, each omits patches that can contain acid-fast bacilli. The Double-Pass procedure expands coverage to include all such candidate areas, yet still suppresses large expanses of non-tissue signal, improving downstream sensitivity without a large specificity penalty.
Figure 4. In (a), the breast cancer (BRCA) sample consists mostly of a sparse tissue section with clusters of tumor cells in fibrous stroma. Otsu (mDCC 0.5253, mIoU 0.3562) and K-Means (mDCC 0.5293, mIoU 0.3599) overlook isolated nests, risking delayed detection of invasive components and inaccurate staging. Double-Pass (mDCC 0.6876, mIoU 0.5239) and GrandQC (mDCC 0.6951, mIoU 0.5326) provide more comprehensive coverage and avoid inaccurate staging. In (b), the glioblastoma (GBM) sample contains more pale, eosinophilic, irregular necrotic areas. Otsu's thresholding (mDCC 0.6744, mIoU 0.5087) and K-Means (mDCC 0.8264, mIoU 0.7042) miss substantial necrotic regions, potentially leading to underestimation of tumor grade and misprognosis. Double-Pass (mDCC 0.9207, mIoU 0.8531) and GrandQC (mDCC 0.9292, mIoU 0.8677) better capture these features.
Table 1. Cancer types covered in the study.

TCGA Code | Disease Type | Cancer Primary Site
ACC | Adenomas and Adenocarcinomas | Adrenal Gland
BRCA | 9 Cancer Types | Breast
CESC | SCC and Adenocarcinoma | Cervix
CHOL | Cholangiocarcinoma | Biliary/Liver
DLBC | Diffuse Large B-cell Lymphoma | Lymph Nodes
ESCA | SCC and Adenocarcinoma | Esophagus
GBM | Glioblastoma Multiforme | Central Nervous System
HNSC | Squamous Cell Carcinoma | Head and Neck
LIHC | Adenomas and Adenocarcinomas | Liver
Table 2. Summary of characteristics and performance metrics for all benchmarked tissue detection methods.

Method | Method Type | Annotations Required? | mIoU | mDCC | CPU Time (s) | GPU Time (s)
Otsu (TIAToolbox) | Classical thresholding | None | 0.779 | 0.861 | 0.294 | –
K-Means (MSTD) | Unsupervised clustering | None | 0.840 | 0.899 | 2.487 | –
Double-Pass (ours) | Hybrid (colour/intensity) | None | 0.826 | 0.895 | 0.203 | –
GrandQC (CPU) | Supervised deep model | Yes | 0.871 | 0.924 | 2.431 | –
GrandQC (GPU) | Supervised deep model | Yes | 0.871 | 0.924 | – | 0.580
Table 3. Tissue detection performance per cancer type.

TCGA Code | Cancer Primary Site | Images | Method | mIoU | mDCC | Time (s)
TCGA-ACC | Adrenal Gland | 206 | Otsu | 0.857 | 0.921 | 0.400
 | | | K-Means | 0.912 | 0.953 | 9.391
 | | | Double-Pass (ours) | 0.891 | 0.941 | 0.251
 | | | GrandQC (GPU) | 0.921 | 0.958 | 0.717
 | | | GrandQC (CPU) | 0.921 | 0.958 | 2.851
TCGA-BRCA | Breast | 1100 | Otsu | 0.668 | 0.786 | 0.273
 | | | K-Means | 0.738 | 0.834 | 1.368
 | | | Double-Pass (ours) | 0.818 | 0.894 | 0.215
 | | | GrandQC (GPU) | 0.844 | 0.910 | 0.585
 | | | GrandQC (CPU) | 0.844 | 0.910 | 2.482
TCGA-CESC | Cervix | 279 | Otsu | 0.827 | 0.895 | 0.217
 | | | K-Means | 0.874 | 0.924 | 1.615
 | | | Double-Pass (ours) | 0.779 | 0.860 | 0.184
 | | | GrandQC (GPU) | 0.855 | 0.912 | 0.552
 | | | GrandQC (CPU) | 0.855 | 0.912 | 2.161
TCGA-CHOL | Biliary/Liver | 39 | Otsu | 0.938 | 0.967 | 0.330
 | | | K-Means | 0.977 | 0.988 | 10.162
 | | | Double-Pass (ours) | 0.950 | 0.974 | 0.258
 | | | GrandQC (GPU) | 0.970 | 0.985 | 0.749
 | | | GrandQC (CPU) | 0.970 | 0.985 | 3.053
TCGA-DLBC | Lymph Nodes | 42 | Otsu | 0.812 | 0.874 | 0.174
 | | | K-Means | 0.849 | 0.897 | 5.902
 | | | Double-Pass (ours) | 0.798 | 0.865 | 0.171
 | | | GrandQC (GPU) | 0.852 | 0.899 | 0.532
 | | | GrandQC (CPU) | 0.852 | 0.899 | 2.090
TCGA-ESCA | Esophagus | 155 | Otsu | 0.826 | 0.896 | 0.155
 | | | K-Means | 0.894 | 0.936 | 8.301
 | | | Double-Pass (ours) | 0.846 | 0.909 | 0.197
 | | | GrandQC (GPU) | 0.900 | 0.942 | 0.552
 | | | GrandQC (CPU) | 0.900 | 0.942 | 2.259
TCGA-GBM | Central Nervous System | 860 | Otsu | 0.796 | 0.871 | 0.332
 | | | K-Means | 0.859 | 0.909 | 1.294
 | | | Double-Pass (ours) | 0.784 | 0.863 | 0.188
 | | | GrandQC (GPU) | 0.848 | 0.905 | 0.549
 | | | GrandQC (CPU) | 0.848 | 0.905 | 2.363
TCGA-HNSC | Head and Neck | 274 | Otsu | 0.821 | 0.891 | 0.268
 | | | K-Means | 0.894 | 0.935 | 1.356
 | | | Double-Pass (ours) | 0.855 | 0.914 | 0.199
 | | | GrandQC (GPU) | 0.904 | 0.945 | 0.589
 | | | GrandQC (CPU) | 0.904 | 0.945 | 2.568
TCGA-LIHC | Liver | 367 | Otsu | 0.914 | 0.951 | 0.350
 | | | K-Means | 0.953 | 0.972 | 2.610
 | | | Double-Pass (ours) | 0.907 | 0.949 | 0.194
 | | | GrandQC (GPU) | 0.946 | 0.971 | 0.578
 | | | GrandQC (CPU) | 0.946 | 0.971 | 2.352