Cell Detection in Biomedical Immunohistochemical Images Using Unsupervised Segmentation and Deep Learning
Abstract
1. Introduction
- We provide the first comprehensive quantitative evaluation of the OralImmunoAnalyser software methods (OIA-RDA and OIA-EDA) using precision, recall, and F1-score metrics against ground-truth annotations.
- We conduct a systematic comparison of classical unsupervised segmentation methods (including graph cuts, active contours, and clustering approaches) with modern deep learning techniques (U-Net and YOLO) for cell detection in oral IHC images.
- We use a U-Net-based approach with heatmap regression and Gaussian-encoded centroids specifically adapted for robust nuclei detection in challenging IHC conditions.
- We carry out computational resource analysis of all approaches, informing practical deployment decisions and providing evidence-based recommendations for method selection in resource-constrained research settings.
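To make the heatmap-regression idea in the third contribution concrete, the following is a minimal NumPy sketch of Gaussian-encoded centroid targets: each annotated nucleus centroid is rendered as a 2-D Gaussian with peak value 1, and overlapping Gaussians are combined with a maximum so every nucleus keeps its own peak. The function name, image size, and sigma value are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def centroid_heatmap(shape, centroids, sigma=4.0):
    """Render a heatmap with a 2-D Gaussian (peak = 1) at each centroid.

    shape: (H, W); centroids: iterable of (row, col) positions;
    sigma: Gaussian spread in pixels (illustrative value).
    """
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    heat = np.zeros(shape, dtype=np.float32)
    for r, c in centroids:
        g = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * sigma ** 2))
        heat = np.maximum(heat, g)  # overlapping nuclei keep separate peaks
    return heat

# Two hypothetical nuclei centroids in a 64 x 64 patch.
heat = centroid_heatmap((64, 64), [(20, 20), (40, 50)], sigma=4.0)
```

A network trained to regress such targets can then recover detections by locating local maxima in its predicted heatmap.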
2. Data
3. Methods
3.1. Unsupervised Image Segmentation
- OIA-RDA: The OralImmunoAnalyser-Region Detection Algorithm is the region-based approach included in the OralImmunoAnalyser (OIA) software to detect nuclei. It combines the K-means clustering algorithm, thresholding by the multi-level Otsu method, and other image pre-processing and post-processing techniques (details in [7]). OIA-RDA has three operation modes, marked as HIGH, LOW, and WS in the OIA graphical interface. The only parameter needed by the algorithm is the minimum diameter of the cells to detect, which was set by the expert pathologists to 20 pixels for the OIDB dataset.
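The full OIA-RDA pipeline is detailed in [7]; its multi-level Otsu step can be sketched with scikit-image as follows, on synthetic data. The choice of three classes and the intensity populations are illustrative assumptions, not the exact OIA-RDA configuration.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

# Synthetic grayscale patch with three intensity populations, standing in
# for strongly stained nuclei, weakly stained nuclei, and background.
rng = np.random.default_rng(0)
patch = np.concatenate([
    rng.normal(40, 5, 2000),   # dark (strongly stained)
    rng.normal(120, 5, 2000),  # intermediate (weakly stained)
    rng.normal(220, 5, 2000),  # bright background
]).clip(0, 255).reshape(60, 100).astype(np.uint8)

# Multi-level Otsu returns (classes - 1) thresholds; three classes is an
# illustrative choice for this sketch.
thresholds = threshold_multiotsu(patch, classes=3)
regions = np.digitize(patch, bins=thresholds)  # per-pixel labels 0, 1, 2
```

The resulting label map can then be cleaned with morphological post-processing and filtered by the expert-provided minimum cell diameter.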
- OIA-EDA: The Multi-Scale Canny Filter (MSCF) is an edge-based method derived from the Canny filter [30]. It applies the Canny filter at different scales, i.e., using various values of the Gaussian spread (σ) and different pairs of hysteresis thresholds, followed by an edge-linking step to produce closed contours. For inclusion in OIA, MSCF was configured with one scale and one pair of thresholds, with rates 0.3 and 0.7 for the low and high thresholds, respectively; this configuration is denominated OIA-EDA in this paper. The minimum diameter of the cells to detect is also set to 20 pixels for the OIDB dataset.
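A single-scale Canny configuration of this kind can be sketched with scikit-image on a synthetic blob image. Here the 0.3/0.7 rates are applied as quantiles of the gradient magnitude via `use_quantiles=True`, which is this sketch's approximation of the rate-based thresholds; the sigma value and test image are also assumptions.

```python
import numpy as np
from skimage import draw, feature

# Synthetic image: a bright disc (nucleus-like blob) on a noisy dark background.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.01, (100, 100))
rr, cc = draw.disk((50, 50), 20)
img[rr, cc] += 1.0

# One Canny scale and one hysteresis pair, with low/high rates 0.3 and 0.7
# interpreted as gradient-magnitude quantiles (an approximation for this
# sketch); sigma=2 is an illustrative scale.
edges = feature.canny(img, sigma=2.0, low_threshold=0.3, high_threshold=0.7,
                      use_quantiles=True)
```

In MSCF proper, the detected edges are then linked into closed contours before cells are measured.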
- FH: The Felzenszwalb–Huttenlocher algorithm [27] is selected among the graph cut models due to its efficiency and the availability of an implementation in the segmentation module of the Python scikit-image library (https://scikit-image.org/). We use the felzenszwalb function, configured with default values (scale = 1.0 and sigma = 0.8), except for the minimum component size, which is set to 400 pixels. Since the minimum cell diameter provided by the experts is 20 pixels for the OIDB dataset, we take the minimum component size to be a square with that diameter as its side, i.e., 20 × 20 = 400 pixels.
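The FH configuration described above translates directly into a scikit-image call; the synthetic two-region image below is only an illustration, not data from the OIDB experiments.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

# Synthetic RGB image: two uniform halves standing in for tissue regions.
img = np.zeros((80, 80, 3), dtype=float)
img[:, 40:] = 0.8  # right half brighter

# Configuration from the text: defaults scale=1.0 and sigma=0.8, with the
# minimum component size set to 400 pixels (a 20 x 20 square, i.e., the
# experts' minimum cell diameter squared).
labels = felzenszwalb(img, scale=1.0, sigma=0.8, min_size=400)
```

Every component in the output label map is guaranteed to contain at least `min_size` pixels, which is how the minimum cell size constraint is enforced.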
- ChV: Among the active contour models, we select the Chan–Vese algorithm [23], also implemented in the Python scikit-image library, which does not require initial seeds. Specifically, we use the morphological_chan_vese function to apply Morphological Active Contours without Edges, called MorphACWE, and the morphological_geodesic_active_contour function to apply Morphological Geodesic Active Contours, called MorphGAC. Both methods are applied with default configurations (using 500 iterations), and the original images were pre-processed with a Gaussian filter to reduce noise.
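The MorphACWE branch of this setup can be sketched as follows. The synthetic blob image, the Gaussian sigma, and the reduced iteration count (100 instead of the paper's 500, to keep the sketch fast) are all illustrative assumptions; the iteration count is passed positionally because its keyword name has changed across scikit-image versions.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import morphological_chan_vese

# Synthetic grayscale image: a bright blob on a dark background.
img = np.zeros((80, 80))
yy, xx = np.mgrid[0:80, 0:80]
img[(yy - 40) ** 2 + (xx - 40) ** 2 < 15 ** 2] = 1.0

# Pre-smooth with a Gaussian filter to reduce noise, as in the paper
# (illustrative sigma), then run MorphACWE with a checkerboard
# initial level set (no seeds required).
smooth = gaussian(img, sigma=2.0)
mask = morphological_chan_vese(smooth, 100, init_level_set='checkerboard')
```

The returned binary mask separates foreground from background; connected components of the foreground are then treated as candidate nuclei.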
- Clustering: The clustering algorithms used are (1) the popular K-means algorithm, implemented by the kmeans function in the OpenCV library (https://opencv.org/), applied to the original image in the RGB and Lab color spaces, designated as kmeans-RGB and kmeans-LAB, respectively; and (2) the slic function of the segmentation API of the scikit-image library, which implements the simple linear iterative clustering (SLIC) superpixels method [31], called SLIC. In the kmeans-RGB approach, the kmeans function is applied to patterns containing the RGB signature of each pixel in the image, using four clusters (one for each staining level of nuclei and another for the sample background). Initial cluster centers are chosen randomly, and we use the default configuration for the remaining parameters (the algorithm stops after 10 iterations or when an accuracy of epsilon = 1.0 is reached). For the kmeans-LAB approach, the signature contains the three channel values of the Lab color space.
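To make the kmeans-RGB configuration concrete without an OpenCV dependency, the following is a plain NumPy stand-in for cv2.kmeans with the settings described above: four clusters on per-pixel RGB signatures, random initial centers, and termination after 10 iterations or when the centers move by less than epsilon = 1.0. The function name, random image, and seed are assumptions of this sketch.

```python
import numpy as np

def kmeans_rgb(image, k=4, max_iter=10, epsilon=1.0, seed=0):
    """Plain K-means on per-pixel RGB signatures (a NumPy stand-in for
    cv2.kmeans with the configuration described in the text)."""
    h, w, _ = image.shape
    data = image.reshape(-1, 3).astype(np.float64)
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(max_iter):  # stop after 10 iterations ...
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        shift = np.linalg.norm(new_centers - centers, axis=1).max()
        centers = new_centers
        if shift < epsilon:  # ... or when centers move less than epsilon
            break
    return labels.reshape(h, w), centers

# Four clusters: three nuclei staining levels plus the sample background.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32, 3)).astype(np.uint8)
labels, centers = kmeans_rgb(img, k=4)
```

The kmeans-LAB variant is identical except that the per-pixel signature holds Lab channel values instead of RGB.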
3.2. Supervised Image Segmentation Using Deep Learning
3.2.1. U-Net
3.2.2. YOLO
4. Results
4.1. Metrics of Detection Performance
4.2. Experimental Setup
4.3. Cell Detection Using Unsupervised Segmentation
4.4. Cell Detection Using Deep Learning
4.5. Computation Time
4.6. Discussion
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
GPU | Graphics Processing Unit
RAM | Random Access Memory
CPU | Central Processing Unit
GB | Gigabyte
DL | Deep Learning
References
- Alam, M.R.; Seo, K.J.; Yim, K.; Liang, P.; Yeh, J.; Chang, C.; Chong, Y. Comparative analysis of Ki-67 labeling index morphometry using deep learning, conventional image analysis, and manual counting. Transl. Oncol. 2025, 51, 102159. [Google Scholar] [CrossRef]
- Azevedo Tosta, T.A.; Neves, L.A.; do Nascimento, M.Z. Segmentation methods of H&E-stained histological images of lymphoma: A review. Inform. Med. Unlocked 2017, 9, 35–43. [Google Scholar] [CrossRef]
- Sand, F.L.; Lindquist, S.; Aalborg, G.L.; Kjaer, S.K. The prognostic value of p53 and Ki-67 expression status in penile cancer: A systematic review and meta-analysis. Pathology 2025, 57, 276–284. [Google Scholar] [CrossRef] [PubMed]
- Rodríguez-Candela Mateos, M.; Azmat, M.; Santiago-Freijanes, P.; Galán-Moya, E.M.; Fernández-Delgado, M.; Aponte, R.B.; Mosquera, J.; Acea, B.; Cernadas, E.; Mayán, M.D. Software BreastAnalyser for the semi-automatic analysis of breast cancer immunohistochemical images. Sci. Rep. 2024, 14, 2995. [Google Scholar] [CrossRef]
- Torres, L.A.F.; Celso, D.S.G.; Defante, M.L.; Alzogaray, V.; Bearse, M.; de Melo Lopes, A.C.F.M. Ki-67 as a marker for differentiating borderline and benign phyllodes tumors of the breast: A meta-analysis and systematic review. Ann. Diagn. Pathol. 2025, 75, 152429. [Google Scholar] [CrossRef]
- Dias, E.P.; Oliveira, N.S.C.; Serra-Campos, A.O.; da Silva, A.K.F.; da Silva, L.E.; Cunha, K.S. A novel evaluation method for Ki-67 immunostaining in paraffin-embedded tissues. Virchows Arch. 2021, 479, 121–131. [Google Scholar] [CrossRef]
- Al-Tarawneh, Z.A.; Pena-Cristóbal, M.; Cernadas, E.; Suarez-Peñaranda, J.M.; Fernández-Delgado, M.; Mbaidin, A.; Gallas-Torreira, M.; Gándara-Vila, P. OralImmunoAnalyser: A software tool for immunohistochemical assessment of oral leukoplakia using image segmentation and classification models. Front. Artif. Intell. 2024, 7, 1324410. [Google Scholar] [CrossRef]
- Đokić, S.; Gazić, B.; Grčar Kuzmanov, B.; Blazina, J.; Miceska, S.; Čugura, T.; Grašič Kuhar, C.; Jeruc, J. Clinical and Analytical Validation of Two Methods for Ki-67 Scoring in Formalin Fixed and Paraffin Embedded Tissue Sections of Early Breast Cancer. Cancers 2024, 16, 1405. [Google Scholar] [CrossRef]
- Gupta, A.; Duggal, R.; Gehlot, S.; Gupta, R.; Mangal, A.; Kumar, L.; Thakkar, N.; Satpathy, D. GCTI-SN: Geometry-inspired chemical and tissue invariant stain normalization of microscopic medical images. Med. Image Anal. 2020, 65, 101788. [Google Scholar] [CrossRef]
- Gonzalez, R.; Woods, R. Digital Image Processing Global Edition; Pearson Deutschland: Munich, Germany, 2017; p. 1024. [Google Scholar]
- Brar, K.K.; Goyal, B.; Dogra, A.; Mustafa, M.A.; Majumdar, R.; Alkhayyat, A.; Kukreja, V. Image segmentation review: Theoretical background and recent advances. Inf. Fusion 2025, 114, 102608. [Google Scholar] [CrossRef]
- Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 3523–3542. [Google Scholar] [CrossRef] [PubMed]
- Xie, L.; Qi, J.; Pan, L.; Wali, S. Integrating deep convolutional neural networks with marker-controlled watershed for overlapping nuclei segmentation in histopathology images. Neurocomputing 2020, 376, 166–179. [Google Scholar] [CrossRef]
- Pina, O.; Vilaplana, V. Unsupervised Domain Adaptation for Multi-Stain Cell Detection in Breast Cancer with Transformers. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 17–18 June 2024; pp. 5066–5074. [Google Scholar] [CrossRef]
- Jardim, S.; António, J.; Mora, C. Image thresholding approaches for medical image segmentation—Short literature review. Procedia Comput. Sci. 2023, 219, 1485–1492. [Google Scholar] [CrossRef]
- Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
- Fan, J.; Zeng, G.; Body, M.; Hacid, M.S. Seeded region growing: An extensive and comparative study. Pattern Recognit. Lett. 2005, 26, 1139–1156. [Google Scholar] [CrossRef]
- Bayá, A.E.; Larese, M.G.; Namías, R. Clustering stability for automated color image segmentation. Expert Syst. Appl. 2017, 86, 258–273. [Google Scholar] [CrossRef]
- Shao, J.; Chen, S.; Zhou, J.; Zhu, H.; Wang, Z.; Brown, M. Application of U-Net and Optimized Clustering in Medical Image Segmentation: A Review. CMES—Comput. Model. Eng. Sci. 2023, 136, 2173–2219. [Google Scholar] [CrossRef]
- Comaniciu, D.; Meer, P. Mean Shift: A Robust Approach Toward Feature Space Analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619. [Google Scholar] [CrossRef]
- Canny, J.F. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698. [Google Scholar] [CrossRef]
- Lin, Y.C.; Tsai, Y.P.; Hung, Y.P.; Shih, Z.C. Comparison between immersion-based and toboggan-based watershed image segmentation. IEEE Trans. Image Process. 2006, 15, 632–640. [Google Scholar] [CrossRef]
- Chan, T.; Vese, L. Active Contours Without Edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef]
- Cremers, D.; Rousson, M.; Deriche, R. A Review of Statistical Approaches to Level Set Segmentation: Integrating Color, Texture, Motion and Shape. Int. J. Comput. Vis. 2007, 72, 195–215. [Google Scholar] [CrossRef]
- Wang, S.; Siskind, J. Image segmentation with ratio cut. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 675–690. [Google Scholar] [CrossRef]
- Shi, J.; Malik, J. Normalized Cuts and Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 888–905. [Google Scholar] [CrossRef]
- Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient Graph-Based Image Segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
- Ghane, N.; Vard, A.; Talebi, A.; Nematollahy, P. Segmentation of White Blood Cells from Microscopic Images Using a Novel Combination of K-Means Clustering and Modified Watershed Algorithm. J. Med. Signals Sens. 2017, 7, 92–101. [Google Scholar]
- Gamarra, M.; Zurek, E.; Escalante, H.J.; Hurtado, L.; San-Juan-Vergara, H. Split and merge watershed: A two-step method for cell segmentation in fluorescence microscopy images. Biomed. Signal Process. Control 2019, 53, 101575. [Google Scholar] [CrossRef]
- Mbaidin, A.; Cernadas, E.; Al-Tarawneh, Z.A.; Fernández-Delgado, M.; Domínguez-Petit, R.; Rábade-Uberos, S.; Hassanat, A. MSCF: Multi-Scale Canny Filter to Recognize Cells in Microscopic Images. Sustainability 2023, 15, 13693. [Google Scholar] [CrossRef]
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef]
- Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object Detection in 20 Years: A Survey. Proc. IEEE 2023, 111, 257–276. [Google Scholar] [CrossRef]
- Chaudhari, S.; Malkan, N.; Momin, A.; Bonde, M. YOLO Real Time Object Detection. Int. J. Comput. Trends Technol. 2020, 68, 70–76. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; Volume 9351, pp. 234–241. [Google Scholar] [CrossRef]
- Xu, Q.; Ma, Z.; He, N.; Duan, W. DCSAU-Net: A deeper and more compact split-attention U-Net for medical image segmentation. Comput. Biol. Med. 2023, 154, 106626. [Google Scholar] [CrossRef] [PubMed]
- Monaco, S.; Bussola, N.; Buttò, S.; Sona, D.; Giobergia, F.; Jurman, G.; Xinaris, C.; Apiletti, D. AI models for automated segmentation of engineered polycystic kidney tubules. Sci. Rep. 2024, 14, 2847. [Google Scholar] [CrossRef] [PubMed]
- Wightman, R. PyTorch Image Models. 2019. Available online: https://github.com/rwightman/pytorch-image-models (accessed on 10 September 2025). [Google Scholar]
- Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
- Huang, L. Normalization Techniques in Deep Learning; Springer: Cham, Switzerland, 2022. [Google Scholar]
| Method | Config. | Recall | Precision | F1-Score | p-Value | | | |
|---|---|---|---|---|---|---|---|---|
| OIA-RDA | HIGH | 8.4 ± 5.3 | 87.6 ± 26.5 | 15.0 ± 8.8 | <0.05 | 49.7 | 3.4 | 0.6 |
| | LOW | 16.7 ± 12.6 | 90.3 ± 13.4 | 26.3 ± 16.2 | <0.05 | 59.0 | 17.7 | 7.6 |
| | WS | 24.4 ± 14.1 | 87.6 ± 10.4 | 35.9 ± 15.2 | <0.05 | 63.9 | 24.0 | 15.8 |
| OIA-EDA | | 44.1 ± 11.2 | 46.7 ± 18.9 | 41.0 ± 8.2 | <0.05 | 46.3 | 36.7 | 45.3 |
| FH | | 34.6 ± 11.4 | 40.9 ± 13.1 | 34.5 ± 3.4 | <0.05 | 39.8 | 27.9 | 34.6 |
| ChV | MorphACWE | 12.6 ± 7.6 | 89.4 ± 9.1 | 21.4 ± 10.8 | <0.05 | 58.2 | 17.2 | 2.5 |
| | MorphGAC | 13.0 ± 8.2 | 88.3 ± 9.4 | 21.6 ± 11.5 | <0.05 | 57.7 | 17.7 | 3.0 |
| Clustering | kmeans-RGB | 12.2 ± 16.5 | 40.8 ± 39.5 | 17.0 ± 20.0 | <0.05 | 22.4 | 10.9 | 10.2 |
| | kmeans-LAB | 12.1 ± 16.5 | 40.1 ± 39.5 | 16.7 ± 19.6 | <0.05 | 22.8 | 10.3 | 10.3 |
| | SLIC | 0.5 ± 0.6 | 9.7 ± 12.3 | 1.0 ± 1.2 | <0.05 | 0.2 | 0.2 | 0.3 |
| OIA | RDA-EDA | 49.2 ± 18.0 | 50.9 ± 11.7 | 46.4 ± 7.2 | <0.05 | 69.8 | 42.4 | 48.7 |
| DL | U-Net | 74.5 ± 8.6 | 76.8 ± 7.1 | 75.3 ± 6.2 | — | 73.0 | 64.4 | 77.0 |
| | YOLO | 68.9 ± 8.7 | 80.9 ± 6.7 | 74.0 ± 5.9 | 0.358 | 73.1 | 63.1 | 69.2 |
| Method | Config. | Time (s) | Standard Deviation |
|---|---|---|---|
| OIA-RDA | HIGH | 0.91 | 0.29 |
| | LOW | 0.90 | 0.28 |
| | WS | 0.91 | 0.29 |
| OIA-EDA | | 1.22 | 0.51 |
| FH | | 118.74 | 52.10 |
| ChV | MorphACWE | 497.25 | 248.97 |
| | MorphGAC | 553.66 | 0.62 |
| Clustering | kmeans-RGB | 18.98 | 0.52 |
| | kmeans-LAB | 18.73 | 0.71 |
| | SLIC | 60.32 | 60.14 |
| U-Net | using GPU | 0.92 | 0.20 |
| | using CPU | 2.90 | 1.31 |
| YOLO | using GPU | 0.87 | 0.23 |
| | using CPU | 2.02 | 0.95 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Al-Tarawneh, Z.A.; Tarawneh, A.S.; Mbaidin, A.; Fernández-Delgado, M.; Gándara-Vila, P.; Hassanat, A.; Cernadas, E. Cell Detection in Biomedical Immunohistochemical Images Using Unsupervised Segmentation and Deep Learning. Electronics 2025, 14, 3705. https://doi.org/10.3390/electronics14183705