CellRegNet: Point Annotation-Based Cell Detection in Histopathological Images via Density Map Regression
Abstract
1. Introduction
2. Related Works
2.1. Regression-Based Cell Detection
2.2. Network Architectures for Density Map Regression
3. Method
3.1. Model Architecture Overview
3.2. Hybrid CNN/Transformer Encoder
3.3. Feature Bridge
3.4. Global Context-Guided Feature Selection
3.5. Convolutional Decoder
3.6. Training Objective
3.6.1. Ground Truth Generation
3.6.2. Loss Function
3.7. Inference
4. Experiments
4.1. Datasets
4.1.1. BCData Dataset
4.1.2. EndoNuke Dataset
4.1.3. MBM Dataset
4.2. Evaluation Metrics
4.2.1. Matching Predictions with Ground Truth
- Each predicted cell center can be matched to at most one ground truth (GT) point.
- Each GT point can be matched to at most one predicted cell center.
- Predicted cell centers that do not match any GT points are considered false positives.
- Remaining unmatched GT points are considered false negatives.
Algorithm 1: Matching predicted cell centers to ground truth points.
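The rules above define a one-to-one assignment between predicted centers and GT points. As an illustrative sketch only (the exact procedure of Algorithm 1 is not reproduced here; the distance threshold and function names below are assumptions), a common greedy realization pairs predictions and GT points in order of increasing distance and derives precision, recall, and F1 from the resulting counts:

```python
import numpy as np
from scipy.spatial.distance import cdist

def greedy_match(pred_pts, gt_pts, max_dist=16.0):
    """Greedily pair predicted centers with GT points by increasing distance.

    Enforces the one-to-one rules above: each prediction and each GT point
    is used at most once; pairs farther apart than max_dist are not matched.
    Returns (num_tp, num_fp, num_fn).
    """
    if len(pred_pts) == 0 or len(gt_pts) == 0:
        return 0, len(pred_pts), len(gt_pts)
    dists = cdist(pred_pts, gt_pts)  # (num_pred, num_gt) pairwise distances
    order = np.dstack(np.unravel_index(np.argsort(dists, axis=None), dists.shape))[0]
    used_pred, used_gt, tp = set(), set(), 0
    for i, j in order:
        if dists[i, j] > max_dist:
            break  # pairs are sorted, so all remaining ones are too far
        if i in used_pred or j in used_gt:
            continue  # one-to-one constraint
        used_pred.add(i); used_gt.add(j); tp += 1
    fp = len(pred_pts) - tp  # unmatched predictions -> false positives
    fn = len(gt_pts) - tp    # unmatched GT points  -> false negatives
    return tp, fp, fn

def prf1(tp, fp, fn, eps=1e-8):
    p = tp / (tp + fp + eps)
    r = tp / (tp + fn + eps)
    return p, r, 2 * p * r / (p + r + eps)

# Example: p, r, f1 = prf1(*greedy_match(pred_centers, gt_points))
```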
4.2.2. Computing Per-Class Metrics
4.2.3. Summarizing Metrics for Overall Performance
4.3. Experimental Setup
5. Results and Discussion
5.1. Performance Comparison
5.2. Ablation Study
- Base Model: Includes the hybrid CNN/Transformer encoder and convolutional decoder described in Section 3.2 and Section 3.5, using identity shortcuts as horizontal skip connections at the three deepest levels of the encoder.
- Feature Bridge Model: Enhances the base model with feature bridges as horizontal skip connections, as described in Section 3.3.
- CellRegNet Model: Incorporates the global context-guided feature selection (GCFS) module described in Section 3.4 to select the most pertinent local features based on global information.
5.3. Comparison of Loss Functions
5.4. Qualitative Results
5.5. Discussion
6. Conclusions
- We propose CellRegNet, a novel hybrid CNN/Transformer model for accurate cell detection in histopathological images using point annotations. CellRegNet effectively captures and integrates multi-scale visual cues, addressing the complexity of cellular structures in histopathological tissues.
- We introduce feature bridges as horizontal skip connections, which enlarge the receptive field and recalibrate feature maps. This innovation enhances the model’s ability to capture and leverage information across various scales.
- We design global context-guided feature selection (GCFS) blocks that leverage cross-attention mechanisms. These blocks enable the model to select the most informative local features under the guidance of global context, significantly improving cell detection accuracy (a schematic sketch follows this list).
- We propose a contrastive regularization loss that incorporates spatial distribution priors of cells, enhancing the distinction between predicted density maps of different cell types and reducing false positives in multi-class cell detection.
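To make the cross-attention idea behind GCFS concrete, the following is a minimal, hypothetical sketch rather than the paper's actual module: the class name, pooling choice, head count, and tensor shapes are all assumptions. Local feature positions act as queries against a compact set of pooled global-context tokens, so global information steers which local features are emphasized:

```python
import torch
import torch.nn as nn

class CrossAttentionSelect(nn.Module):
    """Toy GCFS-style block (illustrative only): local features (queries)
    attend to a pooled global-context summary (keys/values)."""
    def __init__(self, dim, num_heads=4, ctx_tokens=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(ctx_tokens)  # coarse global summary
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                              # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = x.flatten(2).transpose(1, 2)               # (B, H*W, C) local queries
        ctx = self.pool(x).flatten(2).transpose(1, 2)  # (B, ctx_tokens^2, C)
        out, _ = self.attn(self.norm(q), ctx, ctx)     # globally guided selection
        return (q + out).transpose(1, 2).reshape(b, c, h, w)
```

The real GCFS block presumably differs in how the global context is formed and fused; the sketch only illustrates the query/key-value roles described above.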
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Hosseini, M.S.; Bejnordi, B.E.; Trinh, V.Q.H.; Chan, L.; Hasan, D.; Li, X.; Yang, S.; Kim, T.; Zhang, H.; Wu, T.; et al. Computational pathology: A survey review and the way forward. J. Pathol. Inform. 2024, 15, 100357.
2. Van der Laak, J.; Litjens, G.; Ciompi, F. Deep learning in histopathology: The path to the clinic. Nat. Med. 2021, 27, 775–784.
3. Pantanowitz, L.; Sharma, A.; Carter, A.B.; Kurc, T.; Sussman, A.; Saltz, J. Twenty years of digital pathology: An overview of the road travelled, what is on the horizon, and the emergence of vendor-neutral archives. J. Pathol. Inform. 2018, 9, 40.
4. Wang, D.; Khosla, A.; Gargeya, R.; Irshad, H.; Beck, A.H. Deep learning for identifying metastatic breast cancer. arXiv 2016, arXiv:1606.05718.
5. Xu, H.; Usuyama, N.; Bagga, J.; Zhang, S.; Rao, R.; Naumann, T.; Wong, C.; Gero, Z.; González, J.; Gu, Y.; et al. A whole-slide foundation model for digital pathology from real-world data. Nature 2024, 630, 181–188.
6. Chen, R.J.; Ding, T.; Lu, M.Y.; Williamson, D.F.; Jaume, G.; Song, A.H.; Chen, B.; Zhang, A.; Shao, D.; Shaban, M.; et al. Towards a general-purpose foundation model for computational pathology. Nat. Med. 2024, 30, 850–862.
7. Ushakov, E.; Naumov, A.; Fomberg, V.; Vishnyakova, P.; Asaturova, A.; Badlaeva, A.; Tregubova, A.; Karpulevich, E.; Sukhikh, G.; Fatkhudinov, T. EndoNet: A model for the automatic calculation of H-score on histological slides. Informatics 2023, 10, 90.
8. Huang, Z.; Ding, Y.; Song, G.; Wang, L.; Geng, R.; He, H.; Du, S.; Liu, X.; Tian, Y.; Liang, Y.; et al. BCData: A large-scale dataset and benchmark for cell detection and counting. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, 4–8 October 2020, Part V; Springer: Cham, Switzerland, 2020; pp. 289–298.
9. Srinidhi, C.L.; Ciga, O.; Martel, A.L. Deep neural network models for computational histopathology: A survey. Med. Image Anal. 2021, 67, 101813.
10. Cireşan, D.C.; Giusti, A.; Gambardella, L.M.; Schmidhuber, J. Mitosis detection in breast cancer histology images with deep neural networks. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2013: 16th International Conference, Nagoya, Japan, 22–26 September 2013, Part II; Springer: Cham, Switzerland, 2013; pp. 411–418.
11. Chen, H.; Dou, Q.; Wang, X.; Qin, J.; Heng, P. Mitosis detection in breast cancer histology images via deep cascaded networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30.
12. Rao, S. MITOS-RCNN: A novel approach to mitotic figure detection in breast cancer histopathology images using region based convolutional neural networks. arXiv 2018, arXiv:1807.01788.
13. Lv, G.; Wen, K.; Wu, Z.; Jin, X.; An, H.; He, J. Nuclei R-CNN: Improve Mask R-CNN for nuclei segmentation. In Proceedings of the 2019 IEEE 2nd International Conference on Information Communication and Signal Processing (ICICSP), Weihai, China, 28–30 September 2019; pp. 357–362.
14. Kainz, P.; Urschler, M.; Schulter, S.; Wohlhart, P.; Lepetit, V. You should use regression to detect cells. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015, Part III; Springer: Cham, Switzerland, 2015; pp. 276–283.
15. Guo, Y.; Stein, J.; Wu, G.; Krishnamurthy, A. SAU-Net: A universal deep network for cell counting. In Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, Niagara Falls, NY, USA, 7–10 September 2019; pp. 299–306.
16. Li, Y.; Zhang, X.; Chen, D. CSRNet: Dilated convolutional neural networks for understanding the highly congested scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1091–1100.
17. Naumov, A.; Ushakov, E.; Ivanov, A.; Midiber, K.; Khovanskaya, T.; Konyukova, A.; Vishnyakova, P.; Nora, S.; Mikhaleva, L.; Fatkhudinov, T.; et al. EndoNuke: Nuclei detection dataset for estrogen and progesterone stained IHC endometrium scans. Data 2022, 7, 75.
18. Zhang, Y.; Zhou, D.; Chen, S.; Gao, S.; Ma, Y. Single-image crowd counting via multi-column convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 589–597.
19. Sirinukunwattana, K.; Raza, S.E.A.; Tsang, Y.W.; Snead, D.R.; Cree, I.A.; Rajpoot, N.M. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans. Med. Imaging 2016, 35, 1196–1206.
20. Xie, Y.; Xing, F.; Shi, X.; Kong, X.; Su, H.; Yang, L. Efficient and robust cell detection: A structured regression approach. Med. Image Anal. 2018, 44, 245–254.
21. Qu, H.; Wu, P.; Huang, Q.; Yi, J.; Yan, Z.; Li, K.; Riedlinger, G.M.; De, S.; Zhang, S.; Metaxas, D.N. Weakly supervised deep nuclei segmentation using partial points annotation in histopathology images. IEEE Trans. Med. Imaging 2020, 39, 3655–3666.
22. Liang, D.; Xu, W.; Zhu, Y.; Zhou, Y. Focal inverse distance transform maps for crowd localization. IEEE Trans. Multimed. 2022, 25, 6040–6052.
23. Li, B.; Chen, J.; Yi, H.; Feng, M.; Yang, Y.; Zhu, Q.; Bu, H. Exponential distance transform maps for cell localization. Eng. Appl. Artif. Intell. 2024, 132, 107948.
24. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
25. Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. arXiv 2015, arXiv:1511.07122.
26. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
27. Wang, P.; Chen, P.; Yuan, Y.; Liu, D.; Huang, Z.; Hou, X.; Cottrell, G. Understanding convolution for semantic segmentation. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 1451–1460.
28. Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3349–3364.
29. Zhang, C.; Chen, J.; Li, B.; Feng, M.; Yang, Y.; Zhu, Q.; Bu, H. Difference-deformable convolution with pseudo scale instance map for cell localization. IEEE J. Biomed. Health Inform. 2024, 28, 355–366.
30. Bai, S.; He, Z.; Qiao, Y.; Hu, H.; Wu, W.; Yan, J. Adaptive dilated network with self-correction supervision for counting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 4594–4603.
31. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015, Part III; Springer: Cham, Switzerland, 2015; pp. 234–241.
32. Li, J.; Wu, J.; Qi, J.; Zhang, M.; Cui, Z. PGC-Net: A novel encoder-decoder network with path gradient flow control for cell counting. IEEE Access 2024, 12, 68847–68856.
33. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010.
34. Hatamizadeh, A.; Tang, Y.; Nath, V.; Yang, D.; Myronenko, A.; Landman, B.; Roth, H.R.; Xu, D. UNETR: Transformers for 3D medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 574–584.
35. Hatamizadeh, A.; Nath, V.; Tang, Y.; Yang, D.; Roth, H.R.; Xu, D. Swin UNETR: Swin Transformers for semantic segmentation of brain tumors in MRI images. In Proceedings of the International MICCAI Brainlesion Workshop; Springer: Cham, Switzerland, 2021; pp. 272–284.
36. He, Y.; Nath, V.; Yang, D.; Tang, Y.; Myronenko, A.; Xu, D. SwinUNETR-V2: Stronger Swin Transformers with stagewise convolutions for 3D medical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2023; pp. 416–426.
37. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 10012–10022.
38. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1106–1114.
39. Islam, M.A.; Jia, S.; Bruce, N.D. How much position information do convolutional neural networks encode? arXiv 2020, arXiv:2001.08248.
40. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations, Virtual, 3–7 May 2021.
41. Ding, X.; Zhang, X.; Han, J.; Ding, G. Scaling up your kernels to 31×31: Revisiting large kernel design in CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11963–11975.
42. Wu, H.; Xiao, B.; Codella, N.; Liu, M.; Dai, X.; Yuan, L.; Zhang, L. CvT: Introducing convolutions to vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 22–31.
43. Liu, Z.; Mao, H.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11976–11986.
44. Woo, S.; Debnath, S.; Hu, R.; Chen, X.; Liu, Z.; Kweon, I.S.; Xie, S. ConvNeXt V2: Co-designing and scaling ConvNets with masked autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 16133–16142.
45. Cao, X.; Wang, Z.; Zhao, Y.; Su, F. Scale aggregation network for accurate and efficient crowd counting. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 734–750.
46. Paul Cohen, J.; Boucher, G.; Glastonbury, C.A.; Lo, H.Z.; Bengio, Y. Count-ception: Counting by fully convolutional redundant counting. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 18–26.
47. Vetvicka, V.; Fiala, L.; Garzon, S.; Buzzaccarini, G.; Terzic, M.; Laganà, A.S. Endometriosis and gynaecological cancers: Molecular insights behind a complex machinery. Menopause Rev. Menopauzalny 2021, 20, 201–206.
48. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32, 8024–8035.
Data augmentation settings:

| Method | Parameters |
|---|---|
| Random Crop | Crop size = 256 |
| Random Flip | Randomly chosen from {not applied, horizontal, vertical} |
| Random Rotation | Randomly chosen from {0°, 90°, 180°, 270°} |
| Random Color Jitter | Probability = 0.5, Brightness = 0.2, Contrast = 0.2, Saturation = 0.1, Hue = 0.05 |
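For reference, a minimal torchvision sketch of this augmentation recipe (a reconstruction, not the authors' code; in density-map regression the geometric transforms must also be applied to the target density map, which this image-only sketch omits):

```python
import random
import torchvision.transforms as T
import torchvision.transforms.functional as F

class RandomFlipRotate:
    """Flip chosen from {none, horizontal, vertical}, then a rotation
    chosen from {0, 90, 180, 270} degrees, matching the table above."""
    def __call__(self, img):
        flip = random.choice([None, F.hflip, F.vflip])
        if flip is not None:
            img = flip(img)
        angle = random.choice([0, 90, 180, 270])
        return F.rotate(img, angle) if angle else img

augment = T.Compose([
    T.RandomCrop(256),
    RandomFlipRotate(),
    T.RandomApply(
        [T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.1, hue=0.05)],
        p=0.5,
    ),
])
```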
Detection performance on the BCData dataset:

| Method | Positive P (%) | Positive R (%) | Positive F1 (%) | Negative P (%) | Negative R (%) | Negative F1 (%) | Mean P (%) | Mean R (%) | Mean F1 (%) |
|---|---|---|---|---|---|---|---|---|---|
| U-Net [31] | 84.49 | 86.43 | 85.44 | 84.31 | 82.88 | 83.59 | 84.40 | 84.65 | 84.52 |
| UNETR [34] | 84.49 | 85.18 | 84.83 | 81.94 | 82.39 | 82.16 | 83.21 | 83.78 | 83.50 |
| SAU-Net [15] | 84.15 | 88.03 | 86.05 | 82.83 | 85.66 | 84.22 | 83.49 | 86.84 | 85.13 |
| U-CSRNet [8] | 84.68 | 87.65 | 86.14 | 85.89 | 84.05 | 84.96 | 85.29 | 85.85 | 85.55 |
| HRNet [28] | 83.95 | 87.99 | 85.93 | 85.08 | 84.14 | 84.60 | 84.51 | 86.07 | 85.27 |
| DCLNet [29] | 84.24 | 88.24 | 86.19 | 84.10 | 85.25 | 84.67 | 84.17 | 86.74 | 85.43 |
| Swin UNETR [35] | 83.93 | 87.90 | 85.87 | 83.30 | 86.08 | 84.67 | 83.62 | 86.99 | 85.27 |
| Swin UNETR V2 [36] | 85.09 | 87.94 | 86.50 | 84.83 | 85.63 | 85.23 | 84.96 | 86.79 | 85.86 |
| PGC-Net [32] | 85.15 | 87.51 | 86.31 | 82.59 | 87.16 | 84.81 | 83.87 | 87.33 | 85.56 |
| Proposed CellRegNet | 86.37 | 87.12 | 86.74 | 84.11 | 88.03 | 86.03 | 85.24 | 87.58 | 86.38 |
Detection performance on the EndoNuke dataset:

| Method | Stroma P (%) | Stroma R (%) | Stroma F1 (%) | Epithelium P (%) | Epithelium R (%) | Epithelium F1 (%) | Mean P (%) | Mean R (%) | Mean F1 (%) |
|---|---|---|---|---|---|---|---|---|---|
| U-Net [31] | 83.25 | 90.58 | 86.76 | 72.09 | 79.25 | 75.50 | 77.67 | 84.92 | 81.13 |
| UNETR [34] | 80.92 | 88.49 | 84.53 | 68.49 | 68.67 | 68.58 | 74.70 | 78.58 | 76.56 |
| SAU-Net [15] | 84.51 | 90.34 | 87.33 | 75.49 | 79.85 | 77.61 | 80.00 | 85.09 | 82.47 |
| U-CSRNet [8] | 85.78 | 90.46 | 88.06 | 82.43 | 79.04 | 80.70 | 84.10 | 84.75 | 84.38 |
| HRNet [28] | 86.00 | 91.12 | 88.49 | 80.08 | 82.94 | 81.48 | 83.04 | 87.03 | 84.99 |
| DCLNet [29] | 84.22 | 92.13 | 88.00 | 81.12 | 83.82 | 82.45 | 82.67 | 87.97 | 85.22 |
| Swin UNETR [35] | 84.88 | 90.97 | 87.82 | 79.24 | 81.33 | 80.27 | 82.06 | 86.15 | 84.05 |
| Swin UNETR V2 [36] | 85.50 | 91.30 | 88.30 | 80.12 | 83.19 | 81.63 | 82.81 | 87.24 | 84.96 |
| PGC-Net [32] | 84.80 | 90.80 | 87.70 | 79.85 | 80.04 | 79.94 | 82.32 | 85.42 | 83.82 |
| Proposed CellRegNet | 86.51 | 90.73 | 88.57 | 84.28 | 80.88 | 82.54 | 85.39 | 85.81 | 85.56 |
Detection performance on the MBM dataset (bone marrow cells):

| Method | Precision (%) | Recall (%) | F1 (%) |
|---|---|---|---|
| U-Net | 87.18 | 95.39 | 91.10 |
| UNETR | 88.96 | 95.20 | 91.97 |
| SAU-Net | 92.72 | 94.24 | 93.47 |
| U-CSRNet | 93.24 | 94.14 | 93.69 |
| HRNet | 90.42 | 96.16 | 93.20 |
| DCLNet | 89.77 | 96.06 | 92.81 |
| Swin UNETR | 90.87 | 95.58 | 93.16 |
| Swin UNETR V2 | 91.23 | 95.97 | 93.54 |
| PGC-Net | 92.65 | 94.43 | 93.53 |
| Proposed CellRegNet | 93.19 | 94.62 | 93.90 |
Model size and computational cost:

| Method | # Params (M) | # MACs (G) |
|---|---|---|
| U-Net | 0.66 | 0.71 |
| UNETR | 87.71 | 4.71 |
| SAU-Net | 2.26 | 10.72 |
| U-CSRNet | 10.13 | 11.42 |
| HRNet | 72.49 | 95.56 |
| DCLNet | 68.70 | 35.70 |
| Swin UNETR | 1.59 | 0.95 |
| Swin UNETR V2 | 7.18 | 4.41 |
| PGC-Net | 2.27 | 11.32 |
| CellRegNet | 8.87 | 4.20 |
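Parameter and MAC counts like those above can be reproduced with a profiling utility; the sketch below uses the third-party thop package on a 256×256 input (the tool choice and stand-in network are assumptions, since the paper does not state how its numbers were measured):

```python
import torch
import torch.nn as nn
from thop import profile  # pip install thop; tool choice is an assumption

# Stand-in network; substitute the actual model to reproduce the table.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))
macs, params = profile(model, inputs=(torch.randn(1, 3, 256, 256),))
print(f"{params / 1e6:.2f} M params, {macs / 1e9:.2f} G MACs")
```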
Ablation study on the BCData dataset:

| Base | Bridge | GCFS | Positive P (%) | Positive R (%) | Positive F1 (%) | Negative P (%) | Negative R (%) | Negative F1 (%) | Mean P (%) | Mean R (%) | Mean F1 (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ✓ |  |  | 86.97 | 85.37 | 86.16 | 83.86 | 87.04 | 85.42 | 85.41 | 86.21 | 85.79 |
| ✓ | ✓ |  | 85.55 | 87.32 | 86.43 | 84.24 | 86.93 | 85.57 | 84.90 | 87.12 | 86.00 |
| ✓ | ✓ | ✓ | 86.37 | 87.12 | 86.74 | 84.11 | 88.03 | 86.03 | 85.24 | 87.58 | 86.38 |
Loss ablation on the BCData dataset (the two loss-term column labels are reconstructed from context):

| Regression Loss | + Contrastive Loss | Positive P (%) | Positive R (%) | Positive F1 (%) | Negative P (%) | Negative R (%) | Negative F1 (%) | Mean P (%) | Mean R (%) | Mean F1 (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| ✓ |  | 85.64 | 87.66 | 86.64 | 82.14 | 89.44 | 85.63 | 83.89 | 88.55 | 86.14 |
| ✓ | ✓ | 86.37 | 87.12 | 86.74 | 84.11 | 88.03 | 86.03 | 85.24 | 87.58 | 86.38 |
Loss ablation on the EndoNuke dataset (class headers follow this dataset's stroma/epithelium classes; loss-term column labels reconstructed from context):

| Regression Loss | + Contrastive Loss | Stroma P (%) | Stroma R (%) | Stroma F1 (%) | Epithelium P (%) | Epithelium R (%) | Epithelium F1 (%) | Mean P (%) | Mean R (%) | Mean F1 (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| ✓ |  | 85.76 | 91.24 | 88.41 | 81.55 | 82.79 | 82.16 | 83.65 | 87.01 | 85.29 |
| ✓ | ✓ | 86.51 | 90.73 | 88.57 | 84.28 | 80.88 | 82.54 | 85.39 | 85.81 | 85.56 |
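The contribution list above describes a contrastive regularization that pushes apart the predicted density maps of different cell types. The paper's exact formulation is not reproduced here; as a loosely hedged sketch of the general idea (the function name, weight lam, and overlap penalty are all assumptions), one could add a cross-class overlap penalty on top of a standard MSE regression term:

```python
import torch
import torch.nn.functional as F

def density_loss(pred, target, lam=0.1):
    """Sketch only: MSE regression plus a toy 'contrastive' penalty that
    discourages different classes from firing at the same pixels.
    pred, target: (B, num_classes, H, W) density maps. lam is an assumption;
    the paper's actual contrastive regularization may differ substantially.
    """
    reg = F.mse_loss(pred, target)
    flat = F.normalize(pred.flatten(2), dim=-1)     # (B, C, H*W), unit norm per class
    sim = torch.bmm(flat, flat.transpose(1, 2))     # (B, C, C) cosine overlap
    off_diag = sim - torch.diag_embed(torch.diagonal(sim, dim1=1, dim2=2))
    contrast = off_diag.abs().mean()                # penalize cross-class overlap
    return reg + lam * contrast
```

Consistent with such a penalty, the tables above show the contrastive term trading a little recall for higher precision on both datasets, i.e., fewer cross-class false positives.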