Predicting Very Early-Stage Breast Cancer in BI-RADS 3 Lesions of Large Population with Deep Learning
Abstract
1. Introduction
- (1) We developed a DL model for predicting very early-stage breast cancer in ultrasound (US) images and proposed a novel transfer learning approach, combining conventional transfer learning with knowledge distillation, to enhance the model’s performance (see the sketch after this list).
- (2) We employed multicenter datasets to validate the model’s generalization across center-specific data.
- (3) We compared the model’s diagnostic performance for early-stage breast cancer with that of six experienced radiologists on an external test dataset, confirming its clinical value.
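The transfer learning approach referred to in (1) pairs conventional transfer learning with knowledge distillation (Appendix A.3). The snippet below is only a minimal, generic PyTorch sketch of a distillation objective, not the authors’ implementation; the temperature `T`, weight `alpha`, and the `train_step` helper are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Knowledge-distillation objective (Hinton-style): a weighted sum of the
    KL divergence between temperature-softened teacher/student outputs and the
    usual cross-entropy on the hard labels."""
    soft_student = F.log_softmax(student_logits / T, dim=1)
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def train_step(student, teacher, images, labels, optimizer):
    """Hypothetical training step: the pre-trained teacher is frozen and only
    the student network is updated."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```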
2. Materials and Methods
2.1. Study Population
2.2. Imaging Interpretation and Lesion Segmentation
2.3. Model Development
2.4. Comparison of Predictive Models
2.5. Performance Comparison Between the EBCV Model and Radiologists
2.6. The Interpretability of the Model
2.7. Evaluation Metrics and Statistical Analysis
3. Results
3.1. Patient Characteristics
3.2. Performance Comparison with Different Backbones
3.3. Performance of the Proposed Model
3.4. Comparison of Diagnostic Efficiency Between Our Model and Radiologists
3.5. Interpretation of the DL Model
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
BI-RADS | Breast Imaging Reporting and Data System
US | Ultrasound
MRI | Magnetic resonance imaging
DL | Deep learning
CNN | Convolutional neural network
CTL | Conventional transfer learning
KD | Knowledge distillation
EBCV | Early breast cancer viewer
ROC | Receiver operating characteristic
NPV | Negative predictive value
PPV | Positive predictive value
AUC | Area under the receiver operating characteristic curve
SEN | Sensitivity
SPE | Specificity
ACC | Accuracy
CI | Confidence interval
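Several of the abbreviations above (SEN, SPE, ACC, PPV, NPV, AUC) denote the evaluation metrics reported in the results tables below. As a point of reference only, the following is a minimal sketch of how they follow from a binary confusion matrix; the `diagnostic_metrics` helper and the 0.5 threshold are illustrative assumptions (scikit-learn is assumed for the AUC), not the paper’s exact code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def diagnostic_metrics(y_true, y_prob, threshold=0.5):
    """SEN, SPE, ACC, PPV, NPV from thresholded predictions; AUC from probabilities.
    y_true: 1 = malignant, 0 = benign; y_prob: predicted malignancy probability.
    Assumes every denominator below is non-zero."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return {
        "SEN": tp / (tp + fn),           # sensitivity (recall for malignant cases)
        "SPE": tn / (tn + fp),           # specificity
        "ACC": (tp + tn) / len(y_true),  # accuracy
        "PPV": tp / (tp + fp),           # positive predictive value
        "NPV": tn / (tn + fn),           # negative predictive value
        "AUC": roc_auc_score(y_true, y_prob),
    }
```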
Appendix A
Appendix A.1. The Development of the Teacher Model
Appendix A.2. Data Pre-Processing and Implementation Details
Appendix A.3. Details of Transfer Learning Combined with Knowledge Distillation
Appendix A.4. Details of Grad-CAM
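Appendix A.4 concerns Grad-CAM, which is used to visualize the image regions driving the model’s predictions. The sketch below is a generic PyTorch/torchvision rendering of Grad-CAM on a ResNet50-style backbone (the backbone compared in Section 3.2); the `grad_cam` helper, the hooks, and the choice of target layer are assumptions, not the authors’ exact code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_layer, class_idx=None):
    """Grad-CAM heatmap: weight the target layer's activations by the
    spatially averaged gradients of the chosen class score."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["value"] = output

    def bwd_hook(_, grad_in, grad_out):
        gradients["value"] = grad_out[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    model.eval()
    logits = model(image)                      # image: (1, 3, H, W)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove()
    h2.remove()

    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # GAP over gradients
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
    return cam.squeeze().detach()

# Hypothetical usage with a ResNet50 backbone:
# model = models.resnet50(weights=None)
# heatmap = grad_cam(model, image_tensor, target_layer=model.layer4[-1])
```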
Devices | Dataset from SW (Training and Validation) | Dataset from SW (Internal Testing) | Dataset from TS (External Testing)
---|---|---|---
Philips EPIQ7 | 434 | 218 | - |
GE E9 | 260 | 130 | 242 |
Siemens | 86 | 42 | - |
Mindray R9 | 88 | 46 | - |
Philips EPIQ5 | - | - | 158 |
References
Specifications | Pre-Training, Transfer Learning, and Validation Set | Internal Testing Set | External Testing Set |
---|---|---|---|
Patients (685 patients from 2 centers; 252 patients developed cancers) | |||
Age | |||
<30 | 164 (44.9%) | 48 (28.7%) | 44 (28.8%) |
30~49 | 183 (50.1%) | 113 (67.7%) | 104 (68.0%) |
50~69 | 17 (4.7%) | 6 (3.6%) | 5 (3.2%) |
≥70 | 1 (0.3%) | 0 | 0 |
Diagnostic methods | |||
Biopsy | 258 (70.7%) | 87 (52.1%) | 153 (100%) |
Follow-up | 107 (29.3%) | 80 (47.9%) | 0 |
Lesions (852 lesions in total, 256 of which were malignant, including 102 malignant BI-RADS 3 lesions) | | |
Lesion size (mm²) | | |
<5 | 18 (4.1%) | 29 (13.3%) | 19 (9.5%) |
5~9.9 | 103 (23.7%) | 126 (57.8%) | 112 (56.0%) |
10~19.9 | 209 (48.2%) | 60 (27.5%) | 63 (31.5%) |
≥20 | 104 (24.0%) | 3 (1.4%) | 6 (3.0%) |
Lesion width (mm) | |||
<5 | 122 (28.1%) | 133 (61.0%) | 112 (56.0%) |
5~9.9 | 224 (51.6%) | 78 (35.8%) | 78 (39.0%) |
10~19.9 | 83 (19.1%) | 7 (3.2%) | 10 (5.0%) |
≥20 | 5 (1.2%) | 0 | 0 |
Aspect ratio | |||
≥1 | 7 (1.6%) | 2 (0.9%) | 5 (2.5%) |
<1 | 427 (98.4%) | 216 (99.1%) | 195 (97.5%) |
Boundary | |||
Circumscribed | 375 (86.4%) | 203 (93.1%) | 169 (84.5%) |
Others | 59 (13.6%) | 15 (6.9%) | 31 (15.5%) |
Morphology | |||
Regular | 389 (89.6%) | 210 (96.3%) | 178 (89%) |
Others | 45 (10.4%) | 8 (3.7%) | 22 (11%) |
Blood flow spectrum | |||
Pulsating | 7 (1.6%) | 0 | 3 (1.5%) |
Others | 427 (98.4%) | 218 (100%) | 197 (98.5%) |
Malignant type (biopsy result) | |||
Invasive carcinoma | 113 (48.3%) | 6 (42.9%) | 4 (50%) |
Carcinoma in situ | 93 (39.7%) | 7 (50%) | 4 (50%) |
Mucinous Adenocarcinoma | 18 (7.7%) | 0 | 0 |
Others | 10 (4.3%) | 1 (7.1%) | 0 |
Models | SEN | SPE | ACC | AUC (95% CI) | NPV | PPV |
---|---|---|---|---|---|---|
VGG16 | 0.571 (8/14) | 0.451 (92/204) | 0.459 (100/218) | 0.638 (0.530–0.691) | 0.862 (100/116) | 0.064 (8/125) |
SENet50 | 0.643 (9/14) | 0.475 (97/204) | 0.486 (106/218) | 0.674 (0.551–0.729) | 0.912 (115/126) | 0.076 (9/118) |
EfficientNet | 0.714 (10/14) | 0.480 (98/204) | 0.491 (107/218) | 0.681 (0.613–0.737) | 0.919 (114/124) | 0.086 (10/117) |
ResNet50 | 0.714 (10/14) | 0.505 (103/204) | 0.518 (113/218) | 0.717 (0.652–0.776) | 0.970 (127/131) | 0.090 (10/111) |
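The AUC columns in this and the following tables report point estimates with 95% confidence intervals. The paper’s exact interval method is not reproduced here; the snippet below shows one common alternative, a percentile bootstrap, purely as an illustration (the `bootstrap_auc_ci` helper and its parameters are assumptions, not the authors’ procedure).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_prob, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the AUC: resample cases with replacement
    and take the empirical alpha/2 and 1 - alpha/2 quantiles."""
    rng = np.random.default_rng(seed)
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:   # a resample needs both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_prob), (lo, hi)
```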
Dataset | Models | SEN | SPE | ACC | AUC (95% CI) | NPV | PPV
---|---|---|---|---|---|---|---
Validation | Sig-ch (B-mode) | 0.750 (12/16) | 0.563 (9/16) | 0.656 (21/32) | 0.777 (0.596–0.905) | 0.692 (9/13) | 0.632 (12/19)
Validation | Sig-ch (Doppler) | 0.750 (12/16) | 0.500 (8/16) | 0.625 (20/32) | 0.770 (0.587–0.899) | 0.667 (8/12) | 0.600 (12/20)
Validation | Dou-ch (CTL) | 0.812 (13/16) | 0.750 (12/16) | 0.781 (25/32) | 0.820 (0.645–0.933) | 0.800 (12/15) | 0.765 (13/17)
Validation | EBCV model | 0.812 (13/16) | 0.875 (14/16) | 0.844 (27/32) | 0.859 (0.691–0.956) | 0.824 (14/17) | 0.867 (13/15)
Testing | Sig-ch (B-mode) | 0.714 (10/14) | 0.505 (103/204) | 0.518 (113/218) | 0.717 (0.652–0.776) | 0.970 (127/131) | 0.090 (10/111)
Testing | Sig-ch (Doppler) | 0.857 (12/14) | 0.436 (89/204) | 0.463 (101/218) | 0.700 (0.635–0.760) | 0.978 (89/91) | 0.094 (12/127)
Testing | Dou-ch (CTL) | 0.785 (11/14) | 0.534 (109/204) | 0.550 (120/218) | 0.721 (0.656–0.779) | 0.976 (121/124) | 0.104 (11/106)
Testing | EBCV model | 0.785 (11/14) | 0.833 (170/204) | 0.830 (181/218) | 0.880 (0.829–0.920) | 0.983 (170/173) | 0.244 (11/45)
Reader/Model | SEN | SPE | ACC | AUC (95% CI) | NPV | PPV |
---|---|---|---|---|---|---|
Radiologist 1 | 0.125 (1/8) | 0.969 (186/192) | 0.935 (187/200) | N/A | 0.963 (186/193) | 0.143 (1/7) |
Radiologist 2 | 0.125 (1/8) | 0.979 (188/192) | 0.945 (189/200) | N/A | 0.969 (188/194) | 0.200 (1/5) |
Radiologist 3 | 0.125 (1/8) | 0.974 (187/192) | 0.940 (188/200) | N/A | 0.964 (187/194) | 0.167 (1/6) |
Radiologist 4 | 0 (0/8) | 0.989 (190/192) | 0.950 (190/200) | N/A | 0.960 (190/198) | 0 |
Radiologist 5 | 0 (0/8) | 0.989 (190/192) | 0.950 (190/200) | N/A | 0.960 (190/198) | 0 |
Radiologist 6 | 0.125 (1/8) | 0.995 (191/192) | 0.960 (192/200) | N/A | 0.965 (191/198) | 0.500 (1/2) |
Radiologists’ Avg | 0.125 (1/8) | 1.00 (192/192) | 0.965 (193/200) | 0.653 (0.583–0.719) | 0.965 (192/199) | 1.00 (1/1) |
EBCV Model | 0.875 (7/8) | 0.786 (151/192) | 0.790 (158/200) | 0.910 (0.861–0.945) | 0.993 (151/152) | 0.146 (7/48) |