OTC-NET: A Multimodal Method for Accurate Diagnosis of Ovarian Cancer in O-RADS Category 4 Masses
Simple Summary
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Population
2.2. OTC-NET Network Architecture
2.3. Study Methods
2.4. Statistical Analysis
3. Results
3.1. Basic Information of Patients
3.2. OTC-NET’s Performance
3.3. DL Model’s Performance
3.4. Comparison of OTC-NET, DenseNet201, and Radiologists
3.5. OTC-NET–Assisted Diagnosis by Radiologists
3.6. Interpretability Analysis
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| O-RADS | Ovarian-Adnexal Reporting and Data System |
| US | Ultrasonography |
| AUC | Area under the receiver operating characteristic curve |
| DL | Deep learning |
| YI | Youden's index |
| AI | Artificial intelligence |
References
| Characteristic | Training | Validation | Test | p Value |
|---|---|---|---|---|
| No. of patients | 362 | 51 | 105 | |
| Histological type | | | | 0.930 |
| Benign | 187 (51.7) | 26 (51.0) | 55 (52.4) | |
| Malignant | 175 (48.3) | 25 (49.0) | 50 (47.6) | |
| Age at diagnosis (y) | | | | 0.341 |
| Mean ± SD | 47 ± 15 | 47 ± 13 | 47 ± 17 | |
| Range | 13–89 | 12–73 | 11–81 | |
| Largest lesion diameter (mm) | | | | 0.217 |
| Mean ± SD | 124.9 ± 70.7 | 119.6 ± 59.7 | 129.7 ± 74.6 | |
| Range | 16–600 | 25–303 | 25–526 | |
| CA125 (U/mL) | | | | 0.236 |
| >35 | 169 (46.7) | 23 (45.1) | 46 (43.8) | |
| ≤35 | 193 (53.3) | 28 (54.9) | 59 (56.2) | |
| Menopausal status | | | | 0.422 |
| Premenopausal | 205 (56.6) | 25 (49.0) | 55 (52.4) | |
| Postmenopausal | 157 (43.4) | 26 (51.0) | 50 (47.6) | |

Data are numbers of patients (percentages) unless otherwise indicated.
| Combination | Feature Set | AUC (95% CI) | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|---|
| FC1 | CA125, Diameter, Menopause, Confidence | 0.81 (0.72–0.89) | 0.733 | 0.660 | 0.800 |
| FC2 | CA125, Confidence | 0.80 (0.72–0.89) | 0.733 | 0.700 | 0.764 |
| FC3 | CA125, Menopause, Confidence | 0.80 (0.72–0.89) | 0.724 | 0.640 | 0.800 |
| FC4 | CA125, Diameter, Confidence | 0.79 (0.71–0.88) | 0.714 | 0.640 | 0.782 |
| FC5 | CA125, Age, Diameter, Menopause, Confidence | 0.78 (0.69–0.87) | 0.695 | 0.580 | 0.800 |
| FC6 | Age, Confidence | 0.76 (0.66–0.85) | 0.714 | 0.560 | 0.855 |
| FC7 | Menopause, Confidence | 0.75 (0.66–0.84) | 0.667 | 0.560 | 0.764 |
| FC8 | Diameter, Confidence | 0.74 (0.65–0.83) | 0.724 | 0.660 | 0.782 |
| Model | AUC (95% CI) | Accuracy | Sensitivity | Specificity | YI |
|---|---|---|---|---|---|
| DenseNet201 | 0.76 (0.66–0.85) | 0.733 | 0.640 | 0.818 | 0.458 |
| ResNet34 | 0.73 (0.63–0.83) | 0.648 | 0.520 | 0.764 | 0.284 |
| MobileNet_V2 | 0.71 (0.61–0.80) | 0.686 | 0.640 | 0.727 | 0.367 |
| ResNet101 | 0.70 (0.59–0.80) | 0.629 | 0.500 | 0.745 | 0.245 |
| VGG13 | 0.70 (0.58–0.80) | 0.657 | 0.640 | 0.672 | 0.312 |
| DenseNet121 | 0.69 (0.58–0.79) | 0.619 | 0.740 | 0.509 | 0.249 |
| EfficientNet_B5 | 0.68 (0.57–0.77) | 0.610 | 0.680 | 0.545 | 0.225 |
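The YI column follows directly from the sensitivity and specificity columns, since Youden's index for a binary test is YI = sensitivity + specificity − 1. A minimal check in Python, using three rows from the table above:

```python
def youden_index(sensitivity: float, specificity: float) -> float:
    """Youden's index (YI = sensitivity + specificity - 1)."""
    return sensitivity + specificity - 1.0

# (sensitivity, specificity) pairs taken from the backbone comparison above.
backbones = {
    "DenseNet201": (0.640, 0.818),
    "ResNet34": (0.520, 0.764),
    "MobileNet_V2": (0.640, 0.727),
}

for name, (sens, spec) in backbones.items():
    print(f"{name}: YI = {youden_index(sens, spec):.3f}")
```

Printed to three decimals, these reproduce the YI column (0.458, 0.284, 0.367).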
| Method | AUC (95% CI) | PR-AUC | Accuracy | Sensitivity | Specificity | YI |
|---|---|---|---|---|---|---|
| OTC-NET | 0.81 (0.72–0.89) abc | 0.795 | 0.733 | 0.660 | 0.800 a | 0.460 |
| DenseNet201 | 0.76 (0.66–0.85) a | 0.767 | 0.733 | 0.640 b | 0.818 ac | 0.458 |
| Senior Radiologist | 0.68 (0.59–0.76) | 0.611 | 0.676 | 0.760 b | 0.600 b | 0.360 |
| Intermediate Radiologist | 0.65 (0.53–0.70) | 0.638 | 0.657 | 0.420 ac | 0.873 ac | 0.293 |
| Junior Radiologist | 0.62 (0.56–0.72) | 0.555 | 0.610 | 0.740 b | 0.491 b | 0.231 |
AI-assisted performance (change relative to unassisted performance in parentheses):

| Radiologist | AUC (95% CI) | PR-AUC | Accuracy | Sensitivity | Specificity | NRI |
|---|---|---|---|---|---|---|
| Junior Radiologist | 0.71 (0.61–0.79) (+0.090) ↑ | 0.645 (+0.090) ↑ | 0.705 (+0.095) ↑ | 0.720 (−0.020) | 0.691 (+0.200) ↑ | 0.180 |
| Intermediate Radiologist | 0.65 (0.57–0.73) (0.00) | 0.629 (−0.009) | 0.657 (0.00) | 0.460 (+0.040) ↑ | 0.836 (−0.037) | 0.004 |
| Senior Radiologist | 0.76 (0.69–0.85) (+0.080) ↑ | 0.701 (+0.090) ↑ | 0.762 (+0.086) ↑ | 0.780 (+0.020) ↑ | 0.745 (+0.145) ↑ | 0.165 |
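The NRI column is consistent with the categorical net reclassification improvement for a binary test, NRI = ΔSensitivity + ΔSpecificity, with the deltas taken as AI-assisted minus unassisted. The exact definition used in the paper is not reproduced in this section, so treat the following as an illustrative check against the tabulated values, not an authoritative formula:

```python
def nri_binary(sens_before: float, spec_before: float,
               sens_after: float, spec_after: float) -> float:
    """Categorical NRI for a binary test: net gain in correctly classified
    malignant cases plus net gain in correctly classified benign cases."""
    return (sens_after - sens_before) + (spec_after - spec_before)

# Junior radiologist: unassisted vs. AI-assisted values from the tables above.
print(f"NRI = {nri_binary(0.740, 0.491, 0.720, 0.691):.3f}")
```

This reproduces the reported 0.180 for the junior radiologist; the senior radiologist's values (0.760, 0.600 → 0.780, 0.745) likewise give 0.165.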
| Radiologist | True Class | Predicted Benign (Before AI) | Predicted Malignant (Before AI) | Predicted Benign (AI-Assisted) | Predicted Malignant (AI-Assisted) |
|---|---|---|---|---|---|
| Junior Radiologist | Benign | 27 | 28 | 38 | 17 |
| Junior Radiologist | Malignant | 13 | 37 | 14 | 36 |
| Intermediate Radiologist | Benign | 48 | 7 | 46 | 9 |
| Intermediate Radiologist | Malignant | 29 | 21 | 27 | 23 |
| Senior Radiologist | Benign | 33 | 22 | 41 | 14 |
| Senior Radiologist | Malignant | 12 | 38 | 11 | 39 |
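The reader-level sensitivity, specificity, and accuracy reported earlier can be recovered from these confusion-matrix counts, taking benign as the negative class and malignant as the positive class. A minimal check using the junior radiologist's rows:

```python
def reader_metrics(tn: int, fp: int, fn: int, tp: int):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Junior radiologist, before AI: 27 TN, 28 FP, 13 FN, 37 TP.
sens, spec, acc = reader_metrics(tn=27, fp=28, fn=13, tp=37)
print(f"before AI:   sens={sens:.3f}, spec={spec:.3f}, acc={acc:.3f}")

# Junior radiologist, AI-assisted: 38 TN, 17 FP, 14 FN, 36 TP.
sens, spec, acc = reader_metrics(tn=38, fp=17, fn=14, tp=36)
print(f"AI-assisted: sens={sens:.3f}, spec={spec:.3f}, acc={acc:.3f}")
```

To three decimals these match the junior radiologist's rows in the comparison tables: 0.740/0.491/0.610 unassisted and 0.720/0.691/0.705 AI-assisted.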
| Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. | 
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, P.; Ruan, Y.; Fan, Y.; Li, P.; Liu, Z.; Wu, S.; Zheng, X.; Wu, X.; Liu, Y.; Liu, S. OTC-NET: A Multimodal Method for Accurate Diagnosis of Ovarian Cancer in O-RADS Category 4 Masses. Cancers 2025, 17, 3466. https://doi.org/10.3390/cancers17213466