HaTU-Net: Harmonic Attention Network for Automated Ovarian Ultrasound Quantification in Assisted Pregnancy
Abstract
1. Introduction
- We propose HaTU-Net, a segmentation network for delineating ovaries and follicles in transvaginal ultrasound (TVUS) images.
- We propose replacing the standard convolutional filter with harmonic convolution [33], in which the input is first decomposed onto a discrete cosine transform (DCT) basis and the resulting responses are combined using learned weights.
- We developed a harmonic attention (HA) block to improve feature discriminability between target and background pixels in the segmentation stage. The HA block suppresses artifacts and emphasizes salient features, leading to improved segmentation results for HaTU-Net.
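A minimal sketch of the harmonic-convolution idea from [33]: a fixed 2D DCT filter bank decomposes each input channel, and a learned 1×1 convolution recombines the frequency responses. The layer below is illustrative only (class name, kernel size, and the depthwise-then-pointwise factorization are assumptions, not the authors' exact implementation):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def dct_filter_bank(k):
    """Build a bank of k*k fixed 2D DCT-II basis filters of size k x k."""
    bank = torch.zeros(k * k, 1, k, k)
    for u in range(k):
        for v in range(k):
            for x in range(k):
                for y in range(k):
                    bank[u * k + v, 0, x, y] = (
                        math.cos(math.pi * u * (2 * x + 1) / (2 * k))
                        * math.cos(math.pi * v * (2 * y + 1) / (2 * k))
                    )
    return bank

class HarmonicConv2d(nn.Module):
    """Fixed DCT decomposition per channel + learned 1x1 recombination (sketch)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.in_ch, self.k = in_ch, k
        # Non-trainable DCT basis, replicated for a depthwise convolution.
        self.register_buffer("dct", dct_filter_bank(k).repeat(in_ch, 1, 1, 1))
        # Learned weights combine the k*k frequency responses into out_ch maps.
        self.mix = nn.Conv2d(in_ch * k * k, out_ch, kernel_size=1)

    def forward(self, x):
        # Depthwise conv: project each channel onto the DCT basis.
        spectral = F.conv2d(x, self.dct, padding=self.k // 2, groups=self.in_ch)
        return self.mix(spectral)
```

Because the DCT filters are fixed, only the 1×1 mixing weights are learned, which is what distinguishes a harmonic block from a standard convolution of the same receptive field.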
2. Material and Methods
2.1. Dataset
2.2. HaTU-Net Architecture
2.3. Feature Extraction with Harmonic Convolution
2.4. Harmonic Attention Block
2.5. Cost Function
2.6. Follicle Counting
- Load ground truth mask and predicted mask images.
- Measure follicle diameters on both images in pixels.
- Convert pixel diameters to physical measurements.
- Exclude follicles outside the recruitable range of 2–10 mm in diameter by setting their pixels to black: counting antral follicles < 2 mm in diameter risks mistaking small anechoic structures such as vessels or artifacts for follicles, while counting dominant follicles > 10 mm lacks evidence of clinical practicality [3].
- Compute the Dice similarity coefficient (DSC) between the ground truth mask and the predicted mask.
- Count the correctly detected follicles in the predicted mask (a follicle is considered correctly detected if its Dice score exceeds 0.5).
- Calculate the number of detected follicles from the predicted mask.
- Calculate the number of actual follicles from the ground truth mask.
- Evaluate the precision and recall of the predicted follicle counts using the formulas from [37].
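The steps above can be sketched with NumPy and SciPy connected-component labelling. The `pixel_spacing_mm` parameter and the one-to-one Dice matching are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from scipy import ndimage

def count_follicles(gt_mask, pred_mask, pixel_spacing_mm, dmin=2.0, dmax=10.0):
    """Filter follicles by diameter, match prediction to ground truth by
    per-follicle Dice > 0.5, then report counts, precision, and recall."""

    def keep_recruitable(mask):
        labels, n = ndimage.label(mask > 0)
        out = np.zeros_like(mask)
        for i in range(1, n + 1):
            comp = labels == i
            # Equivalent-circle diameter, converted from pixels to mm.
            d_mm = 2.0 * np.sqrt(comp.sum() / np.pi) * pixel_spacing_mm
            if dmin <= d_mm <= dmax:   # outside 2-10 mm -> pixels set black
                out[comp] = 1
        return out

    gt, pred = keep_recruitable(gt_mask), keep_recruitable(pred_mask)
    gt_labels, n_real = ndimage.label(gt)
    pred_labels, n_detected = ndimage.label(pred)

    # A predicted follicle is correct if it overlaps some ground-truth
    # follicle with Dice score > 0.5.
    correct = 0
    for j in range(1, n_detected + 1):
        p = pred_labels == j
        for i in range(1, n_real + 1):
            g = gt_labels == i
            dice = 2.0 * np.logical_and(p, g).sum() / (p.sum() + g.sum())
            if dice > 0.5:
                correct += 1
                break

    precision = correct / n_detected if n_detected else 0.0
    recall = correct / n_real if n_real else 0.0
    return n_real, n_detected, correct, precision, recall
```

With a calibrated pixel spacing, the same routine yields the per-method detection counts reported in the follicle-counting table.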
3. Experimental Design and Results
3.1. Implementation Details
3.2. Ovary Segmentation
3.3. Follicle Segmentation
3.4. Ablation Study
3.5. Follicle Counting
3.6. Discussion and Limitations
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Jirge, P.R. Poor ovarian reserve. J. Hum. Reprod. Sci. 2016, 9, 63.
2. Rosen, M.P.; Johnstone, E.; Addauan-Andersen, C.; Cedars, M.I. A lower antral follicle count is associated with infertility. Fertil. Steril. 2011, 95, 1950–1954.
3. Coelho Neto, M.A.; Ludwin, A.; Borrell, A.; Benacerraf, B.; Dewailly, D.; da Silva Costa, F.; Condous, G.; Alcazar, J.L.; Jokubkiene, L.; Guerriero, S.; et al. Counting ovarian antral follicles by ultrasound: A practical guide. Ultrasound Obstet. Gynecol. 2018, 51, 10–20.
4. Faghih, R.T.; Styer, A.K.; Brown, E.N. Automated ovarian follicular monitoring: A novel real-time approach. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju Island, Republic of Korea, 11–15 July 2017; pp. 632–635.
5. Wertheimer, A.; Nagar, R.; Oron, G.; Meizner, I.; Fisch, B.; Ben-Haroush, A. Fertility Treatment Outcomes After Follicle Tracking With Standard 2-Dimensional Sonography Versus 3-Dimensional Sonography-Based Automated Volume Count: Prospective Study. J. Ultrasound Med. 2018, 37, 859–866.
6. Wu, W.T.; Chen, L.R.; Chang, H.C.; Chang, K.V.; Özçakar, L. Quantitative ultrasonographic analysis of changes of the suprascapular nerve in the aging population with shoulder pain. Front. Bioeng. Biotechnol. 2021, 9, 121.
7. Thomson, H.; Yang, S.; Cochran, S. Machine learning-enabled quantitative ultrasound techniques for tissue differentiation. J. Med. Ultrason. 2022, 49, 517–528.
8. Li, H.; Fang, J.; Liu, S.; Liang, X.; Yang, X.; Mai, Z.; Van, M.T.; Wang, T.; Chen, Z.; Ni, D. CR-Unet: A composite network for ovary and follicle segmentation in ultrasound images. IEEE J. Biomed. Health Inform. 2019, 24, 974–983.
9. Hiremath, P.; Tegnoor, J.R. Recognition of follicles in ultrasound images of ovaries using geometric features. In Proceedings of the 2009 International Conference on Biomedical and Pharmaceutical Engineering, Singapore, 2–4 December 2009; pp. 1–8.
10. Deng, Y.; Wang, Y.; Chen, P. Automated detection of polycystic ovary syndrome from ultrasound images. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 4772–4775.
11. Hiremath, P.; Tegnoor, J.R. Automatic detection of follicles in ultrasound images of ovaries using edge-based method. IJCA Spec. Issue RTIPPR 2010, 2, 120–125.
12. Potočnik, B.; Zazula, D.; Korže, D. Automated computer-assisted detection of follicles in ultrasound images of ovary. J. Med. Syst. 1997, 21, 445–457.
13. Sultana, F.; Sufian, A.; Dutta, P. Evolution of image segmentation using deep convolutional neural network: A survey. Knowl. Based Syst. 2020, 201, 106062.
14. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
15. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
16. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
17. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999.
18. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
19. Romera, E.; Alvarez, J.M.; Bergasa, L.M.; Arroyo, R. Erfnet: Efficient residual factorized convnet for real-time semantic segmentation. IEEE Trans. Intell. Transp. Syst. 2017, 19, 263–272.
20. Yu, C.; Gao, C.; Wang, J.; Yu, G.; Shen, C.; Sang, N. Bisenet v2: Bilateral network with guided aggregation for real-time semantic segmentation. arXiv 2020, arXiv:2004.02147.
21. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
22. Hassanien, M.A.; Singh, V.K.; Puig, D.; Abdel-Nasser, M. Predicting Breast Tumor Malignancy Using Deep ConvNeXt Radiomics and Quality-Based Score Pooling in Ultrasound Sequences. Diagnostics 2022, 12, 1053.
23. Liu, X.; Song, L.; Liu, S.; Zhang, Y. A review of deep-learning-based medical image segmentation methods. Sustainability 2021, 13, 1224.
24. Awan, M.J.; Rahim, M.S.M.; Salim, N.; Rehman, A.; Garcia-Zapirain, B. Automated knee MR images segmentation of anterior cruciate ligament tears. Sensors 2022, 22, 1552.
25. Shamim, S.; Awan, M.J.; Mohd Zain, A.; Naseem, U.; Mohammed, M.A.; Garcia-Zapirain, B. Automatic COVID-19 Lung Infection Segmentation through Modified Unet Model. J. Healthc. Eng. 2022, 2022, 6566982.
26. Meng, Y.; Wei, M.; Gao, D.; Zhao, Y.; Yang, X.; Huang, X.; Zheng, Y. CNN-GCN aggregation enabled boundary regression for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 352–362.
27. Jose, J.M.; Sindagi, V.; Hacihaliloglu, I.; Patel, V.M. KiU-Net: Towards accurate segmentation of biomedical images using over-complete representations. arXiv 2020, arXiv:2006.04878.
28. Singh, V.K.; Abdel-Nasser, M.; Akram, F.; Rashwan, H.A.; Sarker, M.M.K.; Pandey, N.; Romani, S.; Puig, D. Breast tumor segmentation in ultrasound images using contextual-information-aware deep adversarial learning framework. Expert Syst. Appl. 2020, 162, 113870.
29. Yang, X.; Yu, L.; Li, S.; Wen, H.; Luo, D.; Bian, C.; Qin, J.; Ni, D.; Heng, P.A. Towards automated semantic segmentation in prenatal volumetric ultrasound. IEEE Trans. Med. Imaging 2018, 38, 180–193.
30. Mathur, P.; Kakwani, K.; Kudavelly, S.; Ramaraju, G. Deep Learning based Quantification of Ovary and Follicles using 3D Transvaginal Ultrasound in Assisted Reproduction. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 2109–2112.
31. Gupta, S.; Kudavelly, S.R.; Ramaraju, G. Ovarian assessment using deep learning based 3D ultrasound super resolution. In Proceedings of the Medical Imaging 2021: Computer-Aided Diagnosis, International Society for Optics and Photonics, Online, 15–19 February 2021; Volume 11597, p. 115970K.
32. Yang, X.; Li, H.; Wang, Y.; Liang, X.; Chen, C.; Zhou, X.; Zeng, F.; Fang, J.; Frangi, A.; Chen, Z.; et al. Contrastive Rendering with Semi-supervised Learning for Ovary and Follicle Segmentation from 3D Ultrasound. Med. Image Anal. 2021, 73, 102134.
33. Ulicny, M.; Krylov, V.A.; Dahyot, R. Harmonic Convolutional Networks based on Discrete Cosine Transform. arXiv 2020, arXiv:2001.06570.
34. Alom, M.Z.; Yakopcic, C.; Taha, T.M.; Asari, V.K. Nuclei segmentation with recurrent residual convolutional neural networks based U-Net (R2U-Net). In Proceedings of the NAECON 2018-IEEE National Aerospace and Electronics Conference, Dayton, OH, USA, 23–26 July 2018; pp. 228–233.
35. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–11.
36. Gougeon, A.; Lefèvre, B. Evolution of the diameters of the largest healthy and atretic follicles during the human menstrual cycle. Reproduction 1983, 69, 497–502.
37. Sonigo, C.; Jankowski, S.; Yoo, O.; Trassard, O.; Bousquet, N.; Grynberg, M.; Beau, I.; Binart, N. High-throughput ovarian follicle counting by an innovative deep learning approach. Sci. Rep. 2018, 8, 1–9.
38. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. Pytorch: An imperative style, high-performance deep learning library. arXiv 2019, arXiv:1912.01703.
39. Lalande, A.; Garreau, M.; Frouin, F. Evaluation of cardiac structure segmentation in cine magnetic resonance imaging. In Multi-Modality Cardiac Imaging: Processing and Analysis; Wiley: Hoboken, NJ, USA, 2015; pp. 169–215.
Dataset | Subset | Number of Images
---|---|---
TVUS ovary | Train | 466
 | Validation | 166
 | Test | 141
Hyperparameter | Value
---|---
Input image size | 
Pixel value normalization | 0–1
Learning rate | 0.0002
Adam optimizer | β1 = 0.5, β2 = 0.999
Epochs | 50
Batch size | 4
Data augmentation | rotation (±15°) and horizontal flipping
Methods | Accuracy | Dice | IoU | Sensitivity | Specificity |
---|---|---|---|---|---|
U-Net [15] | |||||
Attention U-Net [17] | |||||
R2U-Net [34] | |||||
U-Net++ [35] | |||||
DeepLabv3+ [18] | |||||
Baseline | |||||
HaTU-Net |
Methods | Accuracy | Dice | IoU | Sensitivity | Specificity |
---|---|---|---|---|---|
U-Net [15] | |||||
Attention U-Net [17] | |||||
R2U-Net [34] | |||||
U-Net++ [35] | |||||
DeepLabv3+ [18] | |||||
Baseline | |||||
HaTU-Net |
Dataset | Loss Function | Accuracy | Dice | IoU | Sensitivity | Specificity |
---|---|---|---|---|---|---|
Ovary | BCE | |||||
Dice Loss | ||||||
BCE + Dice | ||||||
BCE + Focal | ||||||
Follicle | BCE | |||||
Dice Loss | ||||||
BCE + Dice | ||||||
BCE + Focal |
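The ablation above compares loss functions including a combined BCE + Dice objective. A common formulation of that combination is sketched below; the equal 0.5/0.5 weighting and the smoothing constant are assumptions, not values taken from the paper:

```python
import torch
import torch.nn.functional as F

def bce_dice_loss(logits, target, smooth=1.0, bce_weight=0.5):
    """Combined BCE + Dice loss for binary segmentation (sketch)."""
    # Pixel-wise binary cross-entropy on raw logits.
    bce = F.binary_cross_entropy_with_logits(logits, target)
    # Soft Dice on sigmoid probabilities, smoothed to avoid division by zero.
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = (2.0 * inter + smooth) / (prob.sum() + target.sum() + smooth)
    return bce_weight * bce + (1.0 - bce_weight) * (1.0 - dice)
```

BCE provides stable per-pixel gradients while the Dice term directly optimizes region overlap, which is why their combination often outperforms either loss alone on class-imbalanced segmentation tasks.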
Methods | U-Net | Attention U-Net | R2U-Net | U-Net++ | DeepLabv3+ | HaTU-Net |
---|---|---|---|---|---|---|
Total No. of Images | 141 | |||||
No. of Real Follicles | 378 | |||||
No. of Detected Follicles | 447 | 438 | 482 | 453 | 488 | 448 |
No. of Correctly Detected Follicles | 315 | 303 | 350 | 339 | 328 | 344 |
Precision (%) | 70.47 | 69.18 | 72.61 | 74.83 | 67.21 | 76.69 |
Recall (%) | 83.33 | 80.16 | 90.59 | 89.68 | 86.77 | 91.01 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Singh, V.K.; Yousef Kalafi, E.; Cheah, E.; Wang, S.; Wang, J.; Ozturk, A.; Li, Q.; Eldar, Y.C.; Samir, A.E.; Kumar, V. HaTU-Net: Harmonic Attention Network for Automated Ovarian Ultrasound Quantification in Assisted Pregnancy. Diagnostics 2022, 12, 3213. https://doi.org/10.3390/diagnostics12123213