Diagnostics
  • Article
  • Open Access

11 May 2023

A Deep Learning Framework for the Prediction and Diagnosis of Ovarian Cancer in Pre- and Post-Menopausal Women

1 Department of Electrical Engineering, Harare Polytechnic College, Causeway Harare P.O. Box CY407, Zimbabwe
2 Department of Electrical, Computer and Telecommunications Engineering, Botswana International University of Science and Technology, Palapye 10071, Botswana
3 Department of Industrial and Mechatronics Engineering, Faculty of Engineering & the Built Environment, University of Zimbabwe, Mt. Pleasant, 630 Churchill Avenue, Harare, Zimbabwe
4 Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
This article belongs to the Special Issue Applications of Artificial Intelligence in Thoracic Imaging

Abstract

Ovarian cancer ranks as the fifth leading cause of cancer-related mortality in women. Late-stage diagnosis (stages III and IV) is a major challenge due to the often vague and inconsistent initial symptoms. Current diagnostic methods, such as biomarkers, biopsy, and imaging tests, face limitations, including subjectivity, inter-observer variability, and extended testing times. This study proposes a novel convolutional neural network (CNN) algorithm for predicting and diagnosing ovarian cancer that addresses these limitations. The CNN was trained on a histopathological image dataset that was augmented and divided into training and validation subsets. The model achieved an accuracy of 94%, with 95.12% of cancerous cases correctly identified and 93.02% of healthy cells accurately classified. The significance of this study lies in overcoming the challenges associated with human expert examination, such as higher misclassification rates, inter-observer variability, and extended analysis times. This study presents a more accurate, efficient, and reliable approach to predicting and diagnosing ovarian cancer. Future research should explore recent advances in this field to further enhance the effectiveness of the proposed method.

1. Introduction

The yearly mortality from ovarian cancer is 151,900, making it one of the deadliest cancers in women globally [1]. According to Miller, it is the fifth leading cause of cancer death in women. The most common kind of gynaecological carcinoma is ovarian cancer arising from epithelial tissue, which accounts for 90% of cases. The five histologic carcinomas are Mucinous Ovarian Cancer (M-OC), High-Grade Serous Ovarian Cancer (H-GS-OC), Low-Grade Serous Ovarian Cancer (L-GS-OC), Clear-Cell Ovarian Cancer (C-COC), and Endometrioid Ovarian Cancer (E-O-C), all with a poor prognosis at an advanced stage [2].
Early identification boosts survival from 3% in Stage IV to about 90% in Stage I [3]. Van Haaften-Day et al. (2001) [4] found that Carbohydrate Antigen 125 (CA125) has been used for over four decades, but its accuracy is not acceptable, as it has not improved survivability. Serum CA125 is elevated in 50% of early-stage tumours, primarily type I ovarian cancers, and in 92% of advanced-stage tumours, primarily type II. Skates et al. [5] found that physiological variables influence normal CA125 serum concentrations and that menopausal status impacts CA125 levels. Sopik et al. [6] found that benign illnesses also raise CA125 levels, causing false positives. Roughly 20% of ovarian tumours do not express CA125, so the anticipated screening sensitivity is around 80%. Moss et al. [7] confirmed that relying on the biomarker alone is misleading. Akinwunmi et al. [8] found increased serum levels in 1% of healthy people and in 5% of patients with benign illnesses, e.g., endometriosis.
HE4 (WFDC2) is overexpressed in endometrioid ovarian cancer and expressed at lower levels in the epithelial tissues of the respiratory system and reproductive organs [9]. Yanaranop et al. [10] found that HE4 had a specificity of 86% versus CA125, and its AUC was 0.893 compared with 0.865 for CA125 [11]. HE4 levels also vary between smokers (30%) and non-smokers (20%) [12], and contraceptives impact HE4 levels: Ferraro et al. [13] found lower HE4 levels in oral contraceptive users (p = 0.008). Biopsy, imaging (US, CT, MRI, PET), and deep learning algorithms such as convolutional neural networks (CNNs) can accurately predict and diagnose (serous) epithelial ovarian cancer.
The challenge is that there is no effective screening method; hence, ovarian cancer is diagnosed when it has already advanced to Stage III or IV. Radiologists manually analyse and interpret medical images of a suspected cancer patient for cancer subtyping and staging. This results in the misclassification of the cancer subtypes, inter-observer variations, subjectivity, and time consumption. To address this, a deep CNN model was developed to predict and diagnose cancers.
Expert pathologists interpret cellular morphology, defining OC categories and directing treatment [14]. Inter-observer variation causes inaccurate, suboptimal treatment, poor prognosis, and reduced quality of life [15]. This shows the need for computational methods that accurately predict the cancer class and diagnose the cancer subtype.
Accurate prediction and diagnosis of cell tumours are vital, as they lead to a proper prognosis and treatment, increasing survivability. The merits of deep learning include:
  • Processing huge data and producing highly accurate predictions, reducing incorrect diagnoses.
  • Permitting early detection of ovarian carcinoma, increasing treatment success.
  • Permitting personalised treatment. Deep learning can predict how treatments affect different women, enabling personalised, efficient care.
Deep learning can improve patient outcomes and reduce mortality through early detection and personalised treatment. The algorithm could predict and diagnose an image in under 5 s with an F1-score of 0.94. The remainder of the paper is organised as follows: related work, materials and methods, results and discussion, and, finally, the conclusion.

3. Materials and Methods

This section discusses the methodology used in the research, including the dataset preparation and the proposed Convolutional Neural Network (CNN) architecture. The dataset used in this project consists of 200 images: 100 images of serous ovarian cancer and 100 non-cancerous samples. The original dataset was obtained from The Cancer Genome Atlas (TCGA) repository and was augmented to 11,040 images across both classes so that it could be used effectively in the deep learning architecture (Kasture et al., 2021) [42]. This dataset was split into 80% for training and 20% for validation. The proposed CNN architecture is shown in Figure 5.
Figure 5. The proposed architecture of the model.
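The paper does not publish its data-pipeline code, but the preparation described above is straightforward to reproduce. The following is a minimal sketch, assuming a TensorFlow/Keras workflow; the directory layout, image size, and batch size are assumptions, and on-the-fly augmentation is used here in place of the offline augmentation to 11,040 images.

```python
# Minimal sketch of the 80/20 split and augmentation described above.
# Assumptions: a "dataset/" folder with one subfolder per class
# ("serous", "healthy"), 224x224 inputs, batch size 32.
import tensorflow as tf

IMG_SIZE = (224, 224)
BATCH = 32

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=BATCH)

# Simple geometric augmentations to expand the small source set.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])
train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```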
The preprocessed dataset was then fed to the convolution operation. Preprocessing consisted of eliminating erroneous images, such as badly encoded files and files lacking the JPEG extension, and cleaning the data to remove distorted images. Convolution is a specialised linear operation that extracts features by applying a feature detector, also called a kernel, across the entire input, which is represented as an array of numbers called a tensor. The element-wise product between each kernel element and the corresponding region of the input tensor is computed and summed to obtain the output value at the corresponding position of the output tensor, called the feature map. With striding and padding included, the formula for determining the output image dimension is as follows:
$$\text{output image dimension} = \left\lfloor \frac{n + 2p - f}{s} + 1 \right\rfloor \times \left\lfloor \frac{n + 2p - f}{s} + 1 \right\rfloor$$
where
  • n = Number of input pixels
  • f = Number of pixels of filter
  • p = Padding
  • s = Stride
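As a quick check of the formula, the small helper below computes the output side length for a square input; the example values are purely illustrative.

```python
# Output side length of a convolution on an n-by-n input,
# per the formula above (floor division handles non-integer cases).
def conv_output_dim(n: int, f: int, p: int = 0, s: int = 1) -> int:
    return (n + 2 * p - f) // s + 1

print(conv_output_dim(224, 3, p=1, s=1))  # 224: padding 1 preserves the size
print(conv_output_dim(224, 3, p=0, s=2))  # 111: stride 2 roughly halves it
```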
To mathematically mimic the real-world behaviour of neurons, the result of the convolution was passed to the non-linear activation function ReLU, defined as $f(x) = \max(0, x)$. ReLU was chosen because it does not engage all of the neurons at the same time; hence, during backpropagation, only the activated neurons are updated. Next, the data was passed to the pooling layer, which performs standard downsampling: it reduces the size of the feature map, introduces invariance to tiny shifts and distortions, and reduces the number of subsequent learnable parameters. This research used max pooling. The data was then fed into the flattening layer and converted into a one-dimensional array (vector) of numbers for input into the fully connected layer. In the fully connected layer of the feedforward network, every node in the previous layer is linked to the nodes of the next layer by a learnable weight, mapping the features from the pooling layers to the network’s outputs. Lastly, the result was fed to the output layer, which used SoftMax for classification. The loss function used was cross-entropy, given by the following formula for binary classification:
$$l = -\left[\, y \log(p) + (1 - y)\log(1 - p) \,\right]$$
where:
  • l = Loss function
  • p = The predicted probability
  • y = 0 or 1 in binary classification
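To make the layer sequence concrete, here is a minimal Keras sketch of the pipeline described above (convolution, ReLU, max pooling, flattening, fully connected layer, SoftMax output, cross-entropy loss). The filter counts, kernel sizes, and input shape are illustrative assumptions, not the values from Table 2.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative stack mirroring the description above; hyperparameters assumed.
model = keras.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),  # convolution + ReLU
    layers.MaxPooling2D(2),                   # max pooling downsamples the feature map
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),                         # one-dimensional vector for the dense layer
    layers.Dense(128, activation="relu"),     # fully connected layer
    layers.Dense(2, activation="softmax"),    # SoftMax over the two classes
])

# Cross-entropy loss, matching the binary formulation above.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```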
In summary, this section has discussed the proposed CNN architecture and the dataset preparation for ovarian cancer classification. The algorithm used to train the augmented image dataset has been summarized, and Table 2 lists the hyperparameters used in the CNN architecture.
Table 2. Hyperparameters and Configuration Settings of CNN for Image Classification.

4. Results

The model was fed with 11,040 images: half healthy cells and half cells infected with the serous cancer subtype. The number of epochs was selected as the hyperparameter to tune and was incremented in tens. The training and validation accuracies were recorded.
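A sketch of this epoch sweep, continuing the earlier Keras snippets (model, train_ds, and val_ds as defined there); re-initialising the weights before each run keeps the comparisons independent.

```python
from tensorflow import keras

for epochs in (10, 20, 30, 40, 50):
    m = keras.models.clone_model(model)  # fresh, re-initialised weights per run
    m.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    hist = m.fit(train_ds, validation_data=val_ds, epochs=epochs, verbose=0)
    print(f"{epochs:>2} epochs: "
          f"train acc {hist.history['accuracy'][-1]:.4f}, "
          f"val acc {hist.history['val_accuracy'][-1]:.4f}")
```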
Table 3 shows a steady increase in the training and validation accuracy as the number of epochs increased from 10 to 50, reaching a training accuracy of 99.52% and a validation accuracy of 99.91% after 50 epochs.
Table 3. The number of epochs and the corresponding training and validation accuracies.
After the training process, testing was performed by uploading an image from the test dataset; the algorithm would return the probability of it being either serous or healthy cells. This result demonstrates the strength of the Xception network, which achieves high training and validation accuracy within a small number of epochs. Unlike conventional generic convolutional neural networks, the model uses depthwise separable convolutions, reducing the number of connections and making the model lighter. As a result, excellent accuracy can be achieved with just 10 epochs (99.03% training accuracy and 94.43% validation accuracy).
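The parameter saving from depthwise separable convolution can be seen directly in Keras; the sketch below compares a standard convolution with its separable counterpart on an assumed 32-channel feature map.

```python
import tensorflow as tf
from tensorflow.keras import layers

inp = tf.keras.Input(shape=(56, 56, 32))  # assumed feature-map shape
standard = layers.Conv2D(64, 3, use_bias=False)(inp)
separable = layers.SeparableConv2D(64, 3, use_bias=False)(inp)

# Standard: 3*3*32*64 = 18,432 weights.
# Separable: 3*3*32 (depthwise) + 32*64 (1x1 pointwise) = 2,336 weights.
print(tf.keras.Model(inp, standard).count_params())   # 18432
print(tf.keras.Model(inp, separable).count_params())  # 2336
```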
The training and validation accuracy rose as shown in Figure 6: from 98.56% and 99.73% at 10 epochs to 99.52% and 99.91% at 50 epochs, respectively.
Figure 6. Model Accuracy for both training and validation after 50 epochs.
The loss (shown in Figure 7) for training and validation stood at 0.0445 and 0.0083 after 10 epochs, and at 0.0147 and 0.0020 after 50 epochs, respectively.
Figure 7. Model loss for both training and validation after 50 epochs.
Figure 8 shows that 93.02% of the images were correctly classified as healthy and 95.12% were correctly classified as serous cells. From the confusion matrix, valuable performance metrics can be derived, such as:
$$\text{Sensitivity} = \frac{TP}{TP + FN} = \frac{93.02}{93.02 + 4.88} = 0.9502$$
Figure 8. Confusion Matrix of the proposed model.
The sensitivity of the model is 0.9502, which represents the model’s ability to correctly predict and classify healthy cell images as healthy when the class label is healthy.
$$\text{Specificity} = \frac{TN}{TN + FP} = \frac{95.12}{95.12 + 6.98} = 0.9316$$
The specificity of the model is 0.9316, which represents its ability to correctly predict and classify infected cells as infected when the class label is serous.
$$\text{Recall} = \frac{TP}{TP + FN} = 0.9502$$
Recall measures how well the model detects positive events, here the detection of the healthy image class.
$$\text{Precision} = \frac{TP}{TP + FP} = \frac{93.02}{93.02 + 6.98} = 0.9302$$
Precision measures how well the model has assigned the positive events to the positive class.
$$\text{F-measure} = \frac{2 \times \text{Recall} \times \text{Precision}}{\text{Recall} + \text{Precision}} = \frac{2 \times 0.9502 \times 0.9302}{0.9502 + 0.9302} = 0.94$$
The F-score is the harmonic mean of the model’s precision and recall, and it measures the model’s accuracy on the dataset of healthy and serous images.
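The metrics above can be reproduced directly from the confusion-matrix percentages, following the TP/TN/FP/FN assignment used in the equations (healthy as the positive class):

```python
# Percentages from Figure 8, assigned as in the equations above.
tp, fn = 93.02, 4.88   # healthy correctly / incorrectly identified
tn, fp = 95.12, 6.98   # serous correctly / incorrectly identified

sensitivity = tp / (tp + fn)   # 0.9502 (also the recall)
specificity = tn / (tn + fp)   # 0.9316
precision = tp / (tp + fp)     # 0.9302
f1 = 2 * sensitivity * precision / (sensitivity + precision)  # 0.94
print(f"{sensitivity:.4f} {specificity:.4f} {precision:.4f} {f1:.2f}")
```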
Table 4 shows that the precision for the healthy class is higher than that for the serous class by 0.02, while the recall for both is equal at 0.93. The F1-score is 0.94 for both classes. The classification report summarizes the performance of the Xception network on the medical images across the entire dataset.
Table 4. Classification Report for Health and Serous Classifications.
Table 5 shows the accuracy scores for different classification models: deep hybrid learning, a convolutional neural network, GoogleNet (v3), and linear discriminant analysis. Among the comparison models, GoogleNet v3 achieved the highest accuracy at 0.925, and linear discriminant analysis the lowest at 0.666. The accuracy of GoogleNet v3 can be attributed to transfer learning, whereby the features are input to a pre-trained architecture whose hyperparameters have already been tuned. The proposed method nevertheless outperformed those of the other researchers; its classification accuracy of 0.94 can be attributed to the efficient use of parameters and to batch normalization after every convolution, which helps combat overfitting.
Table 5. Comparison of Accuracy scores for different models.
Table 5 also shows that a “Deep Hybrid Learning” model achieved an accuracy score of 0.907. This model combines deep learning with traditional machine learning techniques, such as decision trees or linear regression, to improve performance. The specific structure of the network used in that study is not provided, but it likely involves multiple layers of deep neural networks and incorporates techniques such as batch normalization to combat overfitting.

5. Discussion

In this study, we aimed to develop a deep-learning model to classify healthy and serous ovarian cancer cells accurately. Our model achieved a high accuracy of 94.43%, a sensitivity of 0.9502, and a specificity of 0.9316, demonstrating its potential to assist physicians in diagnosing and predicting ovarian cancer prognosis, ultimately improving patient outcomes.
Our findings build upon previous research in the field, with our model surpassing the performance of other models reported in the literature. The superior performance can be attributed to the depthwise separable convolutions, which reduce model complexity, the absence of activation functions in the intermediate layers, and the residual connections. These design choices enabled our model to achieve higher accuracy than the 80% reported by El-Nabawy et al. [25], the 66.66–87.5% by Sawyer et al. [29], and the 83.6% by Lu et al. [32].
However, the study has some limitations. A public TCGA dataset may not fully represent local patient populations, and the retrospective design does not account for disease progression. Additionally, the lack of clinical data, such as patient genetics, may affect model robustness. To address these issues, future research should involve partnerships with hospitals and research centers to validate the model on real-world data and incorporate clinical datasets.
Alternative explanations for the findings should also be considered. Model interpretation techniques could provide insights into the underlying factors driving the model’s predictions, and the model’s design can be further refined to improve performance.
The practical implications of our findings include the potential to support ovarian cancer diagnosis and monitoring in clinical settings. Our model also offers recommendations for future research, such as addressing limitations, improving model interpretability, and validating the model on diverse populations.

6. Conclusions

In summary, this study explored the potential of a deep learning model for predicting and diagnosing ovarian cancer. Our model achieved an accuracy of 94%, highlighting its promise for early prediction and diagnosis. The study contributes to the existing literature by demonstrating the potential of deep learning in preclinical cancer screening and personalized management of ovarian cancer.
The broader implications of our research include the potential impact on related fields, such as radiology, and real-world applications in ovarian cancer diagnosis and treatment optimization. Despite some limitations, our study offers suggestions for future research to build upon our findings, such as improving data quality and quantity, evaluating models on new independent datasets, and enhancing generalizability through diverse population studies.
Overall, this research emphasizes the potential of advanced computing techniques in the timely detection of ovarian cancer. By addressing current challenges and refining these models, we can develop more reliable and broadly applicable tools that can be adopted for practical medical purposes, ultimately improving cancer diagnosis, outcome prediction, and personalized medical care.

Author Contributions

Conceptualization, Q.A., M.A. (Mubarak Albathan) and A.H.; Data curation, A.Y., Q.A., M.B. and A.H.; Formal analysis, A.Y., T.M., M.U.T., M.B., M.A. (Muhammad Asim) and A.H.; Funding acquisition, M.A. (Mubarak Albathan); Investigation, B.Z., A.Y., T.M., M.U.T., M.A. (Mubarak Albathan), S.J. and A.H.; Methodology, B.Z., A.Y., T.M., M.U.T., S.J., M.B. and M.A. (Muhammad Asim); Project administration, M.A. (Mubarak Albathan); Resources, B.Z., T.M., M.U.T., S.J., M.B. and M.A. (Muhammad Asim); Software, B.Z., A.Y., T.M., M.U.T., Q.A., M.A. (Mubarak Albathan), M.B. and A.H.; Supervision, M.A. (Muhammad Asim); Validation, T.M., M.U.T., M.A. (Mubarak Albathan) and S.J.; Visualization, Q.A., M.A. (Mubarak Albathan) and M.A. (Muhammad Asim); Writing—original draft, B.Z., Q.A., S.J., M.B. and A.H.; Writing—review and editing, Q.A. and M.A. (Muhammad Asim). All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) for funding and supporting this work through Research Partnership Program no. RP-21-07-11.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Dataset will be available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Torre, L.A.; Bray, F.; Siegel, R.L.; Ferlay, J.; Lortet-Tieulent, J.; Jemal, A. Global cancer statistics, 2012. CA Cancer J. Clin. 2015, 65, 87–108. [Google Scholar] [CrossRef] [PubMed]
  2. Reid, B.M.; Permuth, J.B.; Sellers, T.A. Epidemiology of ovarian cancer: A review. Cancer Biol. Med. 2017, 14, 9–32. [Google Scholar] [CrossRef] [PubMed]
  3. Vázquez, M.A.; Mariño, I.P.; Blyuss, O.; Ryan, A.; Gentry-Maharaj, A.; Kalsi, J.; Manchanda, R.; Jacobs, I.; Menon, U.; Zaikin, A. A quantitative performance study of two automatic methods for the diagnosis of ovarian cancer. Biomed. Signal Process. Control. 2018, 46, 86–93. [Google Scholar] [CrossRef] [PubMed]
  4. Van Haaften-Day, C.; Shen, Y.; Xu, F.; Yu, Y.; Berchuck, A.; Havrilesky, L.J.; De Bruijn, H.W.A.; Van Der Zee, A.G.J.; Bast, R.C.; Hacker, N.F. OVX1, Macrophage-Colony Stimulating Factor, and CA-125-II as Tumor Markers for Epithelial Ovarian Carcinoma: A Critical Appraisal. Cancer 2001, 92, 2837–2844. [Google Scholar] [CrossRef]
  5. Skates, S.J.; Mai, P.; Horick, N.K.; Piedmonte, M.; Drescher, C.W.; Isaacs, C.; Armstrong, D.K.; Buys, S.S.; Rodriguez, G.C.; Horowitz, I.R.; et al. Large Prospective Study of Ovarian Cancer Screening in High-Risk Women: CA125 Cut-Point Defined by Menopausal Status. Cancer Prev. Res. 2011, 4, 1401–1408. [Google Scholar] [CrossRef]
  6. Sopik, V.; Rosen, B.; Giannakeas, V.; Narod, S.A. Why have ovarian cancer mortality rates declined? Part III. Prospects for the future. Gynecol. Oncol. 2015, 138, 757–761. [Google Scholar] [CrossRef]
  7. Moss, E.L.; Hollingworth, J.; Reynolds, T.M. The role of CA125 in clinical practice. J. Clin. Pathol. 2005, 58, 308–312. [Google Scholar] [CrossRef]
  8. Akinwunmi, B.O.; Babic, A.; Vitonis, A.F.; Cramer, D.W.; Titus, L.; Tworoger, S.S.; Terry, K.L. Chronic Medical Conditions and CA125 Levels among Women without Ovarian Cancer. Cancer Epidemiol. Biomark. Prev. 2018, 27, 1483–1490. [Google Scholar] [CrossRef]
  9. Drapkin, R.; von Horsten, H.H.; Lin, Y.; Mok, S.C.; Crum, C.P.; Welch, W.R.; Hecht, J.L. Human Epididymis Protein 4 (HE4) Is a Secreted Glycoprotein that Is Overexpressed by Serous and Endometrioid Ovarian Carcinomas. Cancer Res. 2005, 65, 2162–2169. [Google Scholar] [CrossRef]
  10. Yanaranop, M.; Anakrat, V.; Siricharoenthai, S.; Nakrangsee, S.; Thinkhamrop, B. Is the Risk of Ovarian Malignancy Algorithm Better Than Other Tests for Predicting Ovarian Malignancy in Women with Pelvic Masses? Gynecol. Obstet. Investig. 2017, 82, 47–53. [Google Scholar] [CrossRef]
  11. Wu, C.; Wang, Y.; Wang, F. Deep Learning for Ovarian Tumor Classification with Ultrasound Images. In Advances in Multimedia Information Processing–PCM 2018: 19th Pacific-Rim Conference on Multimedia, Hefei, China, 21–22 September 2018; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 395–406. [Google Scholar] [CrossRef]
  12. Bolstad, N.; Øijordsbakken, M.; Nustad, K.; Bjerner, J. Human epididymis protein 4 reference limits and natural variation in a Nordic reference population. Tumor Biol. 2011, 33, 141–148. [Google Scholar] [CrossRef]
  13. Ferraro, S.; Schiumarini, D.; Panteghini, M. Human epididymis protein 4: Factors of variation. Clin. Chim. Acta 2015, 438, 171–177. [Google Scholar] [CrossRef] [PubMed]
  14. Jayson, G.C.; Kohn, E.C.; Kitchener, H.C.; Ledermann, J.A. Ovarian cancer. Lancet 2014, 384, 1376–1388. [Google Scholar] [CrossRef] [PubMed]
  15. Kommoss, S.; Pfisterer, J.; Reuss, A.; Diebold, J.; Hauptmann, S.; Schmidt, C.; Du Bois, A.; Schmidt, D.; Kommoss, F. Specialized Pathology Review in Patients with Ovarian Cancer. Int. J. Gynecol. Cancer 2013, 23, 1376–1382. [Google Scholar] [CrossRef]
  16. Yoshida-Court, K.; Karpinets, T.V.; Mitra, A.; Solley, T.N.; Dorta-Estremera, S.; Sims, T.T.; Delgado Medrano, A.Y.; El Alam, M.B.; Ahmed-Kaddar, M.; Lynn, E.J.; et al. Immune environment and antigen specificity of the T cell receptor repertoire of malignant ascites in ovarian cancer. PLoS ONE 2023, 18, e0279590. [Google Scholar] [CrossRef] [PubMed]
  17. de Leon, A.; Perera, R.; Nittayacharn, P.; Cooley, M.; Jung, O.; Exner, A.A. Ultrasound Contrast Agents and Delivery Systems in Cancer Detection and Therapy. Adv. Cancer Res. 2018, 139, 57–84. [Google Scholar] [CrossRef] [PubMed]
  18. Lusk, J.F.; Miranda, C.; Howell, M.; Chrest, M.; Eshima, J.; Smith, B.S. Photoacoustic Flow System for the Detection of Ovarian Circulating Tumor Cells Utilizing Copper Sulfide Nanoparticles. ACS Biomater. Sci. Eng. 2019, 5, 1553–1560. [Google Scholar] [CrossRef]
  19. Danaee, P.; Ghaeini, R.; Hendrix, D.A. A deep learning approach for cancer detection and relevant gene identification. In Proceedings of the 22nd Pacific Symposium on Biocomputing (PSB), Kohala Coast, HI, USA, 4–8 January 2017; pp. 219–229. [Google Scholar]
  20. Chen, S.-J.; Chang, C.-Y.; Chang, K.-Y.; Tzeng, J.-E.; Chen, Y.-T.; Lin, C.-W.; Hsu, W.-C.; Wei, C.-K. Classification of the Thyroid Nodules Based on Characteristic Sonographic Textural Feature and Correlated Histopathology Using Hierarchical Support Vector Machines. Ultrasound Med. Biol. 2010, 36, 2018–2026. [Google Scholar] [CrossRef]
  21. Acharya, U.R.; Sree, S.V.; Swapna, G.; Gupta, S.; Molinari, F.; Garberoglio, R.; Witkowska, A.; Suri, J.S. Effect of complex wavelet transform filter on thyroid tumor classification in three-dimensional ultrasound. Proc. Inst. Mech. Eng. Part H J. Eng. Med. 2013, 227, 284–292. [Google Scholar] [CrossRef]
  22. Katsigiannis, S.; Keramidas, E.G.; Maroulis, D. A Contourlet Transform Feature Extraction Scheme for Ultrasound Thyroid Texture Classification. Int. J. Eng. Intell. Syst. Electr. Eng. Commun. 2010, 18, 171. [Google Scholar]
  23. Chang, C.-Y.; Chen, S.-J.; Tsai, M.-F. Application of support-vector-machine-based method for feature selection and classification of thyroid nodules in ultrasound images. Pattern Recognit. 2010, 43, 3494–3506. [Google Scholar] [CrossRef]
  24. Chang, C.-Y.; Liu, H.-Y.; Tseng, C.-H.; Shih, S.-R. Computer-aided diagnosis for thyroid graves’ disease in ultrasound images. Biomed. Eng. Appl. Basis Commun. 2012, 22, 91–99. [Google Scholar] [CrossRef]
  25. El-Nabawy, A.; El-Bendary, N.; Belal, N.A. Epithelial Ovarian Cancer Stage Subtype Classification using Clinical and Gene Expression Integrative Approach. Procedia Comput. Sci. 2018, 131, 23–30. [Google Scholar] [CrossRef]
  26. Wang, G.; Sun, Y.; Jiang, S.; Wu, G.; Liao, W.; Chen, Y.; Lin, Z.; Liu, Z.; Zhuo, S. Machine learning-based rapid diagnosis of human borderline ovarian cancer on second-harmonic generation images. Biomed. Opt. Express 2021, 12, 5658–5669. [Google Scholar] [CrossRef] [PubMed]
  27. Ma, S.; Sigal, L.; Sclaroff, S. Learning Activity Progression in LSTMs for Activity Detection and Early Detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1942–1950. [Google Scholar] [CrossRef]
  28. Aliamiri, A.; Shen, Y. Deep learning based atrial fibrillation detection using wearable photoplethysmography sensor. In Proceedings of the 2018 IEEE EMBS International Conference on Biomedical Health Informatics (BHI), Las Vegas, NV, USA, 4–7 March 2018; pp. 442–445. [Google Scholar]
  29. Sawyer, T.W.; Koevary, J.W.; Rice, F.P.S.; Howard, C.C.; Austin, O.J.; Connolly, D.C.; Cai, K.Q.; Barton, J.K. Quantification of multiphoton and fluorescence images of reproductive tissues from a mouse ovarian cancer model shows promise for early disease detection. J. Biomed. Opt. 2021, 24, 096010. [Google Scholar] [CrossRef]
  30. Liang, Q.; Wendelhag, I.; Wikstrand, J.; Gustavsson, T. A multiscale dynamic programming procedure for boundary detection in ultrasonic artery images. IEEE Trans. Med. Imaging 2000, 19, 127–142. [Google Scholar] [CrossRef]
  31. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  32. Lu, M.Y.; Chen, T.Y.; Williamson, D.F.K.; Zhao, M.; Shady, M.; Lipkova, J.; Mahmood, F. AI-based pathology predicts origins for cancers of unknown primary. Nature 2021, 594, 106–110. [Google Scholar] [CrossRef]
  33. Booma, P.M.; Thiruchelvam, V.; Ting, J.; Ho, S. Max pooling technique to detect and classify medical image for max pooling technique to detect and classify medical image for ovarian cancer diagnosis. Test Eng. Manag. J. 2020, 82, 8423–8442. [Google Scholar]
  34. Wen, B.; Campbell, K.R.; Tilbury, K.; Nadiarnykh, O.; Brewer, M.A.; Patankar, M.; Singh, V.; Eliceiri, K.W.; Campagnola, P.J. 3D texture analysis for classification of second harmonic generation images of human ovarian cancer. Sci. Rep. 2016, 6, 35734. [Google Scholar] [CrossRef]
  35. Huttunen, M.J.; Hassan, A.; McCloskey, C.W.; Fasih, S.; Upham, J.; Vanderhyden, B.C.; Boyd, R.W.; Murugkar, S. Automated classification of multiphoton microscopy images of ovarian tissue using deep learning. J. Biomed. Opt. 2018, 23, 66002. [Google Scholar] [CrossRef] [PubMed]
  36. Wang, C.-W.; Lee, Y.-C.; Chang, C.-C.; Lin, Y.-J.; Liou, Y.-A.; Hsu, P.-C.; Chang, C.-C.; Sai, A.-K.-O.; Wang, C.-H.; Chao, T.-K. A Weakly Supervised Deep Learning Method for Guiding Ovarian Cancer Treatment and Identifying an Effective Biomarker. Cancers 2022, 14, 1651. [Google Scholar] [CrossRef] [PubMed]
  37. Yu, K.-H.; Hu, V.; Wang, F.; Matulonis, U.A.; Mutter, G.L.; Golden, J.A.; Kohane, I.S. Deciphering serous ovarian carcinoma histopathology and platinum response by convolutional neural networks. BMC Med. 2020, 18, 236. [Google Scholar] [CrossRef] [PubMed]
  38. Sengupta, D.; Ali, S.N.; Bhattacharya, A.; Mustafi, J.; Mukhopadhyay, A.; Sengupta, K. Nuclear Morphology Optimized Deep Hybrid Learning (NUMODRIL): A novel architecture for accurate diagnosis/prognosis of Ovarian Cancer. bioRxiv 2020. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Gong, C.; Zheng, L.; Li, X.; Yang, X. Deep Learning for Intelligent Recognition and Prediction of Endometrial Cancer. J. Health Eng. 2021, 2021, 1148309. [Google Scholar] [CrossRef] [PubMed]
  40. Liao, Q.; Ding, Y.; Jiang, Z.L.; Wang, X.; Zhang, C.; Zhang, Q. Multi-task deep convolutional neural network for cancer diagnosis. Neurocomputing 2019, 348, 66–73. [Google Scholar] [CrossRef]
  41. Guo, L.-Y.; Wu, A.-H.; Wang, Y.-X.; Zhang, L.-P.; Chai, H.; Liang, X.-F. Deep learning-based ovarian cancer subtypes identification using multi-omics data. BioData Min. 2020, 13, 10. [Google Scholar] [CrossRef]
  42. Kasture, K.R.; Shah, D.D.; Matte, P.N. A New Deep Learning Method for Automatic Ovarian Cancer Prediction & Subtype Classification. Turk. J. Comput. Math. Educ. 2021, 12, 1233–1242. [Google Scholar]
  43. Kavitha, S.; Vidyaathulasiraman. Identification and classification of early stage Ovarian cancer using convolutional neural network. Ilkogr. Online-Elem. Educ. Online 2021, 20, 1908–1924. [Google Scholar] [CrossRef]
  44. Ghoniem, R.M.; Algarni, A.D.; Refky, B.; Ewees, A.A. Multi-Modal Evolutionary Deep Learning Model for Ovarian Cancer Diagnosis. Symmetry 2021, 13, 643. [Google Scholar] [CrossRef]
  45. Xiao, Y.; Bi, M.; Guo, H.; Li, M. Multi-omics approaches for biomarker discovery in early ovarian cancer diagnosis. Ebiomedicine 2022, 79, 104001. [Google Scholar] [CrossRef] [PubMed]
  46. Woman Ovarian Cancer Figure. Available online: https://ars.els-cdn.com/content/image/1-s2.0-S2352396422001852-gr1_lrg.jpg (accessed on 13 April 2023).
  47. Machine Learning Technology for Biomarker Development Figure. Available online: https://ars.els-cdn.com/content/image/1-s2.0-S2352396422001852-gr2.jpg (accessed on 13 April 2023).
  48. Arezzo, F.; Loizzi, V.; La Forgia, D.; Moschetta, M.; Tagliafico, A.S.; Cataldo, V.; Kawosha, A.A.; Venerito, V.; Cazzato, G.; Ingravallo, G.; et al. Radiomics Analysis in Ovarian Cancer: A Narrative Review. Appl. Sci. 2021, 11, 7833. [Google Scholar] [CrossRef]
  49. Arezzo, F.; Cormio, G.; La Forgia, D.; Kawosha, A.A.; Mongelli, M.; Putino, C.; Silvestris, E.; Oreste, D.; Lombardi, C.; Cazzato, G.; et al. The Application of Sonovaginography for Implementing Ultrasound Assessment of Endometriosis and Other Gynaecological Diseases. Diagnostics 2022, 12, 820. [Google Scholar] [CrossRef]
  50. Reilly, G.P.; Dunton, C.J.; Bullock, R.G.; Ure, D.R.; Fritsche, H.; Ghosh, S.; Pappas, T.C.; Phan, R.T. Validation of a deep neural network-based algorithm supporting clinical management of adnexal mass. Front. Med. 2023, 10, 1–11. [Google Scholar] [CrossRef] [PubMed]
  51. Elyan, E.; Vuttipittayamongkol, P.; Johnston, P.; Martin, K.; McPherson, K.; Moreno-García, C.F.; Jayne, C.; Sarker, M.K. Computer vision and machine learning for medical image analysis: Recent advances, challenges, and way forward. Artif. Intell. Surg. 2022, 2, 24–45. [Google Scholar] [CrossRef]
  52. Gumbs, A.A.; Frigerio, I.; Spolverato, G.; Croner, R.; Illanes, A.; Chouillard, E.; Elyan, E. Artificial Intelligence Surgery: How Do We Get to Autonomous Actions in Surgery? Sensors 2021, 21, 5526. [Google Scholar] [CrossRef]
  53. Sarvamangala, D.R.; Kulkarni, R.V. Convolutional neural networks in medical image understanding: A survey. Evol. Intell. 2022, 15, 1–22. [Google Scholar] [CrossRef]