Proceeding Paper

Deep Image Segmentation for Breast Keypoint Detection †

Tiago Gonçalves 1,*, Wilson Silva 1, Maria J. Cardoso 2 and Jaime S. Cardoso 1
1 INESC TEC and Faculdade de Engenharia, Universidade do Porto, 4200-465 Porto, Portugal
2 Champalimaud Foundation and Nova Medical School, 1400-038 Lisboa, Portugal
* Author to whom correspondence should be addressed.
† Presented at the 3rd XoveTIC Conference, A Coruña, Spain, 8–9 October 2020.
Proceedings 2020, 54(1), 35; https://doi.org/10.3390/proceedings2020054035
Published: 21 August 2020
(This article belongs to the Proceedings of 3rd XoveTIC Conference)

Abstract

The main aim of breast cancer conservative treatment is the optimisation of the aesthetic outcome and, implicitly, of women’s quality of life, without jeopardising local cancer control and overall survival. Moreover, there has been an effort to define an optimal tool capable of performing the aesthetic evaluation of breast cancer conservative treatment outcomes. Recently, a deep learning algorithm has been proposed that integrates the learning of keypoint probability maps into the loss function as a regularisation term for the robust learning of keypoint localisation. However, it achieves its best results when used in cooperation with a shortest-path algorithm that models images as graphs. In this work, we analysed a novel algorithm, based on the interaction of deep image segmentation and deep keypoint detection models, capable of improving both the state-of-the-art performance and the execution time on the breast keypoint detection task.

1. Introduction

Breast cancer is a highly mutable and rapidly evolving disease; however, thanks to the generalised use of breast cancer screening and better treatments, approximately 90% of cases can be cured [1,2]. Therefore, it is now possible to employ breast cancer conservative treatment (BCCT) approaches, which only require the removal of the cancerous tissue with a rim of healthy tissue, instead of radical mastectomy-based approaches, which require the removal of the entire breast and posterior breast reconstruction [3]. In both cases, it is possible to obtain good cosmetic results and, consequently, improve patients’ quality of life. The objective assessment of the cosmetic result, which acts as a reliable proxy for the quality of the treatment and as valuable input for the improvement of current techniques, is performed through the analysis of digital photographs, from which several features are extracted and used to train a classification algorithm [4]. To facilitate the extraction of such features, the annotation of several breast keypoints is required. Several semi-automatic methods to perform this keypoint annotation are already available; however, they still require input from the user and are time-consuming and computationally demanding. Recently, Silva et al. showed that deep learning algorithms may be part of the answer. They introduced an algorithm based on deep neural networks (DNNs) that receives an image as input and returns the coordinates of the breast keypoints as output [5]; these coordinates are then given to a shortest-path algorithm that models images as graphs to refine the breast keypoint localisation. Although this approach increased the performance on the breast keypoint annotation task, it is still computationally complex. To overcome this issue, we proposed a novel deep keypoint detection algorithm that combines the approach by Silva et al. with a deep image segmentation model, refining the breast keypoint localisation in less time and with improved precision [6].

2. Materials and Methods

Our approach combines the DNN proposed by Silva et al. and a deep image segmentation model, U-Net++ [7], in a pipeline (see Figure 1). The intuition behind this approach is that it is easier to detect breast contours if one is capable of detecting the breasts first. We started by training the U-Net++ and the DNN by Silva et al. on the breast detection (i.e., breast mask generation) and keypoint detection tasks, respectively. The complete pipeline works as follows: the image is given as input to U-Net++, which returns a breast segmentation mask; from this mask, contours are extracted using the marching squares algorithm [8]; the image is given as input to the DNN by Silva et al., which returns a set of breast keypoints; finally, the refined localisation of the breast keypoints is obtained by projecting this set onto the breast segmentation mask contours, minimising the Euclidean distance between each breast keypoint and the contour (processing step). For comparison purposes, we also trained both the DNN and the complete keypoint detection method proposed by Silva et al. All experiments were performed using 5-fold cross-validation. In addition to studying the algorithms’ performance, we also studied their execution time, in seconds, given our interest in deploying these algorithms in a web application for both the research and medical communities.
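
To make the processing step concrete, the following is a minimal sketch of the projection of predicted keypoints onto the segmentation contour. It is an illustration under our own assumptions, not the authors’ code: the function and argument names are hypothetical, and we assume scikit-image’s find_contours as the marching squares implementation (nipple keypoints, which do not lie on the breast contour, would be left unchanged).

    import numpy as np
    from skimage import measure  # scikit-image provides a marching squares implementation

    def refine_keypoints(breast_mask, keypoints):
        """Project DNN-predicted contour keypoints onto the breast segmentation contour.

        breast_mask: (H, W) binary mask predicted by the segmentation model (e.g., U-Net++).
        keypoints:   (N, 2) array of (row, col) coordinates predicted by the keypoint DNN.
        Returns an (N, 2) array of refined keypoint coordinates.
        """
        # Marching squares: extract the iso-contours of the mask at level 0.5.
        contours = measure.find_contours(breast_mask.astype(float), level=0.5)
        contour_points = np.concatenate(contours, axis=0)  # all contour points, shape (M, 2)

        keypoints = np.asarray(keypoints, dtype=float)
        refined = np.empty_like(keypoints)
        for i, keypoint in enumerate(keypoints):
            # The projection is the contour point that minimises the Euclidean distance.
            distances = np.linalg.norm(contour_points - keypoint, axis=1)
            refined[i] = contour_points[np.argmin(distances)]
        return refined

Each keypoint is projected independently, so the cost of this step grows only linearly with the number of keypoints and contour points, which helps explain why it is so much cheaper than a graph-based shortest-path refinement.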

3. Results

Table 1 presents the average error distance (measured in pixels) and the average execution time (measured in seconds) of each model’s inference on the test set. Figure 2 shows a visual example of the obtained results.
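
The error statistics in Table 1 can be read as per-keypoint Euclidean distances, in pixels, between predicted and ground-truth keypoints, summarised by their mean, standard deviation (STD) and maximum (Max). A minimal sketch of such a computation follows (a hypothetical helper, not the authors’ evaluation code):

    import numpy as np

    def error_statistics(predicted, ground_truth):
        """Mean/STD/Max of the per-keypoint Euclidean error distances, in pixels.

        predicted, ground_truth: (N, 2) arrays of keypoint coordinates.
        """
        diffs = np.asarray(predicted, dtype=float) - np.asarray(ground_truth, dtype=float)
        errors = np.linalg.norm(diffs, axis=1)  # one distance per keypoint
        return errors.mean(), errors.std(), errors.max()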

4. Discussion

Our keypoint detection method surpassed both the DNN-based keypoint detection and the complete keypoint detection method from Silva et al., which were, to our knowledge, the state-of-the-art breast keypoint detection algorithms, on the endpoint and breast contour detection tasks. Besides, this novel algorithm achieves lower values of standard deviation and maximum error, which suggests more consistency when compared with the other two. Regarding the trade-off between accuracy and execution time, our keypoint detection method presents the best balance: it is the most accurate model, with a time-efficiency comparable to that of the most time-efficient method.

5. Conclusions

In this work, we proposed a keypoint detection method that combines a deep keypoint detection model and a deep image segmentation model, capable of achieving a good balance between performance and execution time. Further studies should focus on the development of a fully end-to-end deep keypoint detection model, trained with a multi-term loss function, and on the deployment of these breast keypoint detection algorithms into a fully functional web application for both the research and medical communities.

Author Contributions

Conceptualization, T.G., W.S. and J.S.C.; methodology, T.G., W.S. and J.S.C.; software, T.G. and W.S.; validation, J.S.C.; formal analysis, J.S.C.; investigation, T.G., W.S., M.J.C. and J.S.C.; resources, M.J.C. and J.S.C.; data curation, M.J.C. and J.S.C.; writing—original draft preparation, T.G.; writing—review and editing, W.S. and J.S.C.; visualization, M.J.C. and J.S.C.; supervision, J.S.C.; project administration, J.S.C.; funding acquisition, J.S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme and by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia within project “POCI-01-0145-FEDER-028857” and PhD grant number SFRH/BD/139468/2018.

Acknowledgments

The authors thank Florian Fitzal for sharing the VIENNA dataset.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Oliveira, H.P.; Cardoso, J.S.; Magalhaes, A.; Cardoso, M.J. Methods for the Aesthetic Evaluation of Breast Cancer Conservation Treatment: A Technological Review. Curr. Med. Imaging Rev. 2013, 9, 32–46.
  2. Wilkes, G.M.; Barton-Burke, M. 2018 Oncology Nursing Drug Handbook; Jones & Bartlett Learning: Burlington, MA, USA, 2018.
  3. Street, W. Cancer Facts & Figures; American Cancer Society: Atlanta, GA, USA, 2018; p. 76.
  4. Cardoso, J.S.; Cardoso, M.J. Towards an intelligent medical system for the aesthetic evaluation of breast cancer conservative treatment. Artif. Intell. Med. 2007, 40, 115–126.
  5. Silva, W.; Castro, E.; Cardoso, M.J.; Fitzal, F.; Cardoso, J.S. Deep Keypoint Detection for the Aesthetic Evaluation of Breast Cancer Surgery Outcomes. In Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI’19), Venice, Italy, 8–11 April 2019.
  6. Gonçalves, T.; Silva, W.; Cardoso, M.J.; Cardoso, J.S. A novel approach to keypoint detection for the aesthetic evaluation of breast cancer surgery outcomes. In Health and Technology; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–13.
  7. Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Stoyanov, D., Taylor, Z., Carneiro, G., Syeda-Mahmood, T., Martel, A., Maier-Hein, L., Tavares, J.M.R., Bradley, A., Papa, J.P., Belagiannis, V., et al., Eds.; Springer International Publishing: Cham, Switzerland, 2018; Volume 11045, pp. 3–11.
  8. Lorensen, W.E.; Cline, H.E. Marching cubes: A high resolution 3D surface construction algorithm. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’87); ACM Press: New York, NY, USA, 1987; pp. 163–169.
Figure 1. Proposed deep image segmentation keypoint detection method [6].
Figure 2. Example of the results obtained with the proposed deep image segmentation method. The first image is the input photograph and the second image is the U-Net++ predicted mask with the detected breast keypoints (after the processing step).
Table 1. Average error distance for endpoints, breast contours and nipples, measured in pixels, and average execution time of the models’ inference, measured in seconds. Note: STD stands for standard deviation and Max stands for maximum error.

Model                                   | Endpoints (Mean/STD/Max) | Breast Contours (Mean/STD/Max) | Nipples (Mean/STD/Max) | Execution Time (s)
Silva et al. Keypoint Detection DNN     | 40 / 33 / 182            | 21 / 8 / 72                    | 70 / 39 / 218          | 150
Silva et al. Keypoint Detection Method  | 40 / 33 / 182            | 13 / 14 / 104                  | 70 / 39 / 218          | 1704
Proposed Keypoint Detection Method      | 38 / 34 / 195            | 11 / 5 / 34                    | 70 / 39 / 218          | 280
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
