Artificial Intelligence in Breast Cancer Screening

A special issue of Tomography (ISSN 2379-139X). This special issue belongs to the section "Cancer Imaging".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 35519

Special Issue Editors


Dr. Aimilia Gastounioti
Guest Editor
Mallinckrodt Institute of Radiology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA
Interests: computational imaging phenotypes; artificial intelligence; radiomics; deep learning; breast cancer risk

Dr. Debbie Bennett
Guest Editor
Mallinckrodt Institute of Radiology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA
Interests: breast cancer screening and diagnosis; mammography; breast ultrasound; breast intervention; clinical trials for breast cancer

Special Issue Information

Dear Colleagues,

Most developed healthcare systems have implemented breast cancer screening programs, initially using analog screen-film-based mammography systems and, over the last 20 years, transitioning to the use of fully digital systems (digital mammography and digital breast tomosynthesis). Much of the effort to improve breast cancer screening outcomes has focused on intensifying screening, e.g., double-reading instead of single-reading and more frequent or supplemental screening (with breast ultrasound or MRI), which entail increased resources and often come at a cost of higher false-positive rates. Furthermore, personalized breast cancer screening regimens tailored to an individual’s breast cancer risk are increasingly being advocated. The artificial intelligence (AI) revolution in computational imaging, driven by radiomic machine learning and more recently by deep learning, has also pervaded this complex landscape of breast cancer screening, including AI models for breast density evaluation, breast cancer risk assessment, breast cancer detection and prognosis, as well as enhancing efficiency in breast cancer care.

Therefore, it is with pleasure that we invite investigators to contribute to this Special Issue with original research articles, review articles, and meta-analysis articles addressing these topics, with special regard to their clinical and radiological implications for breast cancer screening.

Dr. Aimilia Gastounioti
Dr. Debbie Bennett
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Tomography is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning
  • radiomics
  • machine learning
  • breast cancer
  • breast cancer risk
  • digital mammography
  • breast tomosynthesis
  • breast MRI
  • breast ultrasound

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)

Research

13 pages, 1048 KiB  
Article
Artificial Intelligence for Image-Based Breast Cancer Risk Prediction Using Attention
by Stepan Romanov, Sacha Howell, Elaine Harkness, Megan Bydder, D. Gareth Evans, Steven Squires, Martin Fergie and Sue Astley
Tomography 2023, 9(6), 2103-2115; https://doi.org/10.3390/tomography9060165 - 24 Nov 2023
Cited by 2 | Viewed by 2911
Abstract
Accurate prediction of individual breast cancer risk paves the way for personalised prevention and early detection. The incorporation of genetic information and breast density has been shown to improve predictions for existing models, but detailed image-based features are yet to be included despite correlating with risk. Complex information can be extracted from mammograms using deep-learning algorithms; however, this is a challenging area of research, partly due to the lack of data within the field and partly due to the computational burden. We propose an attention-based Multiple Instance Learning (MIL) model that can make accurate, short-term risk predictions from full-resolution mammograms taken prior to the detection of cancer. Current screen-detected cancers are mixed in with prior mammograms during model development, both to promote the detection of features associated specifically with risk and with cancer formation, and to alleviate data scarcity. The resulting model, MAI-risk, achieves an AUC of 0.747 [0.711, 0.783] in cancer-free screening mammograms of women who went on to develop a screen-detected or interval cancer between 5 and 55 months later, outperforming both IBIS (AUC 0.594 [0.557, 0.633]) and VAS (AUC 0.649 [0.614, 0.683]) alone when accounting for established clinical risk factors. Full article
(This article belongs to the Special Issue Artificial Intelligence in Breast Cancer Screening)
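
The attention-based MIL approach summarized above pools patch-level features from a full-resolution mammogram into a single bag-level risk score. As a generic illustration only, the PyTorch sketch below shows gated-attention MIL pooling; the layer sizes, the stand-in encoder, and all names are assumptions rather than the authors' MAI-risk implementation.

```python
# Hypothetical sketch of attention-based multiple-instance learning (MIL)
# for mammogram patches; layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionMILRisk(nn.Module):
    def __init__(self, feat_dim=512, attn_dim=128):
        super().__init__()
        # A patch encoder would normally be a CNN backbone; a linear
        # projection stands in for it here.
        self.encoder = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        # Gated attention assigns one weight per patch (instance).
        self.attn_V = nn.Linear(feat_dim, attn_dim)
        self.attn_U = nn.Linear(feat_dim, attn_dim)
        self.attn_w = nn.Linear(attn_dim, 1)
        self.classifier = nn.Linear(feat_dim, 1)  # short-term risk logit

    def forward(self, patches):              # patches: (n_patches, feat_dim)
        h = self.encoder(patches)
        a = self.attn_w(torch.tanh(self.attn_V(h)) * torch.sigmoid(self.attn_U(h)))
        a = torch.softmax(a, dim=0)          # attention over patches
        bag = (a * h).sum(dim=0)             # weighted bag representation
        return self.classifier(bag), a.squeeze(-1)

# Usage: one "bag" of pre-extracted patch features per mammogram.
features = torch.randn(64, 512)              # 64 patches, 512-dim features each
logit, attention = AttentionMILRisk()(features)
risk = torch.sigmoid(logit)
```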

13 pages, 4548 KiB  
Article
Dedicated Cone-Beam Breast CT: Reproducibility of Volumetric Glandular Fraction with Advanced Image Reconstruction Methods
by Srinivasan Vedantham, Hsin Wu Tseng, Zhiyang Fu and Hsiao-Hui Sherry Chow
Tomography 2023, 9(6), 2039-2051; https://doi.org/10.3390/tomography9060160 - 2 Nov 2023
Viewed by 3180
Abstract
Dedicated cone-beam breast computed tomography (CBBCT) is an emerging modality and provides fully three-dimensional (3D) images of the uncompressed breast at an isotropic voxel resolution. In an effort to translate this modality to breast cancer screening, advanced image reconstruction methods are being pursued. Since radiographic breast density is an established risk factor for breast cancer and CBBCT provides volumetric data, this study investigates the reproducibility of the volumetric glandular fraction (VGF), defined as the proportion of fibroglandular tissue volume relative to the total breast volume excluding the skin. Four image reconstruction methods were investigated: the analytical Feldkamp–Davis–Kress (FDK), a compressed sensing-based fast, regularized, iterative statistical technique (FRIST), a fully supervised deep learning approach using a multi-scale residual dense network (MS-RDN), and a self-supervised approach based on Noise-to-Noise (N2N) learning. Projection datasets from 106 women who participated in a prior clinical trial were reconstructed using each of these algorithms at a fixed isotropic voxel size of (0.273 mm)³. Each reconstructed breast volume was segmented into skin, adipose, and fibroglandular tissues, and the VGF was computed. The VGF did not differ among the four reconstruction methods (p = 0.167), and none of the three advanced image reconstruction algorithms differed from the standard FDK reconstruction (p > 0.862). Advanced reconstruction algorithms developed for low-dose CBBCT reproduce the VGF to provide quantitative breast density, which can be used for risk estimation. Full article
(This article belongs to the Special Issue Artificial Intelligence in Breast Cancer Screening)
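
The volumetric glandular fraction (VGF) studied above is the fibroglandular tissue volume divided by the total breast volume excluding skin. A minimal NumPy sketch of that computation from a labelled CBBCT volume follows; the label encoding is an assumption for illustration, not the study's actual segmentation output.

```python
# Hypothetical sketch: volumetric glandular fraction (VGF) from a segmented
# CBBCT volume. Label conventions (0=air, 1=skin, 2=adipose, 3=fibroglandular)
# are illustrative assumptions, not the study's actual encoding.
import numpy as np

def volumetric_glandular_fraction(labels: np.ndarray) -> float:
    """VGF = fibroglandular volume / (breast volume excluding skin)."""
    fibroglandular = np.count_nonzero(labels == 3)
    adipose = np.count_nonzero(labels == 2)
    breast_no_skin = fibroglandular + adipose
    return fibroglandular / breast_no_skin if breast_no_skin else float("nan")

# Example on a toy volume with isotropic voxels (the voxel size cancels out).
toy = np.random.choice([0, 1, 2, 3], size=(64, 64, 64), p=[0.5, 0.1, 0.3, 0.1])
print(f"VGF = {volumetric_glandular_fraction(toy):.3f}")
```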

12 pages, 1826 KiB  
Article
Impact of Tomosynthesis Acquisition on 3D Segmentations of Breast Outline and Adipose/Dense Tissue with AI: A Simulation-Based Study
by Bruno Barufaldi, Jordy Gomes, Thais G. do Rego, Yuri Malheiros, Telmo M. Silva Filho, Lucas R. Borges, Raymond J. Acciavatti, Suleman Surti and Andrew D. A. Maidment
Tomography 2023, 9(4), 1303-1314; https://doi.org/10.3390/tomography9040103 - 3 Jul 2023
Cited by 1 | Viewed by 1697
Abstract
Digital breast tomosynthesis (DBT) reconstructions introduce out-of-plane artifacts and false-tissue boundaries impacting the dense/adipose and breast outline (convex hull) segmentations. A virtual clinical trial method was proposed to segment both the breast tissues and the breast outline in DBT reconstructions. The DBT images of a representative population were simulated using three acquisition geometries: a left–right scan (conventional, I), a two-directional scan in the shape of a “T” (II), and an extra-wide range (XWR, III) left–right scan at a six-times higher dose than I. The nnU-Net was modified to include two segmentation losses: (1) tissues and (2) breast outline. The impact of loss (1) alone and of the combined losses (1) and (2) was evaluated using models trained with data simulating geometry I. The impact of the acquisition geometry was evaluated using the combined loss (1&2). The combined loss (1&2) improved the convex hull estimates, resolving 22.2% of the false classifications of air voxels. Geometry II was superior to I and III, resolving 99.1% and 96.8% of the false classifications of air voxels, respectively. Geometry III (Dice = (0.98, 0.94)) was superior to I (0.92, 0.78) and II (0.93, 0.74) for the tissue segmentation (adipose, dense, respectively). Thus, the combined loss (1&2) provided better segmentation, and the T and XWR geometries improved the dense/adipose and breast outline segmentations relative to the conventional scan. Full article
(This article belongs to the Special Issue Artificial Intelligence in Breast Cancer Screening)
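
The modification described above trains the segmentation network with a tissue loss and a breast-outline loss together. The sketch below shows one plausible form of such a combined objective, with a multi-class tissue term and a binary breast-vs-air term derived from the same logits; the label grouping, loss choices, and weighting are assumptions and not the paper's actual nnU-Net modification.

```python
# Hypothetical combined loss: (1) multi-class tissue loss plus (2) a binary
# breast-outline (breast-vs-air) loss derived from the same logits.
# Class indexing (0=air, 1=adipose, 2=dense) and the weighting are assumptions.
import torch
import torch.nn.functional as F

def combined_loss(logits, target, outline_weight=0.5):
    # logits: (batch, 3, D, H, W); target: (batch, D, H, W) with labels {0, 1, 2}
    tissue_loss = F.cross_entropy(logits, target)                     # loss (1): tissues
    probs = torch.softmax(logits, dim=1)
    breast_prob = probs[:, 1:].sum(dim=1)                             # P(adipose or dense)
    breast_mask = (target > 0).float()                                # breast outline target
    outline_loss = F.binary_cross_entropy(breast_prob, breast_mask)   # loss (2): outline
    return tissue_loss + outline_weight * outline_loss

# Example with random tensors standing in for a DBT reconstruction patch.
logits = torch.randn(1, 3, 8, 64, 64, requires_grad=True)
target = torch.randint(0, 3, (1, 8, 64, 64))
loss = combined_loss(logits, target)
loss.backward()
```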

13 pages, 2984 KiB  
Article
Multiclass Segmentation of Breast Tissue and Suspicious Findings: A Simulation-Based Study for the Development of Self-Steering Tomosynthesis
by Bruno Barufaldi, Yann N. G. da Nobrega, Giulia Carvalhal, Joao P. V. Teixeira, Telmo M. Silva Filho, Thais G. do Rego, Yuri Malheiros, Raymond J. Acciavatti and Andrew D. A. Maidment
Tomography 2023, 9(3), 1120-1132; https://doi.org/10.3390/tomography9030092 - 10 Jun 2023
Viewed by 1675
Abstract
In breast tomosynthesis, multiple low-dose projections are acquired in a single scanning direction over a limited angular range to produce cross-sectional planes through the breast for three-dimensional imaging interpretation. We built a next-generation tomosynthesis system capable of multidirectional source motion with the intent to customize scanning motions around “suspicious findings”. Customized acquisitions can improve the image quality in areas that require increased scrutiny, such as breast cancers, architectural distortions, and dense clusters. In this paper, virtual clinical trial techniques were used to analyze whether a finding or area at high risk of masking cancers can be detected in a single low-dose projection and thus be used for motion planning. This represents a step towards customizing the subsequent low-dose projection acquisitions autonomously, guided by the first low-dose projection; we call this technique “self-steering tomosynthesis.” A U-Net was used to classify the low-dose projections into “risk classes” in simulated breasts with soft-tissue lesions; class probabilities were modified using post hoc Dirichlet calibration (DC). DC improved the multiclass segmentation (Dice = 0.43 vs. 0.28 before DC) and significantly reduced false positives (FPs) from the class of the highest risk of masking (sensitivity = 81.3% at 2 FPs per image vs. 76.0%). This simulation-based study demonstrated the feasibility of identifying suspicious areas using a single low-dose projection for self-steering tomosynthesis. Full article
(This article belongs to the Special Issue Artificial Intelligence in Breast Cancer Screening)
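
Post hoc Dirichlet calibration, used above to refine the U-Net's class probabilities, fits a linear map of the log class probabilities followed by a softmax on held-out data. The generic sketch below illustrates that idea without the regularization used in the full method; the number of risk classes and the training settings are assumptions.

```python
# Generic sketch of post hoc Dirichlet calibration: learn W, b so that
# softmax(W @ log(p) + b) is better calibrated, fitted on a held-out set.
import torch
import torch.nn as nn

class DirichletCalibrator(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.linear = nn.Linear(n_classes, n_classes)   # W and b of the calibration map

    def forward(self, probs, eps=1e-12):
        return self.linear(torch.log(probs + eps))      # calibrated logits

def fit_calibrator(probs_val, labels_val, n_classes, epochs=200, lr=0.01):
    cal = DirichletCalibrator(n_classes)
    opt = torch.optim.Adam(cal.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(cal(probs_val), labels_val)
        loss.backward()
        opt.step()
    return cal

# Example: calibrate 4-class "risk class" probabilities on held-out projections
# (the class count is an assumption for illustration).
probs_val = torch.softmax(torch.randn(500, 4), dim=-1)   # uncalibrated outputs
labels_val = torch.randint(0, 4, (500,))                 # reference risk classes
calibrator = fit_calibrator(probs_val, labels_val, n_classes=4)
calibrated_probs = torch.softmax(calibrator(probs_val), dim=-1)
```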

15 pages, 6640 KiB  
Article
Improving Performance of Breast Lesion Classification Using a ResNet50 Model Optimized with a Novel Attention Mechanism
by Warid Islam, Meredith Jones, Rowzat Faiz, Negar Sadeghipour, Yuchen Qiu and Bin Zheng
Tomography 2022, 8(5), 2411-2425; https://doi.org/10.3390/tomography8050200 - 28 Sep 2022
Cited by 26 | Viewed by 5755
Abstract
Background: The accurate classification between malignant and benign breast lesions detected on mammograms is a crucial but difficult challenge for reducing false-positive recall rates and improving the efficacy of breast cancer screening. Objective: This study aims to optimize a new deep transfer learning model by implementing a novel attention mechanism in order to improve the accuracy of breast lesion classification. Methods: ResNet50 is selected as the base model to develop a new deep transfer learning model. To enhance the accuracy of breast lesion classification, we propose adding a convolutional block attention module (CBAM) to the standard ResNet50 model and optimizing a new model for this task. We assembled a large dataset with 4280 mammograms depicting suspicious soft-tissue mass-type lesions. A region of interest (ROI) is extracted from each image based on the lesion center. Among them, 2480 and 1800 ROIs depict verified benign and malignant lesions, respectively. The image dataset is randomly split into two subsets with a ratio of 9:1 five times to train and test two ResNet50 models with and without CBAM. Results: Using the area under the ROC curve (AUC) as an evaluation index, the new CBAM-based ResNet50 model yields AUC = 0.866 ± 0.015, which is significantly higher than that obtained by the standard ResNet50 model (AUC = 0.772 ± 0.008) (p < 0.01). Conclusion: This study demonstrates that although deep transfer learning technology has attracted broad research interest in medical imaging informatics, adding a new attention mechanism to optimize deep transfer learning models for specific application tasks can play an important role in further improving model performance. Full article
(This article belongs to the Special Issue Artificial Intelligence in Breast Cancer Screening)
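
The convolutional block attention module (CBAM) referenced above applies channel attention followed by spatial attention to a feature map. The compact PyTorch sketch below is a generic CBAM block for illustration; the reduction ratio, kernel size, and insertion points within ResNet50 are assumptions and may differ from the study's configuration.

```python
# Generic CBAM block (channel attention followed by spatial attention).
# Reduction ratio 16 and kernel size 7 follow common defaults, which may
# differ from the study's settings.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                              # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))             # channel attention from avg pool
        mx = self.mlp(x.amax(dim=(2, 3)))              # ...and from max pool
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(dim=1, keepdim=True),    # spatial attention from
                       x.amax(dim=1, keepdim=True)], dim=1)  # channel-wise avg/max
        return x * torch.sigmoid(self.spatial(s))

# Example: a CBAM block could be placed after the output of a ResNet50 stage.
features = torch.randn(2, 256, 56, 56)
refined = CBAM(256)(features)
```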

11 pages, 2290 KiB  
Article
Deep Learning Using Multiple Degrees of Maximum-Intensity Projection for PET/CT Image Classification in Breast Cancer
by Kanae Takahashi, Tomoyuki Fujioka, Jun Oyama, Mio Mori, Emi Yamaga, Yuka Yashima, Tomoki Imokawa, Atsushi Hayashi, Yu Kujiraoka, Junichi Tsuchiya, Goshi Oda, Tsuyoshi Nakagawa and Ukihide Tateishi
Tomography 2022, 8(1), 131-141; https://doi.org/10.3390/tomography8010011 - 5 Jan 2022
Cited by 16 | Viewed by 4857
Abstract
Deep learning (DL) has recently become a remarkably powerful tool for image processing. However, the usefulness of DL in positron emission tomography (PET)/computed tomography (CT) for breast cancer (BC) has been insufficiently studied. This study investigated whether a DL model using PET maximum-intensity projection (MIP) images rendered at multiple angles contributes to increased diagnostic accuracy for PET/CT image classification in BC. We retrospectively gathered 400 images of 200 BC and 200 non-BC patients as training data. For each image, we obtained PET MIP images at four different angles (0°, 30°, 60°, 90°) and built two DL models using Xception. One DL model diagnosed BC with only the 0° MIP, and the other used all four angles. After the training phase, our DL models analyzed test data including 50 BC and 50 non-BC patients. Five radiologists interpreted these test data. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated. Our 4-degree model, 0-degree model, and the radiologists had a sensitivity of 96%, 82%, and 80–98% and a specificity of 80%, 88%, and 76–92%, respectively. Our 4-degree model had equal or better diagnostic performance compared with that of the radiologists (AUC = 0.936 and 0.872–0.967, p = 0.036–0.405). A DL model similar to our 4-degree model may help radiologists in their diagnostic work in the future. Full article
(This article belongs to the Special Issue Artificial Intelligence in Breast Cancer Screening)
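
The 4-degree model described above takes PET maximum-intensity projections rendered at several rotation angles as input. The NumPy/SciPy sketch below shows one way such multi-angle MIPs could be generated from a PET volume; the axis conventions and interpolation settings are assumptions for illustration.

```python
# Hypothetical sketch: maximum-intensity projections (MIPs) of a PET volume
# at several rotation angles. Axis conventions (z, y, x) and the projection
# axis are illustrative assumptions.
import numpy as np
from scipy.ndimage import rotate

def multi_angle_mips(volume: np.ndarray, angles=(0, 30, 60, 90)) -> np.ndarray:
    """Return a stack of MIPs, one per rotation angle, with shape (n_angles, z, x)."""
    mips = []
    for angle in angles:
        # Rotate in the axial (y, x) plane, then project along y.
        rotated = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        mips.append(rotated.max(axis=1))
    return np.stack(mips)

# Example: four MIPs that could feed a multi-channel classifier such as Xception.
pet = np.random.rand(128, 96, 96).astype(np.float32)   # toy (z, y, x) volume
mip_stack = multi_angle_mips(pet)                        # shape (4, 128, 96)
```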

Review

10 pages, 440 KiB  
Review
Deep Learning Approaches with Digital Mammography for Evaluating Breast Cancer Risk, a Narrative Review
by Maham Siddique, Michael Liu, Phuong Duong, Sachin Jambawalikar and Richard Ha
Tomography 2023, 9(3), 1110-1119; https://doi.org/10.3390/tomography9030091 - 6 Jun 2023
Cited by 3 | Viewed by 2718
Abstract
Breast cancer remains the leading cause of cancer-related deaths in women worldwide. Current screening regimens and clinical breast cancer risk assessment models use risk factors such as demographics and patient history to guide policy and assess risk. Applications of artificial intelligence (AI) methods such as deep learning (DL) and convolutional neural networks (CNNs) to evaluate individual patient information and imaging have shown promise as personalized risk models. We reviewed the current literature for studies applying deep learning and convolutional neural networks to digital mammography for assessing breast cancer risk. We then discussed this literature and examined the ongoing and future applications of deep learning techniques in breast cancer risk modeling. Full article
(This article belongs to the Special Issue Artificial Intelligence in Breast Cancer Screening)

19 pages, 3961 KiB  
Review
A Review of Computer-Aided Breast Cancer Diagnosis Using Sequential Mammograms
by Kosmia Loizidou, Galateia Skouroumouni, Christos Nikolaou and Costas Pitris
Tomography 2022, 8(6), 2874-2892; https://doi.org/10.3390/tomography8060241 - 6 Dec 2022
Cited by 2 | Viewed by 4839
Abstract
Radiologists assess the results of mammography, the key screening tool for the detection of breast cancer, to determine the presence of malignancy. They routinely compare recent and prior mammographic views to identify changes between screenings. A new lesion that appears in a mammogram, or a region that is changing rapidly, is more likely to be suspicious than a lesion that remains unchanged, which is usually benign. However, visual evaluation of mammograms is challenging even for expert radiologists. For this reason, various Computer-Aided Diagnosis (CAD) algorithms are being developed to assist in the diagnosis of abnormal breast findings using mammograms. Most current CAD systems do so using only the most recent mammogram. This paper provides a review of the development of methods that emulate the radiological approach and perform automatic segmentation and/or classification of breast abnormalities using sequential mammogram pairs. It begins by demonstrating the importance of utilizing prior views in mammography, through a review of studies comparing the performance of expert and less-trained radiologists. Next, image registration techniques and their application to mammography are presented. Subsequently, studies that implemented temporal analysis or subtraction of temporally sequential mammograms are summarized. Finally, a description of the open-access mammography datasets is provided. This comprehensive review can serve as a thorough introduction to the use of prior information in breast cancer CAD systems and also provides indicative directions to guide future applications. Full article
(This article belongs to the Special Issue Artificial Intelligence in Breast Cancer Screening)
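
Temporal analysis of sequential mammograms, as reviewed above, typically registers the prior view to the current one and then examines their difference. The deliberately simplified sketch below uses translation-only registration via phase cross-correlation before subtraction; the CAD systems reviewed rely on far more sophisticated, often non-rigid, registration.

```python
# Deliberately simplified sketch of temporal subtraction for a mammogram pair:
# translation-only registration via phase cross-correlation, then subtraction.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def temporal_subtraction(current: np.ndarray, prior: np.ndarray) -> np.ndarray:
    """Align the prior mammogram to the current one, then subtract."""
    offset, _, _ = phase_cross_correlation(current, prior)   # translation estimate
    prior_aligned = nd_shift(prior, offset, order=1)          # apply the shift
    return current - prior_aligned                            # changes stand out

# Toy example: the current view is a shifted copy of the prior plus a new finding.
rng = np.random.default_rng(0)
prior = rng.random((256, 256)).astype(np.float32)
current = nd_shift(prior, (3, -2), order=1)                   # simulated positioning shift
current[100:110, 120:130] += 1.0                              # simulated new lesion
difference = temporal_subtraction(current, prior)
```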

12 pages, 908 KiB  
Review
Deep Learning Prediction of Pathologic Complete Response in Breast Cancer Using MRI and Other Clinical Data: A Systematic Review
by Nabeeha Khan, Richard Adam, Pauline Huang, Takouhie Maldjian and Tim Q. Duong
Tomography 2022, 8(6), 2784-2795; https://doi.org/10.3390/tomography8060232 - 21 Nov 2022
Cited by 11 | Viewed by 4782
Abstract
Breast cancer patients who have a pathological complete response (pCR) to neoadjuvant chemotherapy (NAC) are more likely to have better clinical outcomes. The ability to predict which patients will respond to NAC early in the treatment course is important because it could help to minimize unnecessary toxic NAC and to modify regimens mid-treatment to achieve better efficacy. Machine learning (ML) is increasingly being used in radiology and medicine because it can identify relationships amongst complex data elements to inform outcomes without the need to specify such relationships a priori. One of the most popular deep learning methods applied to medical images is the convolutional neural network (CNN). In contrast to conventional supervised ML, deep learning CNNs can operate on whole images without requiring radiologists to manually contour the tumor. Although there have been many review papers on supervised ML prediction of pCR, review papers on deep learning prediction of pCR are sparse. Deep learning CNNs could also incorporate multiple image types, clinical data such as demographics and molecular subtypes, and data from multiple treatment time points to predict pCR. The goal of this study is to perform a systematic review of deep learning methods that use whole-breast MRI images without annotation or tumor segmentation to predict pCR in breast cancer. Full article
(This article belongs to the Special Issue Artificial Intelligence in Breast Cancer Screening)
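
The reviewed deep learning approaches feed whole-breast MRI into a CNN without tumor annotation and can additionally fuse clinical variables such as demographics or molecular subtype. The PyTorch sketch below shows a schematic image-plus-clinical fusion classifier; every architectural choice here is a placeholder and does not correspond to any specific model in the review.

```python
# Schematic sketch of a pCR classifier that fuses whole-image CNN features
# with clinical variables; all architecture choices are illustrative placeholders.
import torch
import torch.nn as nn

class PCRFusionNet(nn.Module):
    def __init__(self, n_clinical=4):
        super().__init__()
        self.cnn = nn.Sequential(                      # small 2D CNN over a whole MRI slice
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(                     # fuse image and clinical features
            nn.Linear(32 + n_clinical, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, image, clinical):
        fused = torch.cat([self.cnn(image), clinical], dim=1)
        return self.head(fused)                        # logit for pCR vs. non-pCR

# Example: one whole-breast MRI slice plus a few clinical covariates.
image = torch.randn(1, 1, 256, 256)
clinical = torch.tensor([[52.0, 1.0, 0.0, 1.0]])       # e.g., age plus one-hot subtype
pcr_prob = torch.sigmoid(PCRFusionNet()(image, clinical))
```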
