
The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review

Mohammad Madani, Mohammad Mahdi Behzadi and Sheida Nabavi
Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
Author to whom correspondence should be addressed.
Cancers 2022, 14(21), 5334;
Submission received: 1 October 2022 / Revised: 23 October 2022 / Accepted: 25 October 2022 / Published: 29 October 2022



Simple Summary

Breast cancer is the most common cancer, and it resulted in the deaths of about 700,000 people around the world in 2020. Various imaging modalities have been utilized to detect and analyze breast cancer. However, manual detection of cancer in the large images produced by these modalities is usually time-consuming and can be inaccurate. Early and accurate detection of breast cancer plays a critical role in improving the prognosis, bringing the patient survival rate to 50%. Recently, artificial-intelligence-based approaches such as deep learning algorithms have shown remarkable advancements in early breast cancer diagnosis. This review first introduces the various breast cancer imaging modalities and their available public datasets, and then surveys the most recent studies on deep-learning-based models for breast cancer analysis. It systematically summarizes the imaging modalities, relevant public datasets, deep learning architectures used for each modality, model performance on tasks such as classification and segmentation, and research directions.


Breast cancer is among the most common and fatal diseases for women, and no permanent cure has been discovered. Thus, early detection is a crucial step in controlling and curing breast cancer and can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of cancer, all of whom survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting diagnosis and treatment. These modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze the images produced by these methods manually, which increases the risk of wrong decisions in cancer detection. Thus, new automatic methods that analyze all kinds of breast screening images and assist radiologists in interpreting them are required. Recently, artificial intelligence (AI) has been widely utilized to improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing patients' chances of survival. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities.
In addition, we report available datasets for the breast cancer imaging modalities, which are important for developing AI-based algorithms and training deep learning models. In conclusion, this review aims to provide a comprehensive resource for researchers working on breast cancer imaging analysis.

1. Introduction

Breast cancer is the second most fatal disease in women and a leading cause of death for millions of women around the world [1]. According to the American Cancer Society, approximately 20% of women who have been diagnosed with breast cancer die of it [2,3]. Generally, breast tumors are divided into four groups: normal, benign, in situ carcinoma, and invasive carcinoma [1]. A benign tumor is an abnormal but noncancerous collection of cells in which minor changes in cell structure occur; these cells are not considered cancerous [1]. However, in situ carcinoma and invasive carcinoma are classified as cancer [4]. In situ carcinoma remains in its organ of origin and does not affect other organs. In contrast, invasive carcinoma spreads to surrounding organs and causes the development of many cancerous cells there [5,6]. Early detection of breast cancer is a decisive step for treatment and is critical to avoiding further advancement of the cancer and its complications [7]. There are several well-known imaging modalities to detect and treat breast cancer at an early stage, including mammograms (MM) [8], breast thermography (BTD) [9], magnetic resonance imaging (MRI) [10], positron emission tomography (PET) [11], computed tomography (CT) [11], ultrasound (US) [12], and histopathology (HP) [13]. Among these, mammograms (MMs) and histopathology (HP), which involves image analysis of removed tissue stained with hematoxylin and eosin to increase visibility, are the most widely used [14,15]. Mammography screens large populations for initial breast cancer symptoms, while histopathology captures microscopic images at the highest possible resolution to locate cancerous tissue at the molecular level [16,17]. In breast cancer screening practice, radiologists or pathologists observe and examine breast images manually for diagnosis, prognosis, and treatment decisions [7].
Such screening often leads to over- or under-treatment because of inaccurate detection, resulting in a prolonged diagnosis process [18]. It is worth noting that only 0.6% to 0.7% of cancer detections in women during screening are validated, and 15–35% of cancer screenings fail due to errors related to the imaging process, image quality, and human fatigue [19,20,21]. Several decades ago, computer-aided detection (CAD) systems were first employed to assist radiologists in their decision-making. CAD systems generally analyze imaging data and other cancer-related data, alone or in combination with other clinical information [22]. Additionally, based on statistical models, CAD systems can estimate the probability of diseases such as breast cancer [23]. CAD systems have been widely used to help radiologists in patient-care processes such as cancer staging [23]. However, conventional CAD systems, which are based on traditional image processing techniques, have been limited in their utility and capability.
To tackle these problems, enhance efficiency, and decrease false cancer-detection rates, precise automated methods are needed to complement or replace human work. AI is one of the most effective approaches, capturing much attention in medical imaging analysis, especially for the automated analysis and extraction of relevant information from imaging modalities such as MMs and HPs [24,25]. Many available AI-based image recognition tools for breast cancer detection have exhibited better performance than traditional CAD systems and manual examination of images by expert radiologists or pathologists [26]. In other words, AI-based methods avoid expensive and time-consuming manual inspection and effectively extract key, determinative information from high-resolution image data [26,27]. For example, a spectrum of diseases is associated with specific features, such as mammographic features. AI can learn these types of features from the structure of image data and then detect the disease spectrum, assisting the radiologist or histopathologist. It is worth noting that, in contrast to human inspectors, algorithms largely behave as black boxes and cannot understand the context, mode of collection, or meaning of the images they view, resulting in the problem of “shortcut” learning [28,29]. Thus, building interpretable AI-based models is necessary. AI models for interpreting and extracting information from image data can generally be categorized into two groups: (1) traditional machine learning algorithms, which require handcrafted features derived from raw image data in a preprocessing step; and (2) deep learning algorithms, which process raw images and extract features through mathematical optimization and multiple levels of abstraction [30].
Although both approaches have shown promising results in breast cancer detection, the latter has recently attracted more interest, mainly because of its capability to learn the most salient representations of the data without human intervention, producing superior performance [31,32]. This review assesses and summarizes recent datasets and AI-based models, specifically those created with deep learning algorithms, used on BTD, PET, MRI, US, HP, and MM images in breast cancer screening and detection. We also highlight future directions in breast cancer detection via deep learning. This study can be summarized as follows: (1) a review of different imaging modalities for breast cancer screening; (2) a comparison of different deep learning models proposed in the most recent studies and the performances they achieved on breast cancer classification, segmentation, detection, and other analyses; and (3) conclusions and suggestions for future research directions. The main contributions of this paper can be listed as follows:
  • We reviewed different imaging tasks such as classification, segmentation, and detection through deep learning algorithms, while most of the existing review papers focus on a specific task.
  • We covered all available imaging modalities for breast cancer analysis, in contrast to most existing studies, which focus on only one or two imaging modalities.
  • For each imaging modality, we summarized all available datasets.
  • We considered the most recent studies (2019–2022) on breast cancer imaging diagnosis employing deep learning models.

2. Imaging Modalities and Available Datasets for Breast Cancer

In this study, we summarize well-known imaging modalities for breast cancer diagnosis and analysis. As many existing studies have shown, there are several imaging modalities, including mammography, histopathology, ultrasound, magnetic resonance imaging, positron emission tomography, digital breast tomosynthesis, and combinations of these modalities (multimodalities) [10,32,33]. There are various public or private datasets for these modalities. Approximately 70% of the available public datasets are related to the mammography and ultrasound modalities, demonstrating the prevalence of these methods, especially mammography, for breast cancer screening [31,32]. On the other hand, researchers have also widely utilized other modalities, such as histopathology and MRI, to confirm cancer and to deal with difficulties inherent to mammography and ultrasound, such as large variations in image shape, morphological structure, and breast tissue density. Here, we outline the aforementioned imaging modalities and the available datasets for breast cancer detection.

2.1. Mammograms (MMs)

The advantages of mammograms, such as cost-effective detection of tumors at an initial stage before further development, make MMs the most promising imaging screening technique in clinical practice. MMs are images of breasts produced by low-intensity X-rays (Figure 1) [33]. In this imaging modality, cancerous regions appear brighter and clearer than other parts of the breast tissue, helping to detect small variations in tissue composition; therefore, it is used for the diagnosis and analysis of breast cancer [34,35] (Figure 1). Although MMs are the standard approach for breast cancer analysis, they are inappropriate for women with dense breasts [36], since their performance highly depends on specific tumor morphological characteristics [36,37]. To deal with this problem, automated whole breast ultrasound (AWBU) or other methods are suggested alongside MMs to produce more detailed images of breast tissue [38].
For various tasks in breast cancer analysis, such as breast lesion detection and classification, MMs are generally divided into two forms: screen-film mammograms (SFM) and digital mammograms (DMM). DMM is widely categorized into three categories: full-field digital mammograms (FFDM), digital breast tomosynthesis (DBT), and contrast-enhanced digital mammograms (CEDM) [39,40,41,42,43,44]. SFM was the standard imaging method in MMs because of its high sensitivity (100%) in the analysis and detection of lesions in breasts composed primarily of fatty tissue [45]. However, it has many drawbacks, including the following: (1) SFM imaging needs to be repeated with a higher radiation dose because some parts of the SFM image have lower contrast and cannot be further improved, and (2) various regions of the breast image are represented according to the characteristic response of the film [19,45]. Since 2010, DMM has replaced film as the primary screening modality. The main advantages of digital imaging over film systems are the higher contrast resolution and the ability to enlarge the image or change its contrast and brightness. These advantages help radiologists detect subtle abnormalities, particularly against a background of dense breast tissue, more easily. Most studies comparing digital and film mammography have found little difference in cancer detection rates [46]. Digital mammography increases the chance of detecting invasive cancer in premenopausal and perimenopausal women and in women with dense breasts; however, it also increases false-positive findings [46]. Randomized mammographic trials/randomized controlled trials (RMT/RCT) represent the most important usage of MMs, through which large-scale screening for breast cancer analysis is performed. Despite the great capability of MMs for early-stage cancer detection, it is difficult to use MMs alone: additional screening tests such as breast self-examination (BSE) and clinical breast examination (CBE), which are more feasible methods to detect breast cancer at early stages, are needed alongside mammographic trials to improve breast cancer survival [38,47,48]. Additionally, BSE and CBE avoid much of the harm of MM screening, such as repeated imaging. More details about the advantages and disadvantages of MMs are provided in Table 1.
Figure 1. Example of breast cancer images using traditional film MMs. Reprinted/adapted with permission from [49]. 2021, Elsevier.

2.2. Digital Breast Tomosynthesis (DBT)

DBT is a novel imaging modality that creates 3D images of breasts using X-rays captured from different angles [50]. The method is similar to mammography, except that the X-ray tube moves in a circular arc around the breast [51,52,53] (Figure 2). Repeated exposures of the breast tissue at different angles produce DBT images in half-millimeter slices. Computational methods then combine the information from the X-ray images to produce z-stack breast images and 2D reconstruction images [53,54]. In contrast to the conventional SFM method, DBT can easily cover imaging of tumors from small to large sizes, and is especially effective for small lesions and dense breasts [55]. However, the main challenge of DBT is its long reading time, caused by the number of mammograms, the z-stack of images, and the recall rate for the architectural-distortion type of breast cancer abnormality [56]. After FFDM, DBT is the most commonly used MM-based imaging modality. Many recent studies used this modality for breast cancer detection due to its favorable sensitivity and accuracy in screening and its better depiction of tissue detail in breast cancer [57,58,59,60]. Table 1 provides details of the pros and cons of DBT for breast cancer analysis.

2.3. Ultrasound (US)

All of the aforementioned modalities can endanger patients and radiologists because of a possible overdose of ionizing radiation, making these approaches slightly risky and unhealthy for certain sensitive patients [62]. Additionally, these methods show low specificity, i.e., a low ability to correctly determine a tissue without disease as a negative case. Therefore, although the aforementioned imaging modalities are widely used for early breast cancer detection, US has been adopted as a safe imaging modality [62,63,64,65,66,67] (Figure 3). Compared to MMs, US is a more convenient method for women with dense breasts. It is also useful for characterizing abnormal regions and negative tumors detected by MMs [68]. Some studies showed the high accuracy of US in detecting and discriminating benign and malignant masses [69]. US images are used in three broad forms: (i) simple two-dimensional grayscale US images, (ii) color US images with added shear-wave elastography (SWE) features, and (iii) Nakagami-colored US images, none of which requires ionizing radiation [70,71]. It is worth noting that Nakagami-colored US images support region-of-interest extraction through better detection of irregular masses in the breast. Moreover, US can complement MMs owing to its availability, its low cost compared to other modalities, and its good tolerance by patients [70,72,73]. In a recent retrospective study, US breast imaging showed high predictive value when combined with MM images [74]. US images, along with MMs, improved overall detection by about 20% and decreased unnecessary biopsies by 40% in total [67]. Moreover, US is a reliable and valuable tool for metastatic lymph node screening in breast cancer patients; it is a cheap, noninvasive, easy-to-handle, and cost-effective diagnostic method [75]. However, US has some limitations.
For instance, the interpretation of US images is highly difficult and requires an expert radiologist to comprehensively understand them, because of the complex nature of US images and the presence of speckle noise [76,77]. To deal with this issue, new technologies have been introduced in breast US imaging, such as automated breast ultrasound (ABUS), which produces 3D images using wider probes. Shin et al. [78] showed that ABUS allows more appropriate image evaluation for large breast masses compared to conventional breast US. On the other hand, ABUS showed the lowest reliability in the prediction of residual tumor size and pCR (pathological complete response) [79]. Table 1 highlights more details about the weaknesses and strengths of the US imaging modality.

2.4. Magnetic Resonance Imaging (MRI)

MRI creates images of the whole breast and presents them as thin slices covering the entire breast volume. It works based on the radio-frequency absorption of nuclei in the presence of strong magnetic fields. MRI uses a magnetic field along with radio waves to capture multiple breast images of a tissue from different angles [81,82,83] (Figure 4). By combining these images, clear and detailed images of tissues are produced. Hence, MRI creates much clearer images for breast cancer analysis than other imaging modalities [84]. For instance, an MRI image shows many details clearly, leading to easy detection of lesions that are considered benign in other imaging modalities. Additionally, MRI is the most favorable method for breast cancer screening in women with dense breasts, without the ionizing radiation and other health risks seen in modalities such as MMs [85,86]. Another interesting feature of MRI is its capability to produce high-quality images with a clearer view via the administration of a contrast agent before imaging [87,88]. Furthermore, MRI is more accurate than MM, DBT, and US in evaluating residual tumors and predicting pCR [79,89], which helps clinicians select appropriate patients who can avoid surgery after neoadjuvant chemotherapy (the first-line treatment of breast cancer) when pCR is obtained [90,91]. Even though MRI exhibits promising advantages, such as high sensitivity, it shows low specificity and is time-consuming and expensive, especially since its reading time is long [92,93]. It is worth noting that some new MRI-based methods, such as ultrafast breast MRI (UF-MRI), create much more efficient images with high screening specificity and a short reading time [94,95].
Additionally, diffusion-weighted MR imaging (DWI-MRI) and dynamic contrast-enhanced MRI (DCE-MRI) provide higher volumetric resolution for better lesion visualization and lesion temporal pattern enhancement to use in breast cancer diagnosis and prognosis and correlation with genomics [53,81,96,97,98]. Details about various MRI-based methods and their pros and cons are available in Table 1.

2.5. Histopathology

Recently, various studies have confirmed that the gold standard for breast cancer diagnosis, treatment, and management is the histopathological analysis of a section of the suspected area by a pathologist [99,100,101]. Histopathology consists of examining tissue lesion samples stained, for example, with hematoxylin and eosin (H&E) to produce colored histopathologic (HP) images for better visualization and detailed analysis of tissues [102,103,104] (Figure 5). Generally, HP images are obtained from a piece of suspicious human tissue to be tested and analyzed by a pathologist [105]. HP images are gigapixel whole-slide images (WSI), from which small patches are extracted to facilitate analysis (Figure 5). In other words, pathologists extract small patches corresponding to ROIs from the WSI to diagnose breast cancer subtypes; this is a great advantage of HPs, enabling classification of multiple classes of breast cancer [106,107] for prognosis and treatment. Additionally, much more meaningful ROIs can be derived from HPs than from other imaging modalities, confirming their outstanding value for breast cancer classification, especially breast cancer subtype classification. Furthermore, one of the most important advantages of HPs is their capability to integrate multi-omics features to analyze and diagnose breast cancer with high confidence [108]. TCGA is the most favorable resource for breast histopathological images and is widely employed in multi-level omics integration investigations. Within TCGA, HPs provide contextual features for extracting morphological properties, while molecular information from omics data at different levels, including microRNA, CNV, and DNA methylation [108], is also available for each patient. Integrating morphology and multi-omics information provides an opportunity to detect and classify breast cancer more accurately.
Despite these advantages, HPs have some limitations. For example, analyzing multiple biopsy sections, such as converting an invasive biopsy to digital images, is a lengthy process requiring a high level of concentration due to the microscopic size of the cell structures [109]. More drawbacks and advantages of the HP imaging modality are summarized in Table 1.
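The patch-extraction step described above can be sketched in a few lines of numpy. This is a toy illustration, not the pipeline of any cited study: the "slide" is a small synthetic array, and the `intensity < 0.8` tissue test is an assumed stand-in for a real tissue-detection step.

```python
import numpy as np

def extract_patches(wsi, patch_size=256, stride=256, tissue_threshold=0.5):
    """Slide a window over a (H, W) grayscale slide and keep patches whose
    fraction of 'tissue' pixels (here: intensity < 0.8, i.e. stained; an
    assumed toy criterion) exceeds tissue_threshold."""
    patches, coords = [], []
    h, w = wsi.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = wsi[y:y + patch_size, x:x + patch_size]
            if np.mean(patch < 0.8) >= tissue_threshold:
                patches.append(patch)
                coords.append((y, x))
    return np.array(patches), coords

# Toy "slide": white background (1.0) with one stained region (0.3).
slide = np.ones((1024, 1024))
slide[256:768, 256:768] = 0.3
patches, coords = extract_patches(slide)  # keeps only the 4 stained windows
```

In a real WSI pipeline the same idea is applied at a chosen magnification level of a pyramidal slide file, and the kept patches become the training examples for the downstream classifier.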

2.6. Positron Emission Tomography (PET)

PET uses radiotracers to visualize and measure changes in metabolic processes and other physiological activities, such as blood flow, regional chemical composition, and absorption. PET is a recent, effective imaging method with a promising capability to measure tissues’ in vivo cellular, molecular, and biochemical properties (Figure 6). One of the key applications of PET is the analysis of breast cancer [110]. Studies highlighted that PET is a handy tool in staging advanced and inflammatory breast cancer and evaluating the treatment response of recurrent disease [34,35]. In contrast to anatomic imaging methods, PET offers more specific targeting of breast cancer, with a larger margin between tumor and normal tissue, representing a step forward in cancer detection alongside anatomic modalities [111,112,113]. Thus, PET is used in hybrid modalities with CT for specific organ imaging, combining the advantages of PET with improved spatial resolution. Additionally, PET integrates radionuclides with certain elements or pharmaceutical compounds to form radiotracers, improving its performance [114]. Fluorodeoxyglucose (FDG), a glucose analog, is the radiotracer most commonly used in breast cancer PET imaging studies [115]. Recent studies clarified a specific correlation between the degree of FDG uptake and several phenotypic features, including tumor histologic type and grade, cell receptor expression, and cellular proliferation [116,117]. These correlations have led to the use of the FDG-PET system for breast cancer analysis tasks such as diagnosis, staging, re-staging, and treatment-response evaluation [111,118,119]. Another PET system is a breast-dedicated high-resolution PET system designed for hanging-breast imaging.
Some studies demonstrate that these PET-based modalities can detect almost all breast lesions and cancerous regions [120]. Table 1 summarizes some of the limitations and advantages of PET-based imaging modalities. In Table 2, we also provide the most commonly used public datasets for the different imaging modalities in breast cancer detection.

3. Artificial Intelligence in Medical Image Analysis

Artificial intelligence (AI) has become very popular in the past few years because it adds human capabilities, e.g., learning, reasoning, and perception, to software accurately and efficiently; as a result, computers gain the ability to perform tasks usually carried out by humans. Recent advances in computing resources and the availability of large datasets, as well as the development of new AI algorithms, have opened the path to the use of AI in many different areas, including but not limited to image synthesis [121], speech recognition [122,123], and engineering [124,125,126]. AI has also been employed in healthcare for applications such as protein engineering [127,128,129,130], cancer detection [131], and drug discovery [132,133]. More specifically, AI algorithms have shown an outstanding capability to discover complex patterns and extract discriminative features from medical images, providing higher-quality analysis and better quantitative results efficiently and automatically. AI has been a great help to physicians in imaging-related tasks, i.e., disease detection and diagnosis, in achieving more accurate results [134]. Deep learning (DL) [30] is part of a broader family of AI that imitates the way humans learn. DL uses multiple layers to gain knowledge, and the complexity of the learned features increases hierarchically. DL algorithms have been applied in many applications, and in some of them they have outperformed humans. DL algorithms have also been used in various categories of cancer diagnosis using cancer images from different modalities, including detecting cancer cells, cancer type classification, lesion segmentation, etc. To learn more about DL, we refer interested readers to [135].

3.1. Benefits of Using DL for Medical Image Analysis

Comparing the healthcare area with others, it is safe to say that the decision-making process is much more crucial in healthcare systems than in other areas since it directly affects people’s lives. For example, a wrong decision by a physician in diagnosing a disease can lead to the death of a patient. Complex and constrained clinical environments and workflows make the physician’s decision-making very challenging, especially for image-related tasks since they require high visual perception and cognitive ability [136]. In these situations, AI can be a great tool to decrease the false-diagnosis rates by extracting specific and known features from the images or even helping the physician by giving an initial guess for the solution. Nowadays, more and more healthcare providers are encouraged to use AI algorithms due to the availability of computing resources, advancement in image analysis tools, and the great performance shown by AI methods.

3.2. Deep Learning Models for Breast Cancer Detection

This section briefly discusses the deep learning algorithms applied to images from each breast cancer modality.

3.2.1. Digital Mammography and Digital Breast Tomosynthesis (MM-DBT)

With recent technological developments, MM images have also taken more advanced forms, e.g., digital breast tomosynthesis (DBT). Each MM form has been widely used for breast cancer detection and classification. One of the first attempts to use deep learning for MMs was carried out in [137], whose authors used a convolutional neural network (CNN)-based model to learn features from mammography images before feeding them to a support vector machine (SVM) classifier. Their algorithm achieved an AUC of 86% in lesion classification, about 6% better than the best conventional approach before that paper. Following [137], more studies [138,139,140] have also used CNN-based algorithms for lesion classification. However, in these papers, the region of interest was extracted without the help of a deep learning algorithm, i.e., by employing traditional image processing methods [139] or by an expert [140]. More specifically, the authors in [138] first divided MM images into patches, extracted features from the patches using a conventional image-processing algorithm, and then used a random forest classifier to choose good candidate patches for their CNN algorithm. Their approach achieved an AUC of 92.9%, slightly better than the baseline conventional method with an AUC of 91%. With the advancement of DL algorithms and the availability of complex and powerful DL architectures, DL methods have been used to extract ROIs from full MM images. As a result, the input to the algorithm is no longer small patches; the full MM image can be used as input. For example, the proposed method in [131] uses YOLO [141], a well-known algorithm for detection and classification, to simultaneously extract and classify ROIs in the whole image. Their results show that their algorithm performs similarly to a CNN model trained on small patches, with an AUC of 97%.
Figure 7 shows the overall structure of the proposed model in [131].
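The hybrid structure of [137], learned image features feeding a separate SVM classifier, can be illustrated with a minimal sketch. Note the hedging: the learned CNN is replaced here by a fixed, untrained random filter bank (convolution + ReLU + global average pooling), and the images are synthetic; only the feature-extractor-plus-SVM architecture is what the cited work actually shares with this toy.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def conv_features(images, filters):
    """Crude stand-in for CNN features: valid 2D correlation with each
    filter, ReLU, then global average pooling -> one feature per filter."""
    k = filters.shape[-1]
    feats = np.empty((len(images), len(filters)))
    for i, img in enumerate(images):
        windows = np.lib.stride_tricks.sliding_window_view(img, (k, k))
        for j, f in enumerate(filters):
            resp = (windows * f).sum(axis=(-2, -1))     # correlation map
            feats[i, j] = np.maximum(resp, 0.0).mean()  # ReLU + avg pool
    return feats

# Synthetic 16x16 "patches": class 0 is smooth, class 1 has vertical stripes.
smooth = rng.normal(0.0, 0.05, (20, 16, 16))
stripes = smooth + np.tile([0.0, 1.0], 8)[None, None, :]
images = np.concatenate([smooth, stripes])
labels = np.array([0] * 20 + [1] * 20)

filters = rng.normal(size=(8, 3, 3))   # fixed, untrained filter bank
feats = conv_features(images, filters)
svm = SVC().fit(feats, labels)         # separate SVM head, as in [137]
```

In the actual studies, the filter bank is of course trained end-to-end on mammography patches; the point of the sketch is only the division of labor between a convolutional feature extractor and a classical classifier.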
To increase the accuracy of cancer detection, DBT has emerged as a predominant breast-imaging modality. It has been shown that DBT increases the cancer detection rate (CDR) while decreasing the recall rate (RR) compared to FFDM [142,143,144]. Following the same logic, several DL algorithms have been applied to DBT images for cancer detection [145,146,147,148,149]. For instance, the authors in [150] proposed a deep learning model based on the ResNet architecture to classify input images as normal, benign, high-risk, or malignant. They trained the model on an FFDM dataset and then fine-tuned it using 2D reconstructions of DBT images obtained with the 2D maximum intensity projection (MIP) method. Their method achieved an AUC of 84.7% on the DBT dataset. A deep CNN developed in [145] uses DBT volumes to classify masses; it obtained an AUC of 84.7%, about 2% higher than a contemporary CAD method based on hand-crafted features.
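The 2D maximum intensity projection used in [150] to flatten a DBT volume simply keeps the brightest voxel along the slice axis; the array shapes below are illustrative assumptions.

```python
import numpy as np

# A hypothetical DBT volume as a stack of reconstructed slices:
# (num_slices, height, width).
rng = np.random.default_rng(1)
dbt_volume = rng.random((40, 64, 64))

# 2D MIP: for every (row, col) position, keep the maximum across slices,
# collapsing the volume into a single 2D image a standard 2D CNN can consume.
mip_image = dbt_volume.max(axis=0)
print(mip_image.shape)  # (64, 64)
```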
Although deep learning models perform very well in medical image analysis, their major bottleneck is the need for large training datasets, and in the medical field, collecting and labeling data is very expensive. Some studies have used transfer learning to overcome this problem. In [151], the authors developed a two-stage transfer learning approach to classify DBT images as mass or normal. In the first stage, they fine-tuned a pretrained AlexNet [152] using FFDM images; the fine-tuned model was then trained further on DBT images. The CNN from the second stage served as a feature extractor for DBT images, and a random forest classifier labeled the extracted features as mass or normal, yielding an AUC of 90% on their test dataset. In another work [153], the authors used a VGG19 [154] network trained on the ImageNet dataset as a feature extractor for FFDM and DBT images for malignant/benign classification. The extracted features were fed to an SVM classifier to estimate the probability of malignancy. Their method obtained AUCs of 98% and 97% on DBT images in the CC and MLO views, respectively. These studies show that, with a relatively small training dataset and transfer learning techniques, deep learning models can still perform well. Most of the aforementioned studies compare their DL algorithms with traditional CAD methods. However, the best way to evaluate a DL method is to compare it directly with radiologists. For example, the performance of DL systems on FFDM and DBT was investigated in [155]. The study shows that a DL system can achieve sensitivity comparable to radiologists on FFDM images while decreasing the recall rate. On DBT images, an AI system can match radiologists' performance, albeit with an increased recall rate.
Table 3 shows the list of recent DL-based models used for MM and DBT with their performances. The application of DL in breast cancer detection is not limited to mammography images. In the following section, we discuss the DL application in other breast cancer imaging modalities.

3.2.2. Ultrasound (US)

As explained in Section 2, ultrasound performs considerably better in detecting certain cancers and reduces unnecessary biopsy operations [183]. It is therefore not surprising that researchers use this imaging modality in their DL models for cancer detection [184,185,186]. For instance, a GoogleNet [187]-based CNN was trained on suspicious ROIs of US images in [184]; the proposed method achieved an AUC of 96%, 6% higher than a CAD method with hand-crafted features. The authors in [188,189,190] trained CNN models directly on whole US images without extracting ROIs. For example, the authors in [190] combined VGG19 and ResNet152 and trained the ensemble network on US images, achieving an AUC of 95% on a balanced, independent test dataset. Figure 8 shows an example of CNN models for breast cancer subtype classification.
Compared with mammography, there are fewer datasets for US images, and they usually contain far fewer images. Therefore, most proposed DL models use some form of data augmentation, such as rotation, to increase the size of the training data and improve model performance. However, one should be careful about how US images are augmented, since some augmentations may degrade performance. For example, it has been shown in [186] that rotating or shifting US images along the longitudinal direction can negatively affect model performance. Generative adversarial networks (GANs) can also be used to generate synthetic US images with or without tumors [191]; these images can be added to the original training set to improve the model's accuracy.
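The caution about direction-sensitive augmentation can be made concrete with a small sketch: a conservative policy that flips and shifts only laterally, leaving the longitudinal (depth) axis untouched, in line with the observation in [186]. The shift range and flip probability are illustrative assumptions.

```python
import numpy as np

def augment_us(image, rng):
    """Conservative augmentation for a B-mode US image (rows = depth,
    columns = lateral position): horizontal flip and a small lateral
    wrap-around shift only. Rotations and longitudinal shifts are avoided,
    since the depth axis encodes physical attenuation [186]."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]               # flip left-right
    shift = int(rng.integers(-3, 4))     # small lateral shift in pixels
    out = np.roll(out, shift, axis=1)    # wrap-around shift along columns only
    return out

rng = np.random.default_rng(2)
us_image = rng.random((128, 128))
augmented = augment_us(us_image, rng)
print(augmented.shape)  # (128, 128)
```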
US images have also been used for lesion detection, in which, given an image, the CAD system decides whether a lesion is present. One challenge researchers face in this setting with ordinary US images is that a sonographer must manually select the images that contain lesions for the models. This depends on the doctors' availability, is usually expensive and time-consuming, and introduces human error into the system [192]. To address this problem, a method was developed in [193] to detect lesions in real time during US scanning. Another type of US imaging, the 3D automated breast US scan, captures the entire breast [194,195]. The authors in [195] developed a CNN model based on the VGGNet, ResNet [196], and DenseNet [197] networks; their approach obtained an AUC of 97% on their private dataset and 97.11% on the breast ultrasound image (BUSI) dataset [80].
Some methods combine the detection and classification of lesions in US images into one step [198]. An extensive study in [199] compares different DL architectures for US image detection and classification. Their results show that DenseNet is a good candidate for US image classification, providing accuracies of 85% and 87.5% for full-image classification and pre-defined ROIs, respectively. The authors in [200] developed a weakly supervised DL algorithm based on VGG16, ResNet34, and GoogleNet trained on 1000 unannotated US images, reporting an average AUC of 88%.
Some studies validate the performance of DL algorithms [201,202,203] against expert interpretation, showing that DL algorithms can greatly help radiologists, mostly in cases where the lesion has already been detected by an expert and the DL model is used to classify it. However, unlike the mammography studies, most of these studies are not validated by multiple physicians and do not demonstrate the generalizability of their methods across multiple datasets, which should be addressed in future validations. Table 4 lists recent algorithms used for US images and their performances.
Figure 8. Example of a model architecture for breast cancer subtypes classification from US images via CNN models [222].

3.2.3. Magnetic Resonance Imaging (MRI)

As explained in Section 2, MRI has higher sensitivity than MM and US for breast cancer detection in dense breasts [223]. However, a key difference is that MRI is a 3D scan, whereas MM and US produce 2D images. Moreover, MRI sequences are captured over time, increasing the dimensionality to 4D (dynamic contrast-enhanced (DCE)-MRI). This makes MRI more challenging for DL algorithms than MM and US, since most current DL algorithms are built for 2D images. One way to address this challenge is to convert the 3D image to 2D, either by dividing 3D MRIs into 2D slices [224,225] or by using MIP to build a 2D representation [226]. Moreover, most DL architectures have been developed for color images, i.e., images whose third dimension represents the color channels, whereas MRIs are grayscale. Therefore, some MRI models stack three consecutive grayscale slices to form a three-channel image [227,228]. Other approaches modify existing 2D DL architectures to make them suitable for 3D MRI scans [229].
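The slice-stacking trick mentioned above (e.g., [227,228]) can be sketched in a few lines: each grayscale slice is grouped with its two neighbors so that a 2D CNN expecting three input channels can process the volume. The volume shape is an illustrative assumption.

```python
import numpy as np

# A hypothetical grayscale MRI volume: (num_slices, height, width).
rng = np.random.default_rng(3)
volume = rng.random((30, 64, 64))

def three_slice_inputs(vol):
    """Stack each interior slice with its two neighbors into a 3-channel
    image, mimicking the RGB input shape expected by pretrained 2D CNNs."""
    return np.stack([vol[i - 1:i + 2] for i in range(1, vol.shape[0] - 1)])

inputs = three_slice_inputs(volume)
print(inputs.shape)  # (28, 3, 64, 64)
```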
All of the above approaches have been used in DL-based lesion classification. For example, the method in [230] uses 2D slices of the ROIs as input to a CNN model, obtaining an accuracy of 85% on the test dataset. The MIP technique is used in [231], which obtained an AUC of 89.5%. In the study by Zhou et al. [229], the authors stacked grayscale MRI slices into three-channel images for their DL methods, and their algorithm obtained an AUC of 92%. In another study [193], the proposed algorithm uses full 3D MRI scans, obtaining an AUC of 85.9% with a 3D version of DenseNet [197]. It is worth mentioning that the performances of the 2D and 3D approaches cannot be directly compared, since they were evaluated on different datasets. However, some studies compared their proposed methods with radiologists' interpretations [228,229]. Figure 9 shows a schematic of a framework for cancer subtype classification with MRI.
As with MM and US images, DL methods have been widely used for lesion detection and segmentation in MRI. A CNN algorithm based on RetinaNet [232] was developed in [233] to detect lesions in 4D MR scans, obtaining a sensitivity of 95%. One study [234] used a mask-guided hierarchical learning (MHL) framework based on the U-net architecture for breast tumor segmentation, achieving a Dice similarity coefficient (DSC) of 72% for lesion segmentation. In another work [235], the authors proposed a U-net-based CNN model called 3TP U-net for lesion segmentation, obtaining a DSC of 61.24%. Alternatively, the authors in [236] developed a segmentation model by refining the U-net architecture to segment lesions in MRIs, achieving a DSC of 86.5%. It should be noted that most lesion segmentation algorithms require, as ground truth for training, a mask indicating which pixels belong to the breast. These masks help the models focus on the right regions and ignore areas that carry no information. Table 5 lists recent algorithms used for MRI images and their performances.
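Since all the segmentation results above are reported as Dice similarity coefficients, it is worth recalling how the metric is computed; the toy masks below are arbitrary.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two overlapping toy masks: 2 shared pixels, 3 positives in each mask.
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [0, 0, 0, 0]])
score = dice_coefficient(pred, target)
print(round(score, 3))  # 0.667  (= 2*2 / (3+3))
```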
Figure 9. A model architecture for cancer subtypes prediction via ResNet 50 and CNN models from MRI images [237]. Reprinted/adapted with permission from [237]. 2019, Elsevier.

3.2.4. Histopathology

In contrast to other modalities, histopathology images are color images provided either as whole-slide images (WSIs) or as image patches, i.e., ROIs, extracted from the WSIs by pathologists. Histopathology images make it possible to diagnose breast cancer types that cannot be identified with radiology images such as MRIs. Moreover, because of the tissue detail they capture, these images have been used to detect cancer subtypes, and they are widely used with DL algorithms for cancer detection. For example, Ref. [258] employed a CNN-based DL algorithm to classify histopathology images into four classes: normal tissue, benign lesion, in situ carcinoma, and invasive carcinoma. They combined the classification results of all image patches to obtain the final image-wise classification. They also used their model for two-class classification into carcinoma and non-carcinoma, training an SVM on CNN-extracted features to classify the images. Their method obtained accuracies of 77.8% for four-class classification and 83.3% for binary classification. In another work [259], two CNN models were developed, one for predicting malignancy and the other for predicting malignancy and image magnification level simultaneously. They used images of size 700 × 460 with different magnification levels, and their average benign/malignant binary classification accuracy was 83.25%. A novel framework proposed in [260] uses a hybrid attention-based mechanism to classify histopathology images; the attention mechanism automatically finds the informative regions in raw images.
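The patch-to-image fusion step described for [258] can be illustrated with a simple majority vote over patch predictions. This is one common fusion rule, not necessarily the exact rule used in [258], and breaking ties toward the more severe class is an assumption made here for illustration.

```python
from collections import Counter

# Four-class scheme from [258], ordered from least to most severe.
CLASSES = ["normal", "benign", "in situ", "invasive"]

def image_label(patch_predictions):
    """Fuse patch-level predictions into one image-level label by majority
    vote, breaking ties in favor of the more severe class."""
    counts = Counter(patch_predictions)
    return max(counts, key=lambda c: (counts[c], CLASSES.index(c)))

patches = ["benign", "invasive", "invasive", "normal", "invasive", "benign"]
print(image_label(patches))  # invasive
```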
Transfer learning has also been employed for histopathology images, since histopathology datasets rarely contain the large amounts of data deep learning models require. For example, the method developed in [261] fine-tunes pretrained Inception-V3 [187] and Inception-ResNet-V2 [262] models for both binary and multiclass classification of histology images, obtaining accuracies of 97.9% for binary classification and 92.07% for the multiclass task. In another work [263], the authors developed a framework for classifying malignant and benign cells that extracts features from images using GoogleNet, VGGNet, and ResNet and then combines these features for classification; their framework obtained an average accuracy of 97%. The authors in [264] used a fine-tuned GoogleNet to extract features from small patches of pathological images; the extracted features were fed to a bidirectional long short-term memory (LSTM) layer for classification, obtaining an accuracy of 91.3%. Figure 10 shows an overview of the method proposed in [264]. GANs have also been combined with transfer learning to further increase classification accuracy. In the work carried out in [265], StyleGAN [266] and Pix2Pix [267] were used to generate synthetic images, and VGG-16 and VGG-19 were then fine-tuned to classify images. Their proposed method achieved an accuracy of 98.1% in binary classification.
Histopathology images have been widely used for nuclei detection and segmentation. For instance, the work presented in [268] developed a novel framework called HASHI that automatically detects invasive breast cancer in whole-slide images, obtaining a Dice coefficient of 76% on an independent test dataset. In another work on nuclei detection [269], a series of handcrafted features and CNN-extracted features were combined for better detection; using three different datasets, the method obtained an F-score of 90%. The authors in [270] presented a fully automated workflow for nuclei segmentation in histopathology images based on deep learning and morphological properties extracted from the images, achieving an accuracy of 95.4% and an F1-score of 80.5%. In another work [271], the authors first extracted small patches from the high-resolution whole slides, segmented each patch using a CNN with an encoder-decoder, and finally merged the local segmentation results using an improved strategy based on a fully connected conditional random field. Their algorithm obtained a segmentation accuracy of 95.6%. Table 6 shows the performance of recently developed DL methods on histology images.

3.2.5. Positron Emission Tomography (PET)/Computed Tomography (CT)

PET/CT is a nuclear medicine imaging technique that improves the detection and classification of axillary lymph node involvement and distant staging [272]. However, it struggles to detect early-stage breast cancer, so it is not surprising that PET/CT is rarely used with DL algorithms. Nevertheless, PET/CT has some important applications to which DL algorithms can be applied. For example, as discussed in [273], breast cancer is among the most common causes of bone metastasis. A CNN-based algorithm was developed in [274] to detect breast cancer metastasis on whole-body scintigraphy scans, obtaining 92.5% accuracy in the binary classification of whole-body scans.
In another application, PET/CT can be used to quantify the whole-body metabolic tumor volume (MTV), reducing the labor and cost of obtaining MTV manually. For example, in the work presented in [275], a model trained on the MTV of lymphoma and lung cancer patients was used to detect lesions in PET/CT scans of breast cancer patients; the algorithm detected 92% of the measurable lesions.
Table 6. The summary of the studies that used histopathology datasets.
Study | Year | Task | Model(s) | Dataset | Performance
Zainudin et al. [276] | 2019 | Breast cancer cell detection/classification | CNN | MITOS | Acc = 84.5%; TP = 80.55%; FP = 11.6%
Li et al. [277] | 2019 | Breast cancer cell detection/classification | Deep cascade CNN | MITOSIS | F-scores = 56.2%, 67.3%, 66.9%
Das et al. [278] | 2019 | Breast cancer cell detection/classification | CNN | MITOS | F1-scores = 84.05%, 59.76%
Gour et al. [279] | 2020 | Classification | CNN | BreakHis | Acc = 92.52%; F1-score = 93.45%
Saxena et al. [280] | 2020 | Classification | CNN | BreakHis | Avg. Acc = 88%
Hirra et al. [281] | 2021 | Classification | DBN | DRYAD | Acc = 86%
Senan et al. [282] | 2021 | Classification | CNN | BreakHis | Acc = 95%; AUC = 99.36%
Zewdie et al. [283] | 2021 | Classification | CNN | Private | Binary Acc = 96.75%; grade classification Acc = 93.86%
Kushwaha et al. [284] | 2021 | Classification | CNN | BreakHis | Acc = 97%
Gheshlaghi et al. [285] | 2021 | Classification | Auxiliary classifier GAN | BreakHis | Binary Acc = 90.15%; subtype classification Acc = 86.33%
Reshma et al. [286] | 2022 | Classification | Genetic algorithm with CNN | BreakHis | Acc = 89.13%
Joseph et al. [287] | 2022 | Classification | CNN | BreakHis | Avg. multiclass Acc = 97%
Ahmad et al. [288] | 2022 | Classification | CNN | BreakHis | Avg. binary Acc = 99%; avg. multiclass Acc = 95%
Mathew et al. [289] | 2022 | Breast cancer cell detection/classification | CNN | ATYPIA | F1-score = 61.91%
Singh and Kumar [290] | 2022 | Classification | Inception ResNet | BHI | Acc = 85.21%; avg. Acc = 84%
Mejbri et al. [291] | 2019 | Tissue-level segmentation | DNNs (U-Net, SegNet, FCN, DeepLab) | Private | Dice = 86% (U-Net), 87% (SegNet), 86% (FCN), 86% (DeepLab)
Guo et al. [292] | 2019 | Cancer region segmentation | Transfer learning based on Inception-V3 and ResNet-101 | Camelyon16 | IOU = 80.4%; AUC = 96.2%
Priego-Torres et al. [271] | 2020 | Tumor segmentation | CNN | Private | Acc = 95.62%; IOU = 92.52%
Budginaitė et al. [293] | 2021 | Cell nuclei segmentation | Micro-Net | Private | Dice = 81%
Pedersen et al. [294] | 2022 | Tumor segmentation | CNN | Norwegian cohort [295] | Dice = 93.3%
Khalil et al. [296] | 2022 | Lymph node segmentation | CNN | Private | F1-score = 84.4%; IOU = 74.9%

4. Discussion

Breast cancer is a leading cause of mortality among women worldwide, and detecting it at an early stage is essential to reducing mortality. Recently, many imaging modalities have been used to provide more detailed insights into breast cancer. However, manually analyzing the huge number of images these modalities produce is difficult and time-consuming, leading to inaccurate diagnoses and increased false-detection rates. An automated approach is therefore needed. The most effective and reliable approach for medical image analysis is CAD. CAD systems are designed to help physicians reduce errors in analyzing medical images: a CAD system highlights suspicious features in images (e.g., masses) and helps radiologists reduce false-negative readings. However, CAD systems usually mark more false features than true ones, and it is the radiologist's responsibility to evaluate the results; this increases reading time and limits the number of cases radiologists can evaluate. Recently, advances in AI, especially DL-based methods, have effectively sped up image analysis and helped radiologists in early breast cancer diagnosis.
Considering the importance of DL-based CAD systems for breast cancer detection and diagnosis, in this paper we have discussed the applications of different DL algorithms in breast cancer detection. We first reviewed the imaging modalities used for breast cancer screening and diagnosis, discussed the advantages and limitations of each, and summarized the publicly available datasets for each modality with links. We then reviewed recent DL algorithms for breast imaging analysis along with details of their datasets and results. The reviewed studies present promising results for DL-based CAD systems. However, DL-based CAD tools still face many challenges that keep them from clinical usage. Below, we discuss some of these challenges as well as future directions for cancer detection studies.
One of the main obstacles to building a robust DL-based CAD tool is the cost of collecting medical images. The images used for DL algorithms should be reliably annotated and drawn from different patients. Collecting sufficient abnormal data is particularly costly because abnormal cases are far rarer than normal ones (e.g., a few abnormal cases per thousand patients in the breast cancer screening population). Data collection also depends on the number of patients who undergo a specific examination and on the availability of equipment and protocols in different clinical settings. For example, MM datasets are usually very large, including thousands of patients, whereas MRI or PET/CT datasets contain far fewer patients. Because large public MM datasets exist, many more DL algorithms have been developed and validated for MM than for other modalities. One way to create a large dataset for different imaging modalities is multi-institutional collaboration: datasets obtained this way cover a large group of patients with different characteristics, imaging equipment, and clinical settings and protocols, making DL algorithms more robust and reliable.
Currently available medical image datasets usually contain a small amount of data, and exploiting DL's capabilities on small training sets is challenging, because DL algorithms need large datasets to perform well. Several approaches can help overcome the problems of small datasets. For example, datasets from different medical centers can be combined into a bigger one, although patient privacy policies must usually be addressed. Another solution is federated learning [297], in which the model is trained locally on the dataset at each center and only model updates are shared between centers, so patient data never leave their institutions. Federated learning is not yet widely adopted, and in most cases the training data cannot be publicly shared, leaving no way to evaluate the DL methods and reproduce the reported results. Many studies used transfer learning to overcome the problem of small datasets: some used a pretrained model to extract features from medical images and then trained a model on these features for the target task, while others initialized their model with pretrained weights and fine-tuned it on the medical image datasets. Although transfer learning shows some improvement on small datasets, the performance of the target model depends strongly on the difference between the characteristics of the source and target datasets; a negative transfer [298] may occur, in which the source domain reduces learning performance in the target domain. Some studies used data augmentation instead of transfer learning to artificially increase the dataset size and improve model performance. One should note, however, that augmented data do not introduce independent features to the model; they provide much less new knowledge than new, independent images would.
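As a sketch of the federated idea, federated averaging (FedAvg) aggregates locally trained parameters weighted by each center's dataset size; the three "hospitals" and their parameter vectors below are entirely hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: a weighted average of parameters trained
    locally at each center, so raw patient data never leave the centers."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy per-center "models" represented as flat parameter vectors.
w_a = np.array([1.0, 2.0])   # hypothetical center A, 100 patients
w_b = np.array([3.0, 4.0])   # hypothetical center B, 100 patients
w_c = np.array([5.0, 6.0])   # hypothetical center C, 200 patients
global_w = fedavg([w_a, w_b, w_c], client_sizes=[100, 100, 200])
print(global_w)  # [3.5 4.5]
```

In a real deployment, each center would run several local training epochs between aggregation rounds and share only these parameter updates.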
The shortage of datasets with comprehensive, fully labeled/annotated data is another challenge for DL-based CAD systems. Most DL methods are supervised and need fully labeled/annotated datasets. However, creating a large, fully annotated dataset is very challenging, since annotating medical images is time-consuming and prone to human error. To avoid the need for annotated datasets, some papers used unsupervised algorithms, but these obtained less accurate results than supervised ones.
Another important challenge is the generalizability of DL algorithms. Most proposed approaches work on datasets with specific imaging characteristics and cannot be used on datasets obtained from different populations, clinical settings, or imaging equipment and protocols. This is an obstacle to the wide use of AI methods for cancer detection in medical centers. Before any clinical usage, each health clinic should design and conduct a testing protocol for DL-based CAD systems using data from its local patient population. During the testing period, users should identify the weaknesses and strengths of the system from its outputs on different input cases, learn the characteristics of failed and correct outputs, and recognize when the system makes mistakes and when it works well. This testing procedure not only evaluates DL-based CAD models but also teaches users the best way to use them.
Another limitation is the interpretability of DL algorithms. Most DL algorithms act as black boxes: there is no suitable explanation for their decisions, and feature selection happens implicitly during training. Radiologists generally do not favor uninterpretable DL algorithms because they need to understand the physical meaning of the decisions and which parts of the images are highly discriminative. Recently, approaches such as DeepSHAP [299] have been introduced to provide interpretable models that give more insight into the decision-making of DL algorithms in medical image analysis. Therefore, to increase physicians' confidence and the reliability of decisions made by DL tools, interpretable approaches and proper explanations of DL algorithms are required, which would help DL technology become widely used in clinical care applications such as breast cancer analysis.
DL algorithms show outstanding performance in analyzing imaging data, but as discussed, they still face many challenges. Beyond imaging, some studies show that using omics data instead may lead to higher classification accuracy [108,300]: omics data contain fewer but more effective features than imaging data, whereas DL methods may extract image features that are irrelevant to the final label and thereby degrade model performance. On the other hand, processing omics data is more expensive than image processing, many more algorithms are available for image processing than for omics processing, and far more imaging data are available than omics data.

5. Conclusions

Detecting cancer at an early stage can improve survival rates and reduce mortality. Rapid developments in deep learning-based medical image analysis, along with the availability of large datasets and computational resources, have made it possible to improve breast cancer detection, diagnosis, prognosis, and treatment. Owing to their capabilities, deep learning algorithms, particularly CNNs, have become very popular in the research community. In this review, we provided comprehensive detail on the most recently employed deep learning methods for different imaging modalities and different applications (e.g., classification and segmentation). Despite their outstanding performance, deep learning methods still face many challenges that must be addressed before they can influence clinical practice. Beyond these challenges, ethical issues related to the explainability and interpretability of these systems must be considered before deep learning can reach its full potential in clinical breast cancer imaging. It is therefore the responsibility of the research community to make deep learning algorithms fully explainable before considering them as decision-making candidates in clinical practice.

Author Contributions

Data curation, M.M. and M.M.B.; writing—original draft preparation, M.M. and M.M.B.; writing—review and editing, M.M., M.M.B. and S.N.; supervision, S.N. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.


References

1. Zhou, X.; Li, C.; Rahaman, M.M.; Yao, Y.; Ai, S.; Sun, C.; Wang, Q.; Zhang, Y.; Li, M.; Li, X.; et al. A comprehensive review for breast histopathology image analysis using classical and deep neural networks. IEEE Access 2020, 8, 90931–90956.
2. Global Burden of 87 Risk Factors in 204 Countries and Territories, 1990–2019: A Systematic Analysis for the Global Burden of Disease Study 2019—ScienceDirect. Available online: (accessed on 21 July 2022).
3. Anastasiadi, Z.; Lianos, G.D.; Ignatiadou, E.; Harissis, H.V.; Mitsis, M. Breast cancer in young women: An overview. Updat. Surg. 2017, 69, 313–317.
4. Chiao, J.-Y.; Chen, K.-Y.; Liao, K.Y.-K.; Hsieh, P.-H.; Zhang, G.; Huang, T.-C. Detection and classification the breast tumors using mask R-CNN on sonograms. Medicine 2019, 98, e15200.
5. Cruz-Roa, A. Accurate and reproducible invasive breast cancer detection in whole-slide images: A Deep Learning approach for quantifying tumour extent. Sci. Rep. 2017, 7, 46450.
6. Richie, R.C.; Swanson, J.O. Breast cancer: A review of the literature. J. Insur. Med. 2003, 35, 85–101.
7. Youlden, D.R.; Cramb, S.M.; Dunn, N.A.M.; Muller, J.M.; Pyke, C.M.; Baade, P.D. The descriptive epidemiology of female breast cancer: An international comparison of screening, incidence, survival and mortality. Cancer Epidemiol. 2012, 36, 237–248.
8. Moghbel, M.; Ooi, C.Y.; Ismail, N.; Hau, Y.W.; Memari, N. A review of breast boundary and pectoral muscle segmentation methods in computer-aided detection/diagnosis of breast mammography. Artif. Intell. Rev. 2019, 53, 1873–1918.
9. Moghbel, M.; Mashohor, S. A review of computer assisted detection/diagnosis (CAD) in breast thermography for breast cancer detection. Artif. Intell. Rev. 2013, 39, 305–313.
10. Murtaza, G.; Shuib, L.; Wahab, A.W.A.; Mujtaba, G.; Nweke, H.F.; Al-Garadi, M.A.; Zulfiqar, F.; Raza, G.; Azmi, N.A. Deep learning-based breast cancer classification through medical imaging modalities: State of the art and research challenges. Artif. Intell. Rev. 2019, 53, 1655–1720.
11. Domingues, I.; Pereira, G.; Martins, P.; Duarte, H.; Santos, J.; Abreu, P.H. Using deep learning techniques in medical imaging: A systematic review of applications on CT and PET. Artif. Intell. Rev. 2019, 53, 4093–4160.
12. Kozegar, E.; Soryani, M.; Behnam, H.; Salamati, M.; Tan, T. Computer aided detection in automated 3-D breast ultrasound images: A survey. Artif. Intell. Rev. 2019, 53, 1919–1941.
13. Saha, M.; Chakraborty, C.; Racoceanu, D. Efficient deep learning model for mitosis detection using breast histopathology images. Comput. Med. Imaging Graph. 2018, 64, 29–40.
14. Suh, Y.J.; Jung, J.; Cho, B.-J. Automated Breast Cancer Detection in Digital Mammograms of Various Densities via Deep Learning. J. Pers. Med. 2020, 10, 211.
15. Cheng, H.D.; Shi, X.J.; Min, R.; Hu, L.M.; Cai, X.P.; Du, H.N. Approaches for automated detection and classification of masses in mammograms. Pattern Recognit. 2006, 39, 646–668.
16. Van Ourti, T.; O'Donnell, O.; Koç, H.; Fracheboud, J.; de Koning, H.J. Effect of screening mammography on breast cancer mortality: Quasi-experimental evidence from rollout of the Dutch population-based program with 17-year follow-up of a cohort. Int. J. Cancer 2019, 146, 2201–2208.
17. Sutanto, D.H.; Ghani, M.K.A. A Benchmark of Classification Framework for Non-Communicable Disease Prediction: A Review. ARPN J. Eng. Appl. Sci. 2015, 10, 15.
18. Van Luijt, P.A.; Heijnsdijk, E.A.M.; Fracheboud, J.; Overbeek, L.I.H.; Broeders, M.J.M.; Wesseling, J.; Heeten, G.J.D.; de Koning, H.J. The distribution of ductal carcinoma in situ (DCIS) grade in 4232 women and its impact on overdiagnosis in breast cancer screening. Breast Cancer Res. 2016, 18, 47.
19. Baines, C.J.; Miller, A.B.; Wall, C.; McFarlane, D.V.; Simor, I.S.; Jong, R.; Shapiro, B.J.; Audet, L.; Petitclerc, M.; Ouimet-Oliva, D. Sensitivity and specificity of first screen mammography in the Canadian National Breast Screening Study: A preliminary report from five centers. Radiology 1986, 160, 295–298.
20. Houssami, N.; Macaskill, P.; Bernardi, D.; Caumo, F.; Pellegrini, M.; Brunelli, S.; Tuttobene, P.; Bricolo, P.; Fantò, C.; Valentini, M. Breast screening using 2D-mammography or integrating digital breast tomosynthesis (3D-mammography) for single-reading or double-reading–evidence to guide future screening strategies. Eur. J. Cancer 2014, 50, 1799–1807.
  20. Houssami, N.; Macaskill, P.; Bernardi, D.; Caumo, F.; Pellegrini, M.; Brunelli, S.; Tuttobene, P.; Bricolo, P.; Fantò, C.; Valentini, M. Breast screening using 2D-mammography or integrating digital breast tomosynthesis (3D-mammography) for single-reading or double-reading–evidence to guide future screening strategies. Eur. J. Cancer 2014, 50, 1799–1807. [Google Scholar] [CrossRef] [Green Version]
  21. Houssami, N.; Hunter, K. The epidemiology, radiology and biological characteristics of interval breast cancers in population mammography screening. NPJ Breast Cancer 2017, 3, 12. [Google Scholar] [CrossRef]
  22. Massafra, R.; Comes, M.C.; Bove, S.; Didonna, V.; Diotaiuti, S.; Giotta, F.; Latorre, A.; La Forgia, D.; Nardone, A.; Pomarico, D.; et al. A machine learning ensemble approach for 5-and 10-year breast cancer invasive disease event classification. PLoS ONE 2022, 17, e0274691. [Google Scholar] [CrossRef] [PubMed]
  23. Chan, H.P.; Samala, R.K.; Hadjiiski, L.M. CAD and AI for breast cancer—Recent development and challenges. Br. J. Radiol. 2019, 93, 20190580. [Google Scholar] [CrossRef] [PubMed]
  24. Jannesari, M.; Habibzadeh, M.; Aboulkheyr, H.; Khosravi, P.; Elemento, O.; Totonchi, M.; Hajirasouliha, I. Breast Cancer Histopathological Image Classification: A Deep Learning Approach. In Proceedings of the 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, 3–6 December 2018; pp. 2405–2412. [Google Scholar] [CrossRef]
  25. Rodriguez-Ruiz, A.; Lång, K.; Gubern-Merida, A.; Broeders, M.; Gennaro, G.; Clauser, P.; Helbich, T.H.; Chevalier, M.; Tan, T.; Mertelmeier, T.; et al. Stand-alone artificial intelligence for breast cancer detection in mammography: Comparison with 101 radiologists. JNCI J. Natl. Cancer Inst. 2019, 111, 916–922. [Google Scholar] [CrossRef] [PubMed]
  26. McKinney, S.M.; Sieniek, M.; Godbole, V.; Godwin, J.; Antropova, N.; Ashrafian, H.; Back, T.; Chesus, M.; Corrado, G.S.; Darzi, A.; et al. International evaluation of an AI system for breast cancer screening. Nature 2020, 577, 89–94. [Google Scholar] [CrossRef] [PubMed]
  27. Obermeyer, Z.; Emanuel, E.J. Predicting the Future—Big Data, Machine Learning, and Clinical Medicine. N. Engl. J. Med. 2016, 375, 1216–1219. [Google Scholar] [CrossRef] [Green Version]
  28. Geirhos, R.; Jacobsen, J.H.; Michaelis, C.; Zemel, R.; Brendel, W.; Bethge, M.; Wichmann, F.A. Shortcut learning in deep neural networks. Nat. Mach. Intell. 2020, 2, 665–673. [Google Scholar] [CrossRef]
  29. Freeman, K.; Geppert, J.; Stinton, C.; Todkill, D.; Johnson, S.; Clarke, A.; Taylor-Phillips, S. Use of artificial intelligence for image analysis in breast cancer screening programmes: Systematic review of test accuracy. BMJ 2021, 374. [Google Scholar] [CrossRef]
  30. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  31. Burt, J.R.; Torosdagli, N.; Khosravan, N.; RaviPrakash, H.; Mortazi, A.; Tissavirasingham, F.; Hussein, S.; Bagci, U. Deep learning beyond cats and dogs: Recent advances in diagnosing breast cancer with deep neural networks. Br. J. Radiol. 2018, 91, 20170545. [Google Scholar] [CrossRef]
  32. Sharma, S.; Mehra, R. Conventional Machine Learning and Deep Learning Approach for Multi-Classification of Breast Cancer Histopathology Images—A Comparative Insight. J. Digit. Imaging 2020, 33, 632–654. [Google Scholar] [CrossRef]
  33. Hadadi, I.; Rae, W.; Clarke, J.; McEntee, M.; Ekpo, E. Diagnostic performance of adjunctive imaging modalities compared to mammography alone in women with non-dense and dense breasts: A systematic review and meta-analysis. Clin. Breast Cancer 2021, 21, 278–291. [Google Scholar] [CrossRef] [PubMed]
  34. Yassin, N.I.R.; Omran, S.; el Houby, E.M.F.; Allam, H. Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: A systematic review. Comput. Methods Programs Biomed. 2018, 156, 25–45. [Google Scholar] [CrossRef] [PubMed]
  35. Saslow, D.; Boetes, C.; Burke, W.; Harms, S.; Leach, M.O.; Lehman, C.D.; Morris, E.; Pisano, E.; Schnall, M.; Sener, S.; et al. American Cancer Society guidelines for breast screening with MRI as an adjunct to mammography. CA A Cancer J. Clin. 2007, 57, 75–89. [Google Scholar]
  36. Park, J.; Chae, E.Y.; Cha, J.H.; Shin, H.J.; Choi, W.J.; Choi, Y.W.; Kim, H.H. Comparison of mammography, digital breast tomosynthesis, automated breast ultrasound, magnetic resonance imaging in evaluation of residual tumor after neoadjuvant chemotherapy. Eur. J. Radiol. 2018, 108, 261–268. [Google Scholar] [CrossRef] [PubMed]
  37. Huang, S.; Houssami, N.; Brennan, M.; Nickel, B. The impact of mandatory mammographic breast density notification on supplemental screening practice in the United States: A systematic review. Breast Cancer Res. Treat. 2021, 187, 11–30. [Google Scholar] [CrossRef] [PubMed]
  38. Cho, N.; Han, W.; Han, B.K.; Bae, M.S.; Ko, E.S.; Nam, S.J.; Chae, E.Y.; Lee, J.W.; Kim, S.H.; Kang, B.J.; et al. Breast cancer screening with mammography plus ultrasonography or magnetic resonance imaging in women 50 years or younger at diagnosis and treated with breast conservation therapy. JAMA Oncol. 2017, 3, 1495–1502. [Google Scholar] [CrossRef]
  39. Arevalo, J.; Gonzalez, F.A.; Ramos-Pollan, R.; Oliveira, J.L.; Lopez, M.A.G. Convolutional neural networks for mammography mass lesion classification. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 797–800. [Google Scholar] [CrossRef]
  40. Duraisamy, S.; Emperumal, S. Computer-aided mammogram diagnosis system using deep learning convolutional fully complex-valued relaxation neural network classifier. IET Comput. Vis. 2017, 11, 656–662. [Google Scholar] [CrossRef]
  41. Khan, M.H.-M. Automated breast cancer diagnosis using artificial neural network (ANN). In Proceedings of the 2017 3rd Iranian Conference on Intelligent Systems and Signal Processing (ICSPIS), Shahrood, Iran, 20–21 December 2017; pp. 54–58. [Google Scholar] [CrossRef]
  42. Hadad, O.; Bakalo, R.; Ben-Ari, R.; Hashoul, S.; Amit, G. Classification of breast lesions using cross-modal deep learning. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; pp. 109–112. [Google Scholar] [CrossRef]
  43. Kim, D.H.; Kim, S.T.; Ro, Y.M. Latent feature representation with 3-D multi-view deep convolutional neural network for bilateral analysis in digital breast tomosynthesis. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 927–931. [Google Scholar] [CrossRef]
  44. Comstock, C.E.; Gatsonis, C.; Newstead, G.M.; Snyder, B.S.; Gareen, I.F.; Bergin, J.T.; Rahbar, H.; Sung, J.S.; Jacobs, C.; Harvey, J.A.; et al. Comparison of Abbreviated Breast MRI vs Digital Breast Tomosynthesis for Breast Cancer Detection Among Women with Dense Breasts Undergoing Screening. JAMA 2020, 323, 746–756. [Google Scholar] [CrossRef]
  45. Debelee, T.G.; Schwenker, F.; Ibenthal, A.; Yohannes, D. Survey of deep learning in breast cancer image analysis. Evol. Syst. 2019, 11, 143–163. [Google Scholar] [CrossRef]
  46. Screening for Breast Cancer—ClinicalKey. Available online:!/content/book/3-s2.0-B9780323640596001237 (accessed on 27 July 2022).
  47. Chen, T.H.H.; Yen, A.M.F.; Fann, J.C.Y.; Gordon, P.; Chen, S.L.S.; Chiu, S.Y.H.; Hsu, C.Y.; Chang, K.J.; Lee, W.C.; Yeoh, K.G.; et al. Clarifying the debate on population-based screening for breast cancer with mammography: A systematic review of randomized controlled trials on mammography with Bayesian meta-analysis and causal model. Medicine 2017, 96, e5684. [Google Scholar] [CrossRef]
  48. Vieira, R.A.d.; Biller, G.; Uemura, G.; Ruiz, C.A.; Curado, M.P. Breast cancer screening in developing countries. Clinics 2017, 72, 244–253. [Google Scholar] [CrossRef]
  49. Abdelrahman, L.; al Ghamdi, M.; Collado-Mesa, F.; Abdel-Mottaleb, M. Convolutional neural networks for breast cancer detection in mammography: A survey. Comput. Biol. Med. 2021, 131, 104248. [Google Scholar] [CrossRef] [PubMed]
  50. Hooley, R.J.; Durand, M.A.; Philpotts, L.E. Advances in Digital Breast Tomosynthesis. Am. J. Roentgenol. 2017, 208, 256–266. [Google Scholar] [CrossRef] [PubMed]
  51. Gur, D.; Abrams, G.S.; Chough, D.M.; Ganott, M.A.; Hakim, C.M.; Perrin, R.L.; Rathfon, G.Y.; Sumkin, J.H.; Zuley, M.L.; Bandos, A.I. Digital breast tomosynthesis: Observer performance study. Am. J. Roentgenol. 2009, 193, 586–591. [Google Scholar]
  52. Østerås, B.H.; Martinsen, A.C.T.; Gullien, R.; Skaane, P. Digital Mammography versus Breast Tomosynthesis: Impact of Breast Density on Diagnostic Performance in Population-based Screening. Radiology 2019, 293, 60–68. [Google Scholar] [CrossRef]
  53. Zhang, J.; Ghate, S.V.; Grimm, L.J.; Saha, A.; Cain, E.H.; Zhu, Z.; Mazurowski, M.A. February. Convolutional encoder-decoder for breast mass segmentation in digital breast tomosynthesis. In Medical Imaging 2018: Computer-Aided Diagnosis; SPIE: Bellingham, WA, USA, 2018; Volume 10575, pp. 639–644. [Google Scholar]
  54. Poplack, S.P.; Tosteson, T.D.; Kogel, C.A.; Nagy, H.M. Digital breast tomosynthesis: Initial experience in 98 women with abnormal digital screening mammography. AJR Am. J. Roentgenol. 2007, 189, 616–623. [Google Scholar] [CrossRef]
  55. Mun, H.S.; Kim, H.H.; Shin, H.J.; Cha, J.H.; Ruppel, P.L.; Oh, H.Y.; Chae, E.Y. Assessment of extent of breast cancer: Comparison between digital breast tomosynthesis and full-field digital mammography. Clin. Radiol. 2013, 68, 1254–1259. [Google Scholar] [CrossRef]
  56. Lourenco, A.P.; Barry-Brooks, M.; Baird, G.L.; Tuttle, A.; Mainiero, M.B. Changes in recall type and patient treatment following implementation of screening digital breast tomosynthesis. Radiology 2015, 274, 337–342. [Google Scholar] [CrossRef]
  57. Heywang-Köbrunner, S.H.; Jänsch, A.; Hacker, A.; Weinand, S.; Vogelmann, T. Digital breast tomosynthesis (DBT) plus synthesised two-dimensional mammography (s2D) in breast cancer screening is associated with higher cancer detection and lower recalls compared to digital mammography (DM) alone: Results of a systematic review and meta-analysis. Eur. Radiol. 2021, 32, 2301–2312. [Google Scholar]
  58. Alabousi, M.; Wadera, A.; Kashif Al-Ghita, M.; Kashef Al-Ghetaa, R.; Salameh, J.P.; Pozdnyakov, A.; Zha, N.; Samoilov, L.; Dehmoobad Sharifabadi, A.; Sadeghirad, B. Performance of digital breast tomosynthesis, synthetic mammography, and digital mammography in breast cancer screening: A systematic review and meta-analysis. JNCI J. Natl. Cancer Inst. 2020, 113, 680–690. [Google Scholar] [CrossRef]
  59. Durand, M.A.; Friedewald, S.M.; Plecha, D.M.; Copit, D.S.; Barke, L.D.; Rose, S.L.; Hayes, M.K.; Greer, L.N.; Dabbous, F.M.; Conant, E.F. False-negative rates of breast cancer screening with and without digital breast tomosynthesis. Radiology 2021, 298, 296–305. [Google Scholar] [CrossRef] [PubMed]
  60. Alsheik, N.; Blount, L.; Qiong, Q.; Talley, M.; Pohlman, S.; Troeger, K.; Abbey, G.; Mango, V.L.; Pollack, E.; Chong, A.; et al. Outcomes by race in breast cancer screening with digital breast tomosynthesis versus digital mammography. J. Am. Coll. Radiol. 2021, 18, 906–918. [Google Scholar] [CrossRef] [PubMed]
  61. Boisselier, A.; Mandoul, C.; Monsonis, B.; Delebecq, J.; Millet, I.; Pages, E.; Taourel, P. Reader performances in breast lesion characterization via DBT: One or two views and which view? Eur. J. Radiol. 2021, 142, 109880. [Google Scholar] [CrossRef] [PubMed]
  62. Fiorica, J.V. Breast Cancer Screening, Mammography, and Other Modalities. Clin. Obstet. Gynecol. 2016, 59, 688–709. [Google Scholar] [CrossRef] [PubMed]
  63. Jesneck, J.L.; Lo, J.Y.; Baker, J.A. Breast Mass Lesions: Computer-aided Diagnosis Models with Mammographic and Sonographic Descriptors. Radiology 2007, 244, 390–398. [Google Scholar] [CrossRef]
  64. Cheng, H.D.; Shan, J.; Ju, W.; Guo, Y.; Zhang, L. Automated breast cancer detection and classification using ultrasound images: A survey. Pattern Recognit. 2010, 43, 299–317. [Google Scholar] [CrossRef] [Green Version]
  65. Maxim, L.D.; Niebo, R.; Utell, M.J. Screening tests: A review with examples. Inhal. Toxicol. 2014, 26, 811–828. [Google Scholar] [CrossRef]
  66. Han, J.; Li, F.; Peng, C.; Huang, Y.; Lin, Q.; Liu, Y.; Cao, L.; Zhou, J. Reducing unnecessary biopsy of breast lesions: Preliminary results with combination of strain and shear-wave elastography. Ultrasound Med. Biol. 2019, 45, 2317–2327. [Google Scholar] [CrossRef]
  67. Zhi, H.; Ou, B.; Luo, B.-M.; Feng, X.; Wen, Y.-L.; Yang, H.-Y. Comparison of Ultrasound Elastography, Mammography, and Sonography in the Diagnosis of Solid Breast Lesions. J. Ultrasound Med. 2007, 26, 807–815. [Google Scholar] [CrossRef]
  68. Corsetti, V.; Houssami, N.; Ghirardi, M.; Ferrari, A.; Speziani, M.; Bellarosa, S.; Remida, G.; Gasparotti, C.; Galligioni, E.; Ciatto, S. Evidence of the effect of adjunct ultrasound screening in women with mammography-negative dense breasts: Interval breast cancers at 1 year follow-up. Eur. J. Cancer 2011, 47, 1021–1026. [Google Scholar] [CrossRef]
  69. Shin, S.Y.; Lee, S.; Yun, I.D.; Kim, S.M.; Lee, K.M. Joint Weakly and Semi-Supervised Deep Learning for Localization and Classification of Masses in Breast Ultrasound Images. IEEE Trans. Med. Imaging 2018, 38, 762–774. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  70. Becker, A.S.; Mueller, M.; Stoffel, E.; Marcon, M.; Ghafoor, S.; Boss, A. Classification of breast cancer in ultrasound imaging using a generic deep learning analysis software: A pilot study. Br. J. Radiol. 2017, 91, 20170576. [Google Scholar] [CrossRef] [PubMed]
  71. Youk, J.H.; Gweon, H.M.; Son, E.J. Shear-wave elastography in breast ultrasonography: The state of the art. Ultrasonography 2017, 36, 300–309. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  72. MARIBS study group. Screening with magnetic resonance imaging and mammography of a UK population at high familial risk of breast cancer: A prospective multicentre cohort study (MARIBS). Lancet 2005, 365, 1769–1778. [Google Scholar] [CrossRef]
  73. Kelly, K.; Dean, J.; Comulada, W.; Lee, S.-J. Breast cancer detection using automated whole breast ultrasound and mammography in radiographically dense breasts. Eur. Radiol. 2009, 20, 734–742. [Google Scholar] [CrossRef] [Green Version]
  74. Makanjuola, D.I.; Alkushi, A.; al Anazi, K. Defining radiologic complete response using a correlation of presurgical ultrasound and mammographic localization findings with pathological complete response following neoadjuvant chemotherapy in breast cancer. Eur. J. Radiol. 2020, 130, 109146. [Google Scholar] [CrossRef]
  75. Bove, S.; Comes, M.C.; Lorusso, V.; Cristofaro, C.; Didonna, V.; Gatta, G.; Giotta, F.; La Forgia, D.; Latorre, A.; Pastena, M.I.; et al. A ultrasound-based radiomic approach to predict the nodal status in clinically negative breast cancer patients. Sci. Rep. 2022, 12, 7914. [Google Scholar] [CrossRef]
  76. Stavros, A.T.; Thickman, D.; Rapp, C.L.; Dennis, M.A.; Parker, S.H.; Sisney, G.A. Solid breast nodules: Use of sonography to distinguish between benign and malignant lesions. Radiology 1995, 196, 123–134. [Google Scholar] [CrossRef] [Green Version]
  77. Yap, M.H. Automated breast ultrasound lesions detection using convolutional neural networks. IEEE J. Biomed. Health Inform. 2017, 22, 1218–1226. [Google Scholar] [CrossRef] [Green Version]
  78. Shin, H.J.; Kim, H.H.; Cha, J.H. Current status of automated breast ultrasonography. Ultrasonography 2015, 34, 165–172. [Google Scholar] [CrossRef]
  79. Kolb, T.M.; Lichy, J.; Newhouse, J.H. Comparison of the performance of screening mammography, physical examination, and breast US and evaluation of factors that influence them: An analysis of 27,825 patient evaluations. Radiology 2002, 225, 165–175. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  80. Al-Dhabyani, W.; Gomaa, M.; Khaled, H.; Fahmy, A. Dataset of breast ultrasound images. Data Brief 2019, 28, 104863. [Google Scholar] [CrossRef] [PubMed]
  81. Antropova, N.O.; Abe, H.; Giger, M.L. Use of clinical MRI maximum intensity projections for improved breast lesion classification with deep convolutional neural networks. JMI 2018, 5, 014503. [Google Scholar] [CrossRef] [PubMed]
  82. Morrow, M.; Waters, J.; Morris, E. MRI for breast cancer screening, diagnosis, and treatment. Lancet 2011, 378, 1804–1811. [Google Scholar] [CrossRef]
  83. Kuhl, C.K.; Schrading, S.; Strobel, K.; Schild, H.H.; Hilgers, R.-D.; Bieling, H.B. Abbreviated Breast Magnetic Resonance Imaging (MRI): First Postcontrast Subtracted Images and Maximum-Intensity Projection—A Novel Approach to Breast Cancer Screening With MRI. JCO 2014, 32, 2304–2310. [Google Scholar] [CrossRef]
  84. Morris, E.A. Breast cancer imaging with MRI. Radiol. Clin. N. Am. 2002, 40, 443–466. [Google Scholar] [CrossRef]
  85. Teh, W.; Wilson, A.R.M. The role of ultrasound in breast cancer screening. A consensus statement by the European Group for breast cancer screening. Eur. J. Cancer 1998, 34, 449–450. [Google Scholar] [CrossRef]
  86. Sardanelli, F.; Giuseppetti, G.M.; Panizza, P.; Bazzocchi, M.; Fausto, A.; Simonetti, G.; Lattanzio, V.; Del Maschio, A. Sensitivity of MRI Versus Mammography for Detecting Foci of Multifocal, Multicentric Breast Cancer in Fatty and Dense Breasts Using the Whole-Breast Pathologic Examination as a Gold Standard. Am. J. Roentgenol. 2004, 183, 1149–1157. [Google Scholar] [CrossRef]
  87. Rasti, R.; Teshnehlab, M.; Phung, S.L. Breast cancer diagnosis in DCE-MRI using mixture ensemble of convolutional neural networks. Pattern Recognit. 2017, 72, 381–390. [Google Scholar] [CrossRef] [Green Version]
  88. Mann, R.M.; Kuhl, C.K.; Kinkel, K.; Boetes, C. Breast MRI: Guidelines from the European Society of Breast Imaging. Eur. Radiol. 2008, 18, 1307–1318. [Google Scholar] [CrossRef]
  89. Pasquero, G.; Surace, A.; Ponti, A.; Bortolini, M.; Tota, D.; Mano, M.P.; Arisio, R.; Benedetto, C.; Baù, M.G. Role of Magnetic Resonance Imaging in the Evaluation of Breast Cancer Response to Neoadjuvant Chemotherapy. Vivo 2020, 34, 909–915. [Google Scholar] [CrossRef] [Green Version]
  90. Kim, Y.; Sim, S.H.; Park, B.; Chae, I.H.; Han, J.H.; Jung, S.-Y.; Lee, S.; Kwon, Y.; Park, I.H.; Ko, K.; et al. Criteria for identifying residual tumours after neoadjuvant chemotherapy of breast cancers: A magnetic resonance imaging study. Sci. Rep. 2021, 11, 634. [Google Scholar] [CrossRef] [PubMed]
  91. Massafra, R.; Comes, M.C.; Bove, S.; Didonna, V.; Gatta, G.; Giotta, F.; Fanizzi, A.; La Forgia, D.; Latorre, A.; Pastena, M.I.; et al. Robustness Evaluation of a Deep Learning Model on Sagittal and Axial Breast DCE-MRIs to Predict Pathological Complete Response to Neoadjuvant Chemotherapy. J. Pers. Med. 2022, 12, 953. [Google Scholar] [CrossRef] [PubMed]
  92. Houssami, N.; Cho, N. Screening women with a personal history of breast cancer: Overview of the evidence on breast imaging surveillance. Ultrasonography 2018, 37, 277–287. [Google Scholar] [CrossRef] [PubMed]
  93. Greenwood, H.I. Abbreviated protocol breast MRI: The past, present, and future. Clin. Imaging 2019, 53, 169–173. [Google Scholar] [CrossRef]
  94. Van Zelst, J.C.M.; Vreemann, S.; Witt, H.-J.; Gubern-Merida, A.; Dorrius, M.D.; Duvivier, K.; Lardenoije-Broker, S.; Lobbes, M.B.; Loo, C.; Veldhuis, W.; et al. Multireader Study on the Diagnostic Accuracy of Ultrafast Breast Magnetic Resonance Imaging for Breast Cancer Screening. Investig. Radiol. 2018, 53, 579–586. [Google Scholar] [CrossRef]
  95. Heller, S.L.; Moy, L. MRI breast screening revisited. J. Magn. Reson. Imaging 2019, 49, 1212–1221. [Google Scholar] [CrossRef]
  96. Rauch, G.M.; Adrada, B.E.; Kuerer, H.M.; Van La Parra, R.F.D.; Leung, J.W.T.; Yang, W.T. Multimodality Imaging for Evaluating Response to Neoadjuvant Chemotherapy in Breast Cancer. Am. J. Roentgenol. 2016, 208, 290–299. [Google Scholar] [CrossRef]
  97. Mahrooghy, M.; Ashraf, A.B.; Daye, D.; McDonald, E.S.; Rosen, M.; Mies, C.; Feldman, M.; Kontos, D. Pharmacokinetic Tumor Heterogeneity as a Prognostic Biomarker for Classifying Breast Cancer Recurrence Risk. IEEE Trans. Biomed. Eng. 2015, 62, 1585–1594. [Google Scholar] [CrossRef]
  98. Mazurowski, M.A.; Grimm, L.J.; Zhang, J.; Marcom, P.K.; Yoon, S.C.; Kim, C.; Ghate, S.V.; Johnson, K.S. Recurrence-free survival in breast cancer is associated with MRI tumor enhancement dynamics quantified using computer algorithms. Eur. J. Radiol. 2015, 84, 2117–2122. [Google Scholar] [CrossRef]
  99. Jiang, Y.; Chen, L.; Zhang, H.; Xiao, X. Breast cancer histopathological image classification using convolutional neural networks with small SE-ResNet module. PLoS ONE 2019, 14, e0214587. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  100. Yan, R.; Ren, F.; Wang, Z.; Wang, L.; Ren, Y.; Liu, Y.; Rao, X.; Zheng, C.; Zhang, F. A hybrid convolutional and recurrent deep neural network for breast cancer pathological image classification. In 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); IEEE: Piscataway, NJ, USA, 2018; pp. 957–962. [Google Scholar]
  101. Bejnordi, B.E.; Zuidhof, G.C.A.; Balkenhol, M.; Hermsen, M.; Bult, P.; Van Ginneken, B.; Karssemeijer, N.; Litjens, G.; Van Der Laak, J. Context-aware stacked convolutional neural networks for classification of breast carcinomas in whole-slide histopathology images. J. Med. Imaging 2017, 4, 044504. [Google Scholar] [CrossRef] [PubMed]
  102. Jimenez-del-Toro, O.; Otálora, S.; Andersson, M.; Eurén, K.; Hedlund, M.; Rousson, M.; Müller, H.; Atzori, M. Analysis of Histopathology Images: From Traditional Machine Learning to Deep Learning. In Biomedical Texture Analysis; Academic Press: Cambridge, MA, USA, 2017; pp. 281–314. [Google Scholar]
  103. Roy, K.; Banik, D.; Bhattacharjee, D.; Nasipuri, M. Patch-based system for Classification of Breast Histology images using deep learning. Comput. Med. Imaging Graph. 2018, 71, 90–103. [Google Scholar] [CrossRef] [PubMed]
  104. Tellez, D.; Balkenhol, M.; Karssemeijer, N.; Litjens, G.; van der Laak, J.; Ciompi, F. March. H and E stain augmentation improves generalization of convolutional networks for histopathological mitosis detection. In Medical Imaging 2018: Digital Pathology; SPIE: Bellingham, WA, USA, 2018; Volume 10581, pp. 264–270. [Google Scholar]
  105. Aswathy, M.A.; Jagannath, M. Detection of breast cancer on digital histopathology images: Present status and future possibilities. Inform. Med. Unlocked 2017, 8, 74–79. [Google Scholar] [CrossRef] [Green Version]
  106. Araújo, T.; Aresta, G.; Castro, E.M.; Rouco, J.; Aguiar, P.; Eloy, C.; Polónia, A.; Campilho, A. Classification of breast cancer histology images using Convolutional Neural Networks. PLoS ONE 2017, 12, e0177544. [Google Scholar]
  107. Bardou, D.; Zhang, K.; Ahmad, S.M. Classification of Breast Cancer Based on Histology Images Using Convolutional Neural Networks. IEEE Access 2018, 6, 24680–24693. [Google Scholar] [CrossRef]
  108. Wu, C.; Zhou, F.; Ren, J.; Li, X.; Jiang, Y.; Ma, S. A Selective Review of Multi-Level Omics Data Integration Using Variable Selection. High-Throughput 2019, 8, 4. [Google Scholar] [CrossRef] [Green Version]
  109. Zeiser, F.A.; da Costa, C.A.; Roehe, A.V.; Righi, R.D.R.; Marques, N.M.C. Breast cancer intelligent analysis of histopathological data: A systematic review. Appl. Soft Comput. 2021, 113, 107886. [Google Scholar] [CrossRef]
  110. Flanagan, F.L.; Dehdashti, F.; Siegel, B.A. PET in breast cancer. Semin. Nucl. Med. 1998, 28, 290–302. [Google Scholar] [CrossRef]
  111. Groheux, D.; Hindie, E. Breast cancer: Initial workup and staging with FDG PET/CT. Clin. Transl. Imaging 2021, 9, 221–231. [Google Scholar] [CrossRef]
  112. Fowler, A.M.; Strigel, R.M. Clinical advances in PET–MRI for breast cancer. Lancet Oncol. 2022, 23, e32–e43. [Google Scholar] [CrossRef]
  113. Vercher-Conejero, J.L.; Pelegrí-Martinez, L.; Lopez-Aznar, D.; Cózar-Santiago, M.D.P. Positron Emission Tomography in Breast Cancer. Diagnostics 2015, 5, 61–83. [Google Scholar] [CrossRef] [PubMed]
  114. Gillies, R. In vivo molecular imaging. J. Cell. Biochem. 2002, 87, 231–238. [Google Scholar] [CrossRef] [PubMed]
  115. Mankoff, D.A.; Eary, J.F.; Link, J.M.; Muzi, M.; Rajendran, J.G.; Spence, A.M.; Krohn, K.A. Tumor-specific positron emission tomography imaging in patients: [18F] fluorodeoxyglucose and beyond. Clin. Cancer Res. 2007, 13, 3460–3469. [Google Scholar] [CrossRef] [Green Version]
  116. Avril, N.; Menzel, M.; Dose, J.; Schelling, M.; Weber, W.; Janicke, F.; Nathrath, W.; Schwaiger, M. Glucose metabolism of breast cancer assessed by 18F-FDG PET: Histologic and immunohistochemical tissue analysis. J. Nucl. Med. 2001, 42, 9–16. [Google Scholar]
  117. Pijl, J.P.; Nienhuis, P.H.; Kwee, T.C.; Glaudemans, A.W.; Slart, R.H.; Gormsen, L.C. Limitations and Pitfalls of FDG-PET/CT in Infection and Inflammation. Semin. Nucl. Med. 2021, 51, 633–645. [Google Scholar] [CrossRef]
  118. Han, S.; Choi, J.Y. Impact of 18F-FDG PET, PET/CT, and PET/MRI on Staging and Management as an Initial Staging Modality in Breast Cancer. Clin. Nucl. Med. 2021, 46, 271–282. [Google Scholar] [CrossRef]
  119. Le Boulc’h, M.; Gilhodes, J.; Steinmeyer, Z.; Molière, S.; Mathelin, C. Pretherapeutic Imaging for Axillary Staging in Breast Cancer: A Systematic Review and Meta-Analysis of Ultrasound, MRI and FDG PET. J. Clin. Med. 2021, 10, 1543. [Google Scholar] [CrossRef]
  120. Koolen, B.B.; Aukema, T.S.; González Martínez, A.J.; Vogel, W.V.; Caballero Ontanaya, L.; Vrancken Peeters, M.J.; Vroonland, C.J.; Rutgers, E.J.; Benlloch Baviera, J.M.; Valdés Olmos, R.A. First clinical experience with a dedicated PET for hanging breast molecular imaging. Q. J. Nucl. Med. Mol. Imaging 2013, 57, 92–100. [Google Scholar] [CrossRef]
  121. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; Ommer, B. High-Resolution Image Synthesis with Latent Diffusion Models. 2022, pp. 10684–10695. Available online: (accessed on 24 July 2022).
  122. Baevski, A.; Hsu, W.-N.; Conneau, A.; Auli, M. Unsupervised Speech Recognition. In Advances in Neural Information Processing Systems; MTI Press: Cambridge, MA, USA, 2021; Volume 34, pp. 27826–27839. [Google Scholar]
  123. Shahamiri, S.R. Speech Vision: An End-to-End Deep Learning-Based Dysarthric Automatic Speech Recognition System. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 852–861. [Google Scholar] [CrossRef]
  124. Behzadi, M.M.; Ilies, H.T. GANTL: Towards Practical and Real-Time Topology Optimization with Conditional GANs and Transfer Learning. J. Mech. Des. 2021, 144, 1–32. [Google Scholar] [CrossRef]
  125. Behzadi, M.M.; Ilieş, H.T. Real-Time Topology Optimization in 3D via Deep Transfer Learning. Comput. Des. 2021, 135, 103014. [Google Scholar] [CrossRef]
  126. Madani, M.; Tarakanova, A. Molecular Design of Soluble Zein Protein Sequences. Biophys. J. 2020, 118, 45a. [Google Scholar] [CrossRef]
  127. Madani, M.; Lin, K.; Tarakanova, A. DSResSol: A sequence-based solubility predictor created with Dilated Squeeze Excitation Residual Networks. Int. J. Mol. Sci. 2021, 22, 13555. [Google Scholar] [CrossRef]
  128. Madani, M.; Behzadi, M.M.; Song, D.; Ilies, H.; Tarakanova, A. CGAN-Cmap: Protein contact map prediction using deep generative adversarial neural networks. bioRxiv 2022. [Google Scholar] [CrossRef]
  129. Kunkel, G.; Madani, M.; White, S.J.; Verardi, P.H.; Tarakanova, A. Modeling coronavirus spike protein dynamics: Implications for immunogenicity and immune escape. Biophys. J. 2021, 120, 5592–5618. [Google Scholar] [CrossRef]
  130. Madani, M.; Tarakanova, A. Characterization of Mechanics and Tunability of Resilin Protein by Molecular Dynamics Simulation. Biophys. J. 2020, 118, 45a–46a. [Google Scholar] [CrossRef]
  131. Dildar, M.; Akram, S.; Irfan, M.; Khan, H.U.; Ramzan, M.; Mahmood, A.R.; Alsaiari, S.A.; Saeed, A.H.M.; Alraddadi, M.O.; Mahnashi, M.H. Skin Cancer Detection: A Review Using Deep Learning Techniques. Int. J. Environ. Res. Public Health 2021, 18, 5479. [Google Scholar] [CrossRef]
  132. Kim, J.; Park, S.; Min, D.; Kim, W. Comprehensive Survey of Recent Drug Discovery Using Deep Learning. Int. J. Mol. Sci. 2021, 22, 9983. [Google Scholar] [CrossRef]
  133. Zhang, L.; Tan, J.; Han, D.; Zhu, H. From machine learning to deep learning: Progress in machine intelligence for rational drug discovery. Drug Discov. Today 2017, 22, 1680–1685. [Google Scholar] [CrossRef]
  134. Kumar, Y.; Koul, A.; Singla, R.; Ijaz, M.F. Artificial intelligence in disease diagnosis: A systematic literature review, synthesizing framework and future research agenda. J. Ambient Intell. Humaniz. Comput. 2022, 1–28. [Google Scholar] [CrossRef] [PubMed]
  135. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  136. Fitzgerald, R. Error in Radiology. Clin. Radiol. 2001, 56, 938–946. [Google Scholar] [CrossRef] [PubMed]
  137. Kooi, T.; Gubern-Merida, A.; Mordang, J.J.; Mann, R.; Pijnappel, R.; Schuur, K.; Heeten, A.D.; Karssemeijer, N. A comparison between a deep convolutional neural network and radiologists for classifying regions of interest in mammography. In Proceedings of the International Workshop on Breast Imaging, Malmö, Sweden, 19–22 June 2016; Springer: Cham, Switzerland, 2016; pp. 51–56. [Google Scholar]
  138. Kooi, T.; Litjens, G.; van Ginneken, B.; Gubern-Mérida, A.; Sánchez, C.I.; Mann, R.; den Heeten, A.; Karssemeijer, N. Large scale deep learning for computer aided detection of mammographic lesions. Med. Image Anal. 2017, 35, 303–312. [Google Scholar] [CrossRef] [PubMed]
  139. Samala, R.K.; Chan, H.-P.; Hadjiiski, L.M.; Cha, K.; Helvie, M.A. Deep-learning convolution neural network for computer-aided detection of microcalcifications in digital breast tomosynthesis. In Medical Imaging 2016: Computer-Aided Diagnosis; SPIE: Bellingham, WA, USA, 2016; Volume 9785, pp. 234–240. [Google Scholar] [CrossRef]
  140. Huynh, B.Q.; Li, H.; Giger, M.L. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks. J. Med. Imaging 2016, 3, 034501. [Google Scholar] [CrossRef]
  141. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
  142. Skaane, P.; Sebuødegård, S.; Bandos, A.I.; Gur, D.; Østerås, B.H.; Gullien, R.; Hofvind, S. Performance of breast cancer screening using digital breast tomosynthesis: Results from the prospective population-based Oslo Tomosynthesis Screening Trial. Breast Cancer Res. Treat. 2018, 169, 489–496. [Google Scholar] [CrossRef]
  143. Skaane, P.; Bandos, A.I.; Niklason, L.T.; Sebuødegård, S.; Østerås, B.H.; Gullien, R.; Gur, D.; Hofvind, S. Digital Mammography versus Digital Mammography Plus Tomosynthesis in Breast Cancer Screening: The Oslo Tomosynthesis Screening Trial. Radiology 2019, 291, 23–30. [Google Scholar] [CrossRef]
  144. Haas, B.M.; Kalra, V.; Geisel, J.; Raghu, M.; Durand, M.; Philpotts, L.E. Comparison of Tomosynthesis Plus Digital Mammography and Digital Mammography Alone for Breast Cancer Screening. Radiology 2013, 269, 694–700. [Google Scholar] [CrossRef]
  145. Pinto, M.C.; Rodriguez-Ruiz, A.; Pedersen, K.; Hofvind, S.; Wicklein, J.; Kappler, S.; Mann, R.M.; Sechopoulos, I. Impact of artificial intelligence decision support using deep learning on breast cancer screening interpretation with single-view wide-angle digital breast tomosynthesis. Radiology 2021, 300, 529–536. [Google Scholar] [CrossRef]
  146. Kooi, T.; Karssemeijer, N. Classifying symmetrical differences and temporal change for the detection of malignant masses in mammography using deep neural networks. J. Med. Imaging 2017, 4, 044501. [Google Scholar] [CrossRef]
  147. Wu, N.; Phang, J.; Park, J.; Shen, Y.; Huang, Z.; Zorin, M.; Jastrzebski, S.; Fevry, T.; Katsnelson, J.; Kim, E.; et al. Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening. IEEE Trans. Med. Imaging 2019, 39, 1184–1194. [Google Scholar] [CrossRef]
  148. Loizidou, K.; Skouroumouni, G.; Pitris, C.; Nikolaou, C. Digital subtraction of temporally sequential mammograms for improved detection and classification of microcalcifications. Eur. Radiol. Exp. 2021, 5, 40. [Google Scholar] [CrossRef] [PubMed]
  149. Yang, Z.; Cao, Z.; Zhang, Y.; Tang, Y.; Lin, X.; Ouyang, R.; Wu, M.; Han, M.; Xiao, J.; Huang, L.; et al. MommiNet-v2: Mammographic multi-view mass identification networks. Med. Image Anal. 2021, 73, 102204. [Google Scholar] [CrossRef] [PubMed]
  150. Singh, S.; Matthews, T.P.; Shah, M.; Mombourquette, B.; Tsue, T.; Long, A.; Almohsen, R.; Pedemonte, S.; Su, J. Adaptation of a deep learning malignancy model from full-field digital mammography to digital breast tomosynthesis. In Medical Imaging 2020: Computer-Aided Diagnosis; SPIE: Bellingham, WA, USA, 2020; Volume 11314, pp. 25–32. [Google Scholar]
  151. Samala, R.K.; Chan, H.-P.; Hadjiiski, L.M.; Helvie, M.A.; Richter, C.; Cha, K. Evolutionary pruning of transfer learned deep convolutional neural network for breast cancer diagnosis in digital breast tomosynthesis. Phys. Med. Biol. 2018, 63, 095005. [Google Scholar] [CrossRef] [PubMed]
  152. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  153. Mendel, K.; Li, H.; Sheth, D.; Giger, M. Transfer Learning from Convolutional Neural Networks for Computer-Aided Diagnosis: A Comparison of Digital Breast Tomosynthesis and Full-Field Digital Mammography. Acad. Radiol. 2019, 26, 735–743. [Google Scholar] [CrossRef]
  154. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015; Computational and Biological Learning Society: New York, NY, USA, 2015; pp. 1–14. [Google Scholar]
  155. Romero-Martín, S.; Elías-Cabot, E.; Raya-Povedano, J.L.; Gubern-Mérida, A.; Rodríguez-Ruiz, A.; Álvarez-Benito, M. Stand-Alone Use of Artificial Intelligence for Digital Mammography and Digital Breast Tomosynthesis Screening: A Retrospective Evaluation. Radiology 2022, 302, 535–542. [Google Scholar] [CrossRef]
  156. Shu, X.; Zhang, L.; Wang, Z.; Lv, Q.; Yi, Z. Deep Neural Networks with Region-Based Pooling Structures for Mammographic Image Classification. IEEE Trans. Med. Imaging 2020, 39, 2246–2255. [Google Scholar] [CrossRef]
  157. Boumaraf, S.; Liu, X.; Ferkous, C.; Ma, X. A New Computer-Aided Diagnosis System with Modified Genetic Feature Selection for BI-RADS Classification of Breast Masses in Mammograms. BioMed Res. Int. 2020, 2020, e7695207. [Google Scholar] [CrossRef]
  158. Matthews, T.P.; Singh, S.; Mombourquette, B.; Su, J.; Shah, M.P.; Pedemonte, S.; Long, A.; Maffit, D.; Gurney, J.; Hoil, R.M.; et al. A Multisite Study of a Breast Density Deep Learning Model for Full-Field Digital Mammography and Synthetic Mammography. Radiol. Artif. Intell. 2021, 3, e200015. [Google Scholar] [CrossRef]
  159. Zhang, Y.-D.; Satapathy, S.C.; Guttery, D.S.; Górriz, J.M.; Wang, S.-H. Improved Breast Cancer Classification Through Combining Graph Convolutional Network and Convolutional Neural Network. Inf. Process. Manag. 2020, 58, 102439. [Google Scholar] [CrossRef]
  160. Li, H.; Mukundan, R.; Boyd, S. Novel Texture Feature Descriptors Based on Multi-Fractal Analysis and LBP for Classifying Breast Density in Mammograms. J. Imaging 2021, 7, 205. [Google Scholar] [CrossRef] [PubMed]
  161. Saber, A.; Sakr, M.; Abo-Seida, O.M.; Keshk, A.; Chen, H. A Novel Deep-Learning Model for Automatic Detection and Classification of Breast Cancer Using the Transfer-Learning Technique. IEEE Access 2021, 9, 71194–71209. [Google Scholar] [CrossRef]
  162. Malebary, S.J.; Hashmi, A. Automated Breast Mass Classification System Using Deep Learning and Ensemble Learning in Digital Mammogram. IEEE Access 2021, 9, 55312–55328. [Google Scholar] [CrossRef]
  163. Li, H.; Niu, J.; Li, D.; Zhang, C. Classification of breast mass in two-view mammograms via deep learning. IET Image Process. 2020, 15, 454–467. [Google Scholar] [CrossRef]
  164. Ueda, D.; Yamamoto, A.; Onoda, N.; Takashima, T.; Noda, S.; Kashiwagi, S.; Morisaki, T.; Fukumoto, S.; Shiba, M.; Morimura, M.; et al. Development and validation of a deep learning model for detection of breast cancers in mammography from multi-institutional datasets. PLoS ONE 2022, 17, e0265751. [Google Scholar] [CrossRef]
  165. Mota, A.M.; Clarkson, M.J.; Almeida, P.; Matela, N. Automatic Classification of Simulated Breast Tomosynthesis Whole Images for the Presence of Microcalcification Clusters Using Deep CNNs. J. Imaging 2022, 8, 231. [Google Scholar] [CrossRef]
  166. Bai, J.; Jin, A.; Jin, A.; Wang, T.; Yang, C.; Nabavi, S. Applying graph convolution neural network in digital breast tomosynthesis for cancer classification. In Proceedings of the 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, Northbrook, IL, USA, 7–10 August 2022; pp. 1–10. [Google Scholar] [CrossRef]
  167. Zhu, W.; Xiang, X.; Tran, T.D.; Hager, G.D.; Xie, X. Adversarial deep structured nets for mass segmentation from mammograms. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 847–850. [Google Scholar] [CrossRef]
  168. Wang, R.; Ma, Y.; Sun, W.; Guo, Y.; Wang, W.; Qi, Y.; Gong, X. Multi-level nested pyramid network for mass segmentation in mammograms. Neurocomputing 2019, 363, 313–320. [Google Scholar] [CrossRef]
  169. Saffari, N.; Rashwan, H.A.; Abdel-Nasser, M.; Kumar Singh, V.; Arenas, M.; Mangina, E.; Herrera, B.; Puig, D. Fully automated breast density segmentation and classification using deep learning. Diagnostics 2020, 10, 988. [Google Scholar] [CrossRef]
  170. Ahmed, L.; Iqbal, M.M.; Aldabbas, H.; Khalid, S.; Saleem, Y.; Saeed, S. Images data practices for Semantic Segmentation of Breast Cancer using Deep Neural Network. J. Ambient Intell. Humaniz. Comput. 2020, 1–17. [Google Scholar] [CrossRef]
  171. Buda, M.; Saha, A.; Walsh, R.; Ghate, S.; Li, N.; Święcicki, A.; Lo, J.Y.; Mazurowski, M.A. Detection of masses and architectural distortions in digital breast tomosynthesis: A publicly available dataset of 5,060 patients and a deep learning model. arXiv 2020, arXiv:2011.07995. [Google Scholar]
  172. Cheng, Y.; Gao, Y.; Xie, L.; Xie, X.; Lin, W. Spatial Enhanced Rotation Aware Network for Breast Mass Segmentation in Digital Mammogram. IEEE Access 2020, 1. [Google Scholar] [CrossRef]
  173. Chen, J.; Chen, L.; Wang, S.; Chen, P. A Novel Multi-Scale Adversarial Networks for Precise Segmentation of X-ray Breast Mass. IEEE Access 2020, 8, 103772–103781. [Google Scholar] [CrossRef]
  174. Soleimani, H.; Michailovich, O.V. On Segmentation of Pectoral Muscle in Digital Mammograms by Means of Deep Learning. IEEE Access 2020, 8, 204173–204182. [Google Scholar] [CrossRef]
  175. Al-Antari, M.A.; Han, S.-M.; Kim, T.-S. Evaluation of deep learning detection and classification towards computer-aided diagnosis of breast lesions in digital X-ray mammograms. Comput. Methods Programs Biomed. 2020, 196, 105584. [Google Scholar] [CrossRef] [PubMed]
  176. Li, Y.; Zhang, L.; Chen, H.; Cheng, L. Mass detection in mammograms by bilateral analysis using convolution neural network. Comput. Methods Programs Biomed. 2020, 195, 105518. [Google Scholar] [CrossRef]
  177. Peng, J.; Bao, C.; Hu, C.; Wang, X.; Jian, W.; Liu, W. Automated mammographic mass detection using deformable convolution and multiscale features. Med. Biol. Eng. Comput. 2020, 58, 1405–1417. [Google Scholar] [CrossRef]
  178. Kavitha, T.; Mathai, P.P.; Karthikeyan, C.; Ashok, M.; Kohar, R.; Avanija, J.; Neelakandan, S. Deep Learning Based Capsule Neural Network Model for Breast Cancer Diagnosis Using Mammogram Images. Interdiscip. Sci. Comput. Life Sci. 2021, 14, 113–129. [Google Scholar] [CrossRef]
  179. Shoshan, Y.; Zlotnick, A.; Ratner, V.; Khapun, D.; Barkan, E.; Gilboa-Solomon, F. Beyond Non-maximum Suppression—Detecting Lesions in Digital Breast Tomosynthesis Volumes. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021; Springer: Cham, Switzerland, 2021; pp. 772–781. [Google Scholar] [CrossRef]
  180. Hossain, B.; Nishikawa, R.M.; Lee, J. Developing breast lesion detection algorithms for Digital Breast Tomosynthesis: Leveraging false positive findings. Med. Phys. 2022. [Google Scholar] [CrossRef]
  181. Hossain, B.; Nishikawa, R.M.; Lee, J. Improving lesion detection algorithm in digital breast tomosynthesis leveraging ensemble cross-validation models with multi-depth levels. In Medical Imaging 2022: Computer-Aided Diagnosis; SPIE: Bellingham, WA, USA, 2022; Volume 12033, pp. 91–97. [Google Scholar] [CrossRef]
  182. Atrey, K.; Singh, B.K.; Roy, A.; Bodhey, N.K. Real-time automated segmentation of breast lesions using CNN-based deep learning paradigm: Investigation on mammogram and ultrasound. Int. J. Imaging Syst. Technol. 2021, 32, 1084–1100. [Google Scholar] [CrossRef]
  183. Shen, S.; Zhou, Y.; Xu, Y.; Zhang, B.; Duan, X.; Huang, R.; Li, B.; Shi, Y.; Shao, Z.; Liao, H.; et al. A multi-centre randomised trial comparing ultrasound vs mammography for screening breast cancer in high-risk Chinese women. Br. J. Cancer 2015, 112, 998–1004. [Google Scholar] [CrossRef]
  184. Han, S.; Kang, H.-K.; Jeong, J.-Y.; Park, M.-H.; Kim, W.; Bang, W.-C.; Seong, Y.-K. A deep learning framework for supporting the classification of breast lesions in ultrasound images. Phys. Med. Biol. 2017, 62, 7714. [Google Scholar] [CrossRef]
  185. Shi, J.; Zhou, S.; Liu, X.; Zhang, Q.; Lu, M.; Wang, T. Stacked deep polynomial network based representation learning for tumor classification with small ultrasound image dataset. Neurocomputing 2016, 194, 87–94. [Google Scholar] [CrossRef]
  186. Byra, M.; Galperin, M.; Ojeda-Fournier, H.; Olson, L.; O’Boyle, M.; Comstock, C.; Andre, M. Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion. Med. Phys. 2018, 46, 746–755. [Google Scholar] [CrossRef] [PubMed]
  187. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  188. Shi, X.; Cheng, H.D.; Hu, L.; Ju, W.; Tian, J. Detection and classification of masses in breast ultrasound images. Digit. Signal Process. 2010, 20, 824–836. [Google Scholar] [CrossRef]
  189. Fujioka, T.; Kubota, K.; Mori, M.; Kikuchi, Y.; Katsuta, L.; Kasahara, M.; Oda, G.; Ishiba, T.; Nakagawa, T.; Tateishi, U. Distinction between benign and malignant breast masses at breast ultrasound using deep learning method with convolutional neural network. Jpn. J. Radiol. 2019, 37, 466–472. [Google Scholar] [CrossRef] [PubMed]
  190. Tanaka, H.; Chiu, S.-W.; Watanabe, T.; Kaoku, S.; Yamaguchi, T. Computer-aided diagnosis system for breast ultrasound images using deep learning. Phys. Med. Biol. 2019, 64, 235013. [Google Scholar] [CrossRef]
  191. Fujioka, T.; Kubota, K.; Mori, M.; Katsuta, L.; Kikuchi, Y.; Kimura, K.; Kimura, M.; Adachi, M.; Oda, G.; Nakagawa, T.; et al. Virtual Interpolation Images of Tumor Development and Growth on Breast Ultrasound Image Synthesis with Deep Convolutional Generative Adversarial Networks. J. Ultrasound Med. 2020, 40, 61–69. [Google Scholar] [CrossRef]
  192. Liu, B.; Cheng, H.D.; Huang, J.; Tian, J.; Tang, X.; Liu, J. Fully automatic and segmentation-robust classification of breast tumors based on local texture analysis of ultrasound images. Pattern Recognit. 2010, 43, 280–298. [Google Scholar] [CrossRef]
  193. Zhang, X.; Lin, X.; Zhang, Z.; Dong, L.; Sun, X.; Sun, D.; Yuan, K. Artificial Intelligence Medical Ultrasound Equipment: Application of Breast Lesions Detection. Ultrason. Imaging 2020, 42, 191–202. [Google Scholar] [CrossRef]
  194. Chiang, T.-C.; Huang, Y.-S.; Chen, R.-T.; Huang, C.-S.; Chang, R.-F. Tumor Detection in Automated Breast Ultrasound Using 3-D CNN and Prioritized Candidate Aggregation. IEEE Trans. Med. Imaging 2018, 38, 240–249. [Google Scholar] [CrossRef]
  195. Moon, W.K.; Lee, Y.; Ke, H.-H.; Lee, S.H.; Huang, C.-S.; Chang, R.-F. Computer-aided diagnosis of breast ultrasound images using ensemble learning from convolutional neural networks. Comput. Methods Programs Biomed. 2020, 190, 105361. [Google Scholar] [CrossRef] [PubMed]
  196. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
  197. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar] [CrossRef]
  198. Huang, Y.; Han, L.; Dou, H.; Luo, H.; Yuan, Z.; Liu, Q.; Zhang, J.; Yin, G. Two-stage CNNs for computerized BI-RADS categorization in breast ultrasound images. Biomed. Eng. Online 2019, 18, 8. [Google Scholar] [CrossRef]
  200. Kim, J.; Kim, H.J.; Kim, C.; Lee, J.H.; Kim, K.W.; Park, Y.M.; Kim, H.W.; Ki, S.Y.; Kim, Y.M.; Kim, W.H. Weakly-supervised deep learning for ultrasound diagnosis of breast cancer. Sci. Rep. 2021, 11, 24382. [Google Scholar] [CrossRef] [PubMed]
  201. Choi, J.S.; Han, B.-K.; Ko, E.S.; Bae, J.M.; Ko, E.Y.; Song, S.H.; Kwon, M.-R.; Shin, J.H.; Hahn, S.Y. Effect of a Deep Learning Framework-Based Computer-Aided Diagnosis System on the Diagnostic Performance of Radiologists in Differentiating between Malignant and Benign Masses on Breast Ultrasonography. Korean J. Radiol. 2019, 20, 749–758. [Google Scholar] [CrossRef]
  202. Park, H.J.; Kim, S.M.; La Yun, B.; Jang, M.; Kim, B.; Jang, J.Y.; Lee, J.Y.; Lee, S.H. A computer-aided diagnosis system using artificial intelligence for the diagnosis and characterization of breast masses on ultrasound: Added value for the inexperienced breast radiologist. Medicine 2019, 98, 546–552. [Google Scholar] [CrossRef] [PubMed]
  203. Xiao, M.; Zhao, C.; Zhu, Q.; Zhang, J.; Liu, H.; Li, J.; Jiang, Y. An investigation of the classification accuracy of a deep learning framework-based computer-aided diagnosis system in different pathological types of breast lesions. J. Thorac. Dis. 2019, 11, 5023. [Google Scholar] [CrossRef]
  204. Byra, M.; Sznajder, T.; Korzinek, D.; Piotrzkowska-Wróblewska, H.; Dobruch-Sobczak, K.; Nowicki, A.; Marasek, K. Impact of ultrasound image reconstruction method on breast lesion classification with deep learning. In Iberian Conference on Pattern Recognition and Image Analysis; Springer: Berlin/Heidelberg, Germany, 2019; pp. 41–52. [Google Scholar]
  205. Hijab, A.; Rushdi, M.A.; Gomaa, M.M.; Eldeib, A. Breast Cancer Classification in Ultrasound Images using Transfer Learning. In Proceedings of the 2019 the Fifth International Conference on Advances in Biomedical Engineering (ICABME), Tripoli, Lebanon, 17–19 October 2019; pp. 1–4. [Google Scholar] [CrossRef]
  206. Zhang, Q.; Song, S.; Xiao, Y.; Chen, S.; Shi, J.; Zheng, H. Dual-mode artificially-intelligent diagnosis of breast tumours in shear-wave elastography and B-mode ultrasound using deep polynomial networks. Med. Eng. Phys. 2019, 64, 1–6. [Google Scholar] [CrossRef]
  207. Fujioka, T.; Katsuta, L.; Kubota, K.; Mori, M.; Kikuchi, Y.; Kato, A.; Oda, G.; Nakagawa, T.; Kitazume, Y.; Tateishi, U. Classification of breast masses on ultrasound shear wave elastography using convolutional neural networks. Ultrason. Imaging 2020, 42, 213–220. [Google Scholar] [CrossRef]
  208. Wu, J.-X.; Chen, P.-Y.; Lin, C.-H.; Chen, S.; Shung, K.K. Breast Benign and Malignant Tumors Rapidly Screening by ARFI-VTI Elastography and Random Decision Forests Based Classifier. IEEE Access 2020, 8, 54019–54034. [Google Scholar] [CrossRef]
  209. Wu, J.-X.; Liu, H.-C.; Chen, P.-Y.; Lin, C.-H.; Chou, Y.-H.; Shung, K.K. Enhancement of ARFI-VTI Elastography Images in Order to Preliminary Rapid Screening of Benign and Malignant Breast Tumors Using Multilayer Fractional-Order Machine Vision Classifier. IEEE Access 2020, 8, 164222–164237. [Google Scholar] [CrossRef]
  210. Gong, B.; Shen, L.; Chang, C.; Zhou, S.; Zhou, W.; Li, S.; Shi, J. Bi-modal ultrasound breast cancer diagnosis via multi-view deep neural network svm. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); IEEE: Piscataway, NJ, USA, 2020; pp. 1106–1110. [Google Scholar]
  211. Zhang, X.; Liang, M.; Yang, Z.; Zheng, C.; Wu, J.; Ou, B.; Li, H.; Wu, X.; Luo, B.; Shen, J. Deep Learning-Based Radiomics of B-Mode Ultrasonography and Shear-Wave Elastography: Improved Performance in Breast Mass Classification. Front. Oncol. 2020, 10, 1621. [Google Scholar] [CrossRef] [PubMed]
  212. Yousef Kalaf, E.; Jodeiri, A.; Kamaledin Setarehdan, S.; Lin, N.W.; Rahman, K.B.; Aishah Taib, N.; Dhillon, S.K. Classification of breast cancer lesions in ultrasound images by using attention layer and loss ensembles in deep convolutional neural networks. arXiv 2021, arXiv:2102.11519. [Google Scholar]
  213. Misra, S.; Jeon, S.; Managuli, R.; Lee, S.; Kim, G.; Yoon, C.; Lee, S.; Barr, R.G.; Kim, C. Bi-Modal Transfer Learning for Classifying Breast Cancers via Combined B-Mode and Ultrasound Strain Imaging. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2021, 69, 222–232. [Google Scholar] [CrossRef]
  214. Vakanski, A.; Xian, M.; Freer, P.E. Attention-Enriched Deep Learning Model for Breast Tumor Segmentation in Ultrasound Images. Ultrasound Med. Biol. 2020, 46, 2819–2833. [Google Scholar] [CrossRef]
  215. Byra, M.; Jarosik, P.; Szubert, A.; Galperin, M.; Ojeda-Fournier, H.; Olson, L.; O’Boyle, M.; Comstock, C.; Andre, M. Breast mass segmentation in ultrasound with selective kernel U-Net convolutional neural network. Biomed. Signal Process. Control 2020, 61, 102027. [Google Scholar] [CrossRef]
  216. Singh, V.K.; Abdel-Nasser, M.; Akram, F.; Rashwan, H.A.; Sarker, M.K.; Pandey, N.; Romani, S.; Puig, D. Breast tumor segmentation in ultrasound images using contextual-information-aware deep adversarial learning framework. Expert Syst. Appl. 2020, 162, 113870. [Google Scholar] [CrossRef]
  217. Han, L.; Huang, Y.; Dou, H.; Wang, S.; Ahamad, S.; Luo, H.; Liu, Q.; Fan, J.; Zhang, J. Semi-supervised segmentation of lesion from breast ultrasound images with attentional generative adversarial network. Comput. Methods Programs Biomed. 2019, 189, 105275. [Google Scholar] [CrossRef]
  218. Wang, K.; Liang, S.; Zhang, Y. Residual Feedback Network for Breast Lesion Segmentation in Ultrasound Image. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021; Springer: Cham, Switzerland, 2021; pp. 471–481. [Google Scholar] [CrossRef]
  219. Wang, K.; Liang, S.; Zhong, S.; Feng, Q.; Ning, Z.; Zhang, Y. Breast ultrasound image segmentation: A coarse-to-fine fusion convolutional neural network. Med. Phys. 2021, 48, 4262–4278. [Google Scholar] [CrossRef]
  220. Li, Y.; Liu, Y.; Huang, L.; Wang, Z.; Luo, J. Deep weakly-supervised breast tumor segmentation in ultrasound images with explicit anatomical constraints. Med. Image Anal. 2022, 76, 102315. [Google Scholar] [CrossRef] [PubMed]
  221. Byra, M.; Jarosik, P.; Dobruch-Sobczak, K.; Klimonda, Z.; Piotrzkowska-Wroblewska, H.; Litniewski, J.; Nowicki, A. Joint segmentation and classification of breast masses based on ultrasound radio-frequency data and convolutional neural networks. Ultrasonics 2022, 121, 106682. [Google Scholar] [CrossRef] [PubMed]
  222. Jabeen, K.; Khan, M.A.; Alhaisoni, M.; Tariq, U.; Zhang, Y.-D.; Hamza, A.; Mickus, A.; Damaševičius, R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. Sensors 2022, 22, 807. [Google Scholar] [CrossRef] [PubMed]
  223. Berg, W.A.; Zhang, Z.; Lehrer, D.; Jong, R.A.; Pisano, E.D.; Barr, R.G.; Böhm-Vélez, M.; Mahoney, M.C.; Evans, W.P.; Larsen, L.H.; et al. Detection of breast cancer with addition of annual screening ultrasound or a single screening MRI to mammography in women with elevated breast cancer risk. JAMA 2012, 307, 1394–1404. [Google Scholar]
  224. Maicas, G.; Carneiro, G.; Bradley, A.P.; Nascimento, J.C.; Reid, I. Deep reinforcement learning for active breast lesion detection from DCE-MRI. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada, 11–13 September 2017; Springer: Cham, Switzerland, 2017; pp. 665–673. [Google Scholar]
  225. Zhou, J.; Zhang, Y.; Chang, K.; Lee, K.E.; Wang, O.; Li, J.; Lin, Y.; Pan, Z.; Chang, P.; Chow, D.; et al. Diagnosis of Benign and Malignant Breast Lesions on DCE-MRI by Using Radiomics and Deep Learning with Consideration of Peritumor Tissue. J. Magn. Reson. Imaging 2019, 51, 798–809. [Google Scholar] [CrossRef]
  226. Daimiel Naranjo, I.; Gibbs, P.; Reiner, J.S.; Lo Gullo, R.; Thakur, S.B.; Jochelson, M.S.; Thakur, N.; Baltzer, P.A.; Helbich, T.H.; Pinker, K. Breast lesion classification with multiparametric breast MRI using radiomics and machine learning: A comparison with radiologists’ performance. Cancers 2022, 14, 1743. [Google Scholar] [CrossRef]
  227. Antropova, N.; Huynh, B.Q.; Giger, M.L. A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Med. Phys. 2017, 44, 5162–5171. [Google Scholar] [CrossRef]
  228. Truhn, D.; Schrading, S.; Haarburger, C.; Schneider, H.; Merhof, D.; Kuhl, C. Radiomic versus Convolutional Neural Networks Analysis for Classification of Contrast-enhancing Lesions at Multiparametric Breast MRI. Radiology 2019, 290, 290–297. [Google Scholar] [CrossRef]
  229. Zhou, J.; Luo, L.; Dou, Q.; Chen, H.; Chen, C.; Li, G.; Jiang, Z.; Heng, P.A. Weakly supervised 3D deep learning for breast cancer classification and localization of the lesions in MR images. J. Magn. Reson. Imaging 2019, 50, 1144–1151. [Google Scholar] [CrossRef]
  230. Feng, H.; Cao, J.; Wang, H.; Xie, Y.; Yang, D.; Feng, J.; Chen, B. A knowledge-driven feature learning and integration method for breast cancer diagnosis on multi-sequence MRI. Magn. Reson. Imaging 2020, 69, 40–48. [Google Scholar] [CrossRef]
  231. Fujioka, T.; Yashima, Y.; Oyama, J.; Mori, M.; Kubota, K.; Katsuta, L.; Kimura, K.; Yamaga, E.; Oda, G.; Nakagawa, T.; et al. Deep-learning approach with convolutional neural network for classification of maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging. Magn. Reson. Imaging 2020, 75, 1–8. [Google Scholar] [CrossRef]
  232. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  233. Ayatollahi, F.; Shokouhi, S.B.; Mann, R.M.; Teuwen, J. Automatic breast lesion detection in ultrafast DCE-MRI using deep learning. Med. Phys. 2021, 48, 5897–5907. [Google Scholar] [CrossRef] [PubMed]
  234. Zhang, J.; Saha, A.; Zhu, Z.; Mazurowski, M.A. Hierarchical Convolutional Neural Networks for Segmentation of Breast Tumors in MRI With Application to Radiogenomics. IEEE Trans. Med. Imaging 2019, 38, 435–447. [Google Scholar] [CrossRef] [PubMed]
  235. Piantadosi, G.; Marrone, S.; Galli, A.; Sansone, M.; Sansone, C. DCE-MRI Breast Lesions Segmentation with a 3TP U-Net Deep Convolutional Neural Network. In Proceedings of the 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS), Cordoba, Spain, 5–7 June 2019; pp. 628–633. [Google Scholar] [CrossRef]
  236. Lu, W.; Wang, Z.; He, Y.; Yu, H.; Xiong, N.; Wei, J. Breast Cancer Detection Based on Merging Four Modes MRI Using Convolutional Neural Networks. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1035–1039. [Google Scholar] [CrossRef]
  237. Zhu, Z.; Albadawy, E.; Saha, A.; Zhang, J.; Harowicz, M.R.; Mazurowski, M.A. Deep learning for identifying radiogenomic associations in breast cancer. Comput. Biol. Med. 2019, 109, 85–90. [Google Scholar] [CrossRef] [PubMed]
  238. Ha, R.; Mutasa, S.; Karcich, J.; Gupta, N.; Van Sant, E.P.; Nemer, J.; Sun, M.; Chang, P.; Liu, M.Z.; Jambawalikar, S. Predicting Breast Cancer Molecular Subtype with MRI Dataset Utilizing Convolutional Neural Network Algorithm. J. Digit. Imaging 2019, 32, 276–282. [Google Scholar] [CrossRef] [PubMed]
  239. Ha, R.; Chin, C.; Karcich, J.; Liu, M.Z.; Chang, P.; Mutasa, S.; Van Sant, E.P.; Wynn, R.T.; Connolly, E.; Jambawalikar, S. Prior to Initiation of Chemotherapy, Can We Predict Breast Tumor Response? Deep Learning Convolutional Neural Networks Approach Using a Breast MRI Tumor Dataset. J. Digit. Imaging 2018, 32, 693–701. [Google Scholar] [CrossRef]
  240. Fang, Y.; Zhao, J.; Hu, L.; Ying, X.; Pan, Y.; Wang, X. Image classification toward breast cancer using deeply-learned quality features. J. Vis. Commun. Image Represent. 2019, 64, 102609. [Google Scholar] [CrossRef]
  241. Zheng, J.; Lin, D.; Gao, Z.; Wang, S.; He, M.; Fan, J. Deep Learning Assisted Efficient AdaBoost Algorithm for Breast Cancer Detection and Early Diagnosis. IEEE Access 2020, 8, 96946–96954. [Google Scholar] [CrossRef]
  242. Holste, G.; Partridge, S.C.; Rahbar, H.; Biswas, D.; Lee, C.I.; Alessio, A.M. End-to-end learning of fused image and non-image features for improved breast cancer classification from mri. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 3294–3303. [Google Scholar]
  243. Eskreis-Winkler, S.; Onishi, N.; Pinker, K.; Reiner, J.S.; Kaplan, J.; Morris, E.A.; Sutton, E.J. Using Deep Learning to Improve Nonsystematic Viewing of Breast Cancer on MRI. J. Breast Imaging 2021, 3, 201–207. [Google Scholar] [CrossRef]
  244. Işın, A.; Direkoğlu, C.; Şah, M. Review of MRI-based brain tumor image segmentation using deep learning methods. Procedia Comput. Sci. 2016, 102, 317–324. [Google Scholar] [CrossRef]
  245. Liu, M.Z.; Swintelski, C.; Sun, S.; Siddique, M.; Desperito, E.; Jambawalikar, S.; Ha, R. Weakly Supervised Deep Learning Approach to Breast MRI Assessment. Acad. Radiol. 2021, 29, S166–S172. [Google Scholar] [CrossRef] [PubMed]
  246. Bie, C.; Li, Y.; Zhou, Y.; Bhujwalla, Z.M.; Song, X.; Liu, G.; van Zijl, P.C.; Yadav, N.N. Deep learning-based classification of preclinical breast cancer tumor models using chemical exchange saturation transfer magnetic resonance imaging. NMR Biomed. 2022, 35, e4626. [Google Scholar] [CrossRef]
  247. Jing, X.; Wielema, M.; Cornelissen, L.J.; van Gent, M.; Iwema, W.M.; Zheng, S.; Sijens, P.E.; Oudkerk, M.; Dorrius, M.D.; van Ooijen, P. Using deep learning to safely exclude lesions with only ultrafast breast MRI to shorten acquisition and reading time. Eur. Radiol. 2022. [Google Scholar] [CrossRef] [PubMed]
  248. Wu, Y.; Wu, J.; Dou, Y.; Rubert, N.; Wang, Y.; Deng, J. A deep learning fusion model with evidence-based confidence level analysis for differentiation of malignant and benign breast tumors using dynamic contrast enhanced MRI. Biomed. Signal Process. Control 2021, 72, 103319. [Google Scholar] [CrossRef]
  249. Verburg, E.; van Gils, C.H.; van der Velden, B.H.M.; Bakker, M.F.; Pijnappel, R.M.; Veldhuis, W.B.; Gilhuijs, K.G.A. Deep Learning for Automated Triaging of 4581 Breast MRI Examinations from the DENSE Trial. Radiology 2022, 302, 29–36. [Google Scholar] [CrossRef] [PubMed]
  250. Dutta, K.; Roy, S.; Whitehead, T.; Luo, J.; Jha, A.; Li, S.; Quirk, J.; Shoghi, K. Deep Learning Segmentation of Triple-Negative Breast Cancer (TNBC) Patient Derived Tumor Xenograft (PDX) and Sensitivity of Radiomic Pipeline to Tumor Probability Boundary. Cancers 2021, 13, 3795. [Google Scholar] [CrossRef]
  251. Carvalho, E.D.; Silva, R.R.V.; Mathew, M.J.; Araujo, F.H.D.; de Carvalho Filho, A.O. Tumor Segmentation in Breast DCE- MRI Slice Using Deep Learning Methods. In Proceedings of the 2021 IEEE Symposium on Computers and Communications (ISCC), Athens, Greece, 5–8 September 2021; pp. 1–6. [Google Scholar] [CrossRef]
  252. Wang, H.; Cao, J.; Feng, J.; Xie, Y.; Yang, D.; Chen, B. Mixed 2D and 3D convolutional network with multi-scale context for lesion segmentation in breast DCE-MRI. Biomed. Signal Process. Control 2021, 68, 102607. [Google Scholar] [CrossRef]
  253. Nowakowska, S.; Borkowski, K.; Ruppert, C.; Hejduk, P.; Ciritsis, A.; Landsmann, A.; Macron, M.; Berger, N.; Boss, A.; Rossi, C. Deep Learning for Automatic Segmentation of Background Parenchymal Enhancement in Breast MRI. In Proceedings of the Medical Imaging with Deep Learning (MIDL), Zürich, Switzerland, 6–8 July 2022. [Google Scholar]
  254. Khaled, R.; Vidal, J.; Vilanova, J.C.; Martí, R. A U-Net Ensemble for breast lesion segmentation in DCE MRI. Comput. Biol. Med. 2021, 140, 105093. [Google Scholar] [CrossRef]
  255. Yue, W.; Zhang, H.; Zhou, J.; Li, G.; Tang, Z.; Sun, Z.; Cai, J.; Tian, N.; Gao, S.; Dong, J.; et al. Deep learning-based automatic segmentation for size and volumetric measurement of breast cancer on magnetic resonance imaging. Front. Oncol. 2022, 12, 984626. [Google Scholar] [CrossRef]
  256. Rahimpour, M.; Martin, M.-J.S.; Frouin, F.; Akl, P.; Orlhac, F.; Koole, M.; Malhaire, C. Visual ensemble selection of deep convolutional neural networks for 3D segmentation of breast tumors on dynamic contrast enhanced MRI. Eur. Radiol. 2022. [Google Scholar] [CrossRef]
  257. Zhu, J.; Geng, J.; Shan, W.; Zhang, B.; Shen, H.; Dong, X.; Liu, M.; Li, X.; Cheng, L. Development and validation of a deep learning model for breast lesion segmentation and characterization in multiparametric MRI. Front. Oncol. 2022, 12, 946580. [Google Scholar] [CrossRef] [PubMed]
  258. Wang, D.; Khosla, A.; Gargeya, R.; Irshad, H.; Beck, A.H. Deep learning for identifying metastatic breast cancer. arXiv 2016, arXiv:1606.05718. [Google Scholar]
  259. Bayramoglu, N.; Kannala, J.; Heikkilä, J. Deep learning for magnification independent breast cancer histopathology image classification. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancún, Mexico, 4–8 December 2016. [Google Scholar] [CrossRef] [Green Version]
  260. Xu, B.; Liu, J.; Hou, X.; Liu, B.; Garibaldi, J.; Ellis, I.O.; Green, A.; Shen, L.; Qiu, G. Look, investigate, and classify: A deep hybrid attention method for breast cancer classification. In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019); IEEE: Piscataway, NJ, USA, 2019; pp. 914–918. [Google Scholar]
  261. Xie, J.; Liu, R.; Luttrell, J.I.; Zhang, C. Deep Learning Based Analysis of Histopathological Images of Breast Cancer. Front. Genet. 2019, 10, 80. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  262. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. arXiv 2016, arXiv:1602.07261. [Google Scholar] [CrossRef]
  263. Khan, S.; Islam, N.; Jan, Z.; Din, I.U.; Rodrigues, J.J.P.C. A novel deep learning based framework for the detection and classification of breast cancer using transfer learning. Pattern Recognit. Lett. 2019, 125, 1–6. [Google Scholar] [CrossRef]
  264. Yan, R.; Ren, F.; Wang, Z.; Wang, L.; Zhang, T.; Liu, Y.; Rao, X.; Zheng, C.; Zhang, F. Breast cancer histopathological image classification using a hybrid deep neural network. Methods 2019, 173, 52–60. [Google Scholar] [CrossRef]
  265. Thuy, M.B.H.; Hoang, V.T. Fusing of deep learning, transfer learning and gan for breast cancer histopathological image classification. In Proceedings of the International Conference on Computer Science, Applied Mathematics and Applications, Hanoi, Vietnam, 19–20 December 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 255–266. [Google Scholar]
  266. Karras, T.; Laine, S.; Aila, T. A Style-Based Generator Architecture for Generative Adversarial Networks. arXiv 2019. [Google Scholar] [CrossRef]
  267. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976. [Google Scholar] [CrossRef] [Green Version]
  268. Cruz-Roa, A.; Gilmore, H.; Basavanhally, A.; Feldman, M.; Ganesan, S.; Shih, N.; Tomaszewski, J.; Madabhushi, A.; González, F. High-throughput adaptive sampling for whole-slide histopathology image analysis (HASHI) via convolutional neural networks: Application to invasive breast cancer detection. PLoS ONE 2018, 13, e0196828. [Google Scholar] [CrossRef]
  269. Albarqouni, S.; Christoph, B.; Felix, A.; Vasileios, B.; Stefanie, D.; Nassir, N. Aggnet: Deep learning from crowds for mitosis detection in breast cancer histology images. IEEE Trans. Med. Imaging 2016, 35, 1313–1321. [Google Scholar] [CrossRef]
  270. Naylor, P.; Laé, M.; Reyal, F.; Walter, T. Nuclei segmentation in histopathology images using deep neural networks. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; pp. 933–936. [Google Scholar] [CrossRef]
  271. Priego-Torres, B.M.; Sanchez-Morillo, D.; Fernandez-Granero, M.A.; Garcia-Rojo, M. Automatic segmentation of whole-slide H&E stained breast histopathology images using a deep convolutional neural network architecture. Expert Syst. Appl. 2020, 151, 113387. [Google Scholar] [CrossRef]
  272. Ming, Y.; Wu, N.; Qian, T.; Li, X.; Wan, D.Q.; Li, C.; Li, Y.; Wu, Z.; Wang, X.; Liu, J.; et al. Progress and Future Trends in PET/CT and PET/MRI Molecular Imaging Approaches for Breast Cancer. Front. Oncol. 2020, 10, 1301. [Google Scholar] [CrossRef] [PubMed]
  273. Macedo, F.; Ladeira, K.; Pinho, F.; Saraiva, N.; Bonito, N.; Pinto, L.; Gonçalves, F. Bone metastases: An overview. Oncol. Rev. 2017, 11, 321. [Google Scholar] [PubMed]
  274. Papandrianos, N.; Papageorgiou, E.; Anagnostis, A.; Feleki, A. A Deep-Learning Approach for Diagnosis of Metastatic Breast Cancer in Bones from Whole-Body Scans. Appl. Sci. 2020, 10, 997. [Google Scholar] [CrossRef] [Green Version]
  275. Weber, M.; Kersting, D.; Umutlu, L.; Schäfers, M.; Rischpler, C.; Fendler, W.P.; Buvat, I.; Herrmann, K.; Seifert, R. Just another “Clever Hans”? Neural networks and FDG PET-CT to predict the outcome of patients with breast cancer. Eur. J. Nucl. Med. Mol. Imaging 2021, 48, 3141–3150. [Google Scholar] [CrossRef] [PubMed]
  276. Zainudin, Z.; Shamsuddin, S.M.; Hasan, S. Deep Layer CNN Architecture for Breast Cancer Histopathology Image Detection. In Proceedings of the International Conference on Advanced Machine Learning Technologies and Applications (AMLTA2019), Cairo, Egypt, 28–30 March 2019; Springer: Berlin/Heidelberg, Germany, 2020; pp. 43–51. [Google Scholar] [CrossRef]
  277. Li, C.; Wang, X.; Liu, W.; Latecki, L.J.; Wang, B.; Huang, J. Weakly supervised mitosis detection in breast histopathology images using concentric loss. Med. Image Anal. 2019, 53, 165–178. [Google Scholar] [CrossRef] [PubMed]
  278. Das, D.K.; Dutta, P.K. Efficient automated detection of mitotic cells from breast histological images using deep convolution neutral network with wavelet decomposed patches. Comput. Biol. Med. 2018, 104, 29–42. [Google Scholar] [CrossRef]
  279. Gour, M.; Jain, S.; Kumar, T.S. Residual learning based CNN for breast cancer histopathological image classification. Int. J. Imaging Syst. Technol. 2020, 30, 621–635. [Google Scholar] [CrossRef]
  280. Saxena, S.; Shukla, S.; Gyanchandani, M. Pre-trained convolutional neural networks as feature extractors for diagnosis of breast cancer using histopathology. Int. J. Imaging Syst. Technol. 2020, 30, 577–591. [Google Scholar] [CrossRef]
  281. Hirra, I.; Ahmad, M.; Hussain, A.; Ashraf, M.U.; Saeed, I.A.; Qadri, S.F.; Alghamdi, A.M.; Alfakeeh, A.S. Breast Cancer Classification from Histopathological Images Using Patch-Based Deep Learning Modeling. IEEE Access 2021, 9, 24273–24287. [Google Scholar] [CrossRef]
  282. Senan, E.M.; Alsaade, F.W.; Al-mashhadani, M.I.A.; Aldhyani, T.H.H.; Al-Adhaileh, M.H. Classification of Histopathological Images for Early Detection of Breast Cancer Using Deep Learning. J. Appl. Sci. Eng. 2021, 24, 323–329. [Google Scholar] [CrossRef]
  283. Zewdie, E.T.; Tessema, A.W.; Simegn, G.L. Classification of breast cancer types, sub-types and grade from histopathological images using deep learning technique. Heal. Technol. 2021, 11, 1277–1290. [Google Scholar] [CrossRef]
  284. Kushwaha, S.; Adil, M.; Abuzar, M.; Nazeer, A.; Singh, S.K. Deep learning-based model for breast cancer histopathology image classification. In Proceedings of the 2021 2nd International Conference on Intelligent Engineering and Management (ICIEM), London, UK, 28–30 April 2021; pp. 539–543. [Google Scholar] [CrossRef]
  285. Gheshlaghi, S.H.; Kan, C.N.E.; Ye, D.H. Breast Cancer Histopathological Image Classification with Adversarial Image Synthesis. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual Conference, 1–5 November 2021; pp. 3387–3390. [Google Scholar] [CrossRef]
  286. Reshma, V.K.; Arya, N.; Ahmad, S.S.; Wattar, I.; Mekala, S.; Joshi, S.; Krah, D. Detection of Breast Cancer Using Histopathological Image Classification Dataset with Deep Learning Techniques. BioMed Res. Int. 2022, 2022, e8363850. [Google Scholar] [CrossRef] [PubMed]
  287. Joseph, A.A.; Abdullahi, M.; Junaidu, S.B.; Ibrahim, H.H.; Chiroma, H. Improved multi-classification of breast cancer histopathological images using handcrafted features and deep neural network (dense layer). Intell. Syst. Appl. 2022, 14, 200066. [Google Scholar] [CrossRef]
  288. Ahmad, N.; Asghar, S.; Gillani, S.A. Transfer learning-assisted multi-resolution breast cancer histopathological images classification. Vis. Comput. 2021, 38, 2751–2770. [Google Scholar] [CrossRef]
  289. Mathew, T.; Ajith, B.; Kini, J.R.; Rajan, J. Deep learning-based automated mitosis detection in histopathology images for breast cancer grading. Int. J. Imaging Syst. Technol. 2022, 32, 1192–1208. [Google Scholar] [CrossRef]
  290. Singh, S.; Kumar, R. Breast cancer detection from histopathology images with deep inception and residual blocks. Multimed. Tools Appl. 2021, 81, 5849–5865. [Google Scholar] [CrossRef]
  291. Mejbri, S.; Franchet, C.; Reshma, I.A.; Mothe, J.; Brousset, P.; Faure, E. Deep Analysis of CNN Settings for New Cancer whole-slide Histological Images Segmentation: The Case of Small Training Sets. In Proceedings of the 6th International conference on BioImaging (BIOIMAGING 2019), Prague, Czech Republic, 22–24 February 2019; pp. 120–128. [Google Scholar] [CrossRef]
  292. Guo, Z.; Liu, H.; Ni, H.; Wang, X.; Su, M.; Guo, W.; Wang, K.; Jiang, T.; Qian, Y. A Fast and Refined Cancer Regions Segmentation Framework in Whole-slide Breast Pathological Images. Sci. Rep. 2019, 9, 882. [Google Scholar] [CrossRef] [Green Version]
  293. Budginaitė, E.; Morkūnas, M.; Laurinavičius, A.; Treigys, P. Deep Learning Model for Cell Nuclei Segmentation and Lymphocyte Identification in Whole Slide Histology Images. Informatica 2021, 32, 23–40. [Google Scholar] [CrossRef]
  294. Pedersen, A.; Smistad, E.; Rise, T.V.; Dale, V.G.; Pettersen, H.S.; Nordmo, T.-A.S.; Bouget, D.; Reinertsen, I.; Valla, M. H2G-Net: A multi-resolution refinement approach for segmentation of breast cancer region in gigapixel histopathological images. Front. Med. 2022, 9, 971873. [Google Scholar] [CrossRef]
  295. Engstrøm, M.J.; Opdahl, S.; Hagen, A.I.; Romundstad, P.R.; Akslen, L.A.; Haugen, O.A.; Vatten, L.J.; Bofin, A.M. Molecular subtypes, histopathological grade and survival in a historic cohort of breast cancer patients. Breast Cancer Res. Treat. 2013, 140, 463–473. [Google Scholar] [CrossRef] [Green Version]
  296. Khalil, M.-A.; Lee, Y.-C.; Lien, H.-C.; Jeng, Y.-M.; Wang, C.-W. Fast Segmentation of Metastatic Foci in H&E Whole-Slide Images for Breast Cancer Diagnosis. Diagnostics 2022, 12, 990. [Google Scholar] [CrossRef] [PubMed]
  297. Yang, Q.; Liu, Y.; Cheng, Y.; Kang, Y.; Chen, T.; Yu, H. Federated Learning. Synth. Lect. Artif. Intell. Mach. Learn. 2019, 13, 1–207. [Google Scholar] [CrossRef]
  298. Zhang, W.; Deng, L.; Zhang, L. A Survey on Negative Transfer. IEEE Trans. Neural Netw. Learn. Syst. 2021, 13, 1–25. [Google Scholar]
  299. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. arXiv 2016, arXiv:1602.04938. [Google Scholar]
  300. Wu, C.; Ma, S. A selective review of robust variable selection with applications in bioinformatics. Brief. Bioinform. 2015, 16, 873–883. [Google Scholar] [CrossRef]
Figure 2. Images of cancerous breast tissue by DBT imaging modality [61]. Reprinted/adapted with permission from [61]. 2021, Elsevier.
Figure 3. Ultrasound images of normal, benign, and malignant breast tissue [80].
Figure 4. MRI images of dense breast tissue acquired from different angles. (A) Normal; (B) malignant [82]. Reprinted/adapted with permission from [82]. 2011, Elsevier.
Figure 5. H&E (haematoxylin and eosin)-stained histopathology image of a benign breast tissue case [105]. Reprinted/adapted with permission from [105]. 2017, Elsevier.
Figure 6. Example of PET images for breast cancer analysis [118]. Reprinted/adapted with permission from [118]. 2021, Elsevier.
Figure 7. Schematic diagram of the proposed YOLO-based CAD system in [131]. Reprinted/adapted with permission from [131]. 2021, Elsevier.
Figure 10. Prediction of breast cancer grades from patches extracted from histopathology images via a patch-wise LSTM architecture [264]. Reprinted/adapted with permission from [264]. 2019, Elsevier.
Table 1. Advantages and limitations of various imaging modalities.

Mammography (MM)
Advantages:
  • Used in more than 70% of studies (computational and experimental) for breast cancer analysis
  • Time- and cost-effective image capture and processing compared to other modalities
  • Requires less specialized radiological expertise for diagnosis and cancer detection than other methods
Limitations:
  • Cannot capture micro-calcifications because MMs are produced with low-dose X-rays
  • Limited capability for diagnosing cancer in dense breasts
  • Requires additional testing for an accurate diagnosis
  • Requires extensive pre-processing before classification, because structures such as the breast border, fibrous strands, and hypertrophied lobules can be misinterpreted
  • Problems visualizing cancer in high-density breasts

Ultrasound (US)
Advantages:
  • Very efficient at reducing false-negative rates because it captures images from different views and angles
  • Highly safe and efficient for routine checkups because US is non-invasive
  • Able to detect invasive cancer areas
  • Highly recommended for identifying breast lesion ROIs because of additional features such as color-coded SWE images
Limitations:
  • Captures low-quality images when examining larger amounts of tissue
  • SWE images are difficult to interpret
  • A single Nakagami parametric image cannot detect cancerous tissue
  • Proper ROI estimation is very difficult because the shadowing effect makes the tumor contour unclear

MRI
Advantages:
  • Safe method: no exposure to harmful ionizing radiation
  • Captures images with more detail
  • Captures more suspicious areas for further analysis than other modalities
  • Can be improved by adding contrast agents to produce more detailed images
Limitations:
  • Misses some tumors, but can be used as a complement to MMs
  • Increases body temperature
  • May lead to some allergies
  • Invasive and potentially risky when contrast agents are used

Histopathology (HP)
Advantages:
  • Produces color-coded (stained) images that help detect cancer subtypes and support early detection
  • Widely used in cancer diagnosis, similar to MMs
  • Shows tissue in two forms: WSIs and ROIs extracted from WSIs
  • Provides more reliable diagnostic results than any other imaging modality
  • ROIs increase the accuracy of cancer diagnosis and analysis
  • Slides can be stored for future analysis
Limitations:
  • Expensive and time-consuming to analyze; requires a highly expert pathologist
  • ROI extraction and analysis are tedious, so fatigue may reduce the accuracy of analysis
  • Analysis of HPs depends strongly on many factors, such as fixation, lab protocols, sample orientation, human expertise in tissue preparation, and color variation
  • The hardest imaging modality for applying DL to cancer classification, and analysis needs substantial computational resources

DBT
Advantages:
  • Increases the cancer detection rate
  • Can find cancers that are entirely missed on MMs
  • Presents a unique opportunity for AI systems to help develop DBT-based practices from the ground up
  • Captures a more detailed view of tissue by rotating the X-ray emitter to acquire multiple images
  • Has great capability to distinguish small lesions that may be obscured in the projections obtained with MMs
Limitations:
  • Time-consuming and expensive because 3D images must be reconstructed
  • Lack of proper data curation and labeling
  • Accuracy decreases when 2D slices are used instead of 3D volumes
  • With 2D slices alone, it is still unclear whether AI models operate better on abnormalities labeled with bounding boxes or with tightly drawn lesion margins
  • DBT studies easily require an order of magnitude (or more) more storage than MMs

PET
Advantages:
  • Efficient for analyzing small lesions
  • Great capability to detect metastases at different sites and organs
  • Examines the entire patient for local recurrence, lymph node metastases, and distant metastases with a single injection of activity
  • Highly recommended for patients with dense breasts or implants
Limitations:
  • Poor detection rates for small or non-invasive breast cancers
  • May miss osteoblastic metastases, which show lower metabolic activity
Table 2. Public datasets for different imaging modalities for breast cancer analysis.
Imaging ModalityPublic DatasetLink of DatasetInformation about Dataset
accessed date: 25 September 2022
426 benign and 310 malignant
accessed date: 25 September 2022
1865 typical cases and 932 abnormal
accessed date: 25 September 2022
133 abnormal and 189 of normal class
accessed date: 25 September 2022
912 benign and 784 malignant
accessed date: 25 September 2022
410 malignant
accessed date: 25 September 2022
472 normal 278 abnormal
accessed date: 25 September 2022
48 benign 52 malignant
accessed date: 25 September 2022
620 benign 210 malignant
accessed date: 25 September 2022
200 benign 200 malignant
accessed date: 25 September 2022
110 benign 53 malignant
accessed date: 25 September 2022
42 malignant
accessed date: 25 September 2022
559 malignant
accessed date: 25 September 2022
328 malignant
accessed date: 25 September 2022
500 malignant
accessed date: 25 September 2022
267 normal 44 abnormal
accessed date: 25 September 2022
91 malignant
accessed date: 25 September 2022
2480 benign and 5429 malignant
accessed date: 25 September 2022
240 benign 160 malignant
accessed date: 25 September 2022
50 benign 23 malignant
accessed date: 25 September 2022
37 benign 38 malignant
ICPR 2012
accessed date: 25 September 2022
50 malignant
accessed date: 25 September 2022
162 malignant
accessed date: 25 September 2022
357 benign and 212 malignant
accessed date: 25 September 2022
173 malignant
accessed date: 25 September 2022
2031 normal 1974 malignant
accessed date: 25 September 2022
23 malignant
accessed date: 25 September 2022
1097 malignant
accessed date: 25 September 2022
22,032 DBT volume from 5610 subjects (89 malignant, 112 benign, 5129 normal)
Table 3. The summary of the studies that used MM and DBT datasets (S: segmentation; C: classification).

Study | Year | Task | Model | Modality | Dataset | Performance
Agnes et al. [146] | 2020 | Classification | Multiscale All CNN | MM | MIAS | Acc = 96.47%
Shu et al. [156] | 2020 | Classification | CNN | MM | INbreast, CBIS-DDSM | INbreast: Acc = 92.2%; CBIS: Acc = 76.7%
Singh et al. [150] | 2020 | Classification | CNN | FFDM and DBT | Private | FFDM: AUC = 0.9; DBT: AUC = 0.85
Boumaraf et al. [157] | 2020 | Classification | DBN (Deep Belief Network) | MM | DDSM | Acc = 84.5%
Matthews et al. [158] | 2021 | Classification | Transfer learning based on ResNet | DBT | Private | AUC = 0.9
Zhang et al. [159] | 2021 | Classification | GNN (Graph Neural Network) + CNN | MM | MIAS | Acc = 96.1%
Li et al. [160] | 2021 | Classification | SVM (Support Vector Machine) | MM | INbreast | Acc = 84.6%
Saber et al. [161] | 2021 | Classification | CNN/Transfer learning | MM | MIAS | Acc = 98.87%; F-score = 99.3%
Malebary et al. [162] | 2021 | Classification | CNN | MM | DDSM, MIAS | DDSM: Acc = 97%; MIAS: Acc = 97%
Li et al. [163] | 2021 | Classification | CNN-RNN (Recurrent Neural Network) | MM | DDSM | Acc = 94.7%; Recall = 94.1%; AUC = 0.968
Ueda et al. [164] | 2022 | Classification | CNN | MM | Private | AUC = 0.93
Mota et al. [165] | 2022 | Classification | CNN | DBT | VICTRE | AUC = 0.941
Bai et al. [166] | 2022 | Classification | GCN (Graph Convolutional Network) | DBT | BCS-DBT | Acc = 84%; AUC = 0.87
Zhu et al. [167] | 2018 | Mass Segmentation | FCN (Fully Convolutional Network) + CRF (Conditional Random Field) | MM | INbreast, DDSM-BCRP | INbreast: Dice = 90.97%; DDSM-BCRP: Dice = 91.3%
Wang et al. [168] | 2019 | Mass Segmentation | MNPNet (Multi-Level Nested Pyramid Network) | MM | INbreast, DDSM-BCRP | INbreast: Dice = 91.1%; DDSM-BCRP: Dice = 91.69%
Saffari et al. [169] | 2020 | Dense tissue Segmentation/Classification | cGAN and CNN | MM | INbreast | S: Acc = 98%; C: Acc = 97.85%
Ahmed et al. [170] | 2020 | Tumor Segmentation/Classification | DeepLab/Mask RCNN | MM | MIAS | DeepLab: C: Acc = 95%, S: MAP = 72%; Mask RCNN: C: Acc = 98%, S: MAP = 80%
Buda et al. [171] | 2020 | Lesion detection | CNN | DBT | Private | Sensitivity = 65%
Cheng et al. [172] | 2020 | Mass Segmentation | Spatial Enhanced Rotation Aware Net | MM | DDSM | Dice = 84.3%; IOU = 73.95%
Chen et al. [173] | 2020 | Mass Segmentation | Modified U-Net | MM | INbreast, CBIS-DDSM | INbreast: Dice = 81.64%; CBIS: Dice = 82.16%
Soleimani et al. [174] | 2020 | Breast-Pectoral Segmentation | CNN | MM | MIAS, CBIS-DDSM, INbreast | MIAS: Dice = 97.59%; CBIS: Dice = 97.69%; INbreast: Dice = 96.39%
Al-antari et al. [175] | 2020 | Breast lesion Segmentation/Classification | YOLO | MM | DDSM, INbreast | DDSM: F1-score = 99.28%, Acc = 97.5%; INbreast: F1-score = 98.02%, Acc = 95.32%
Li et al. [176] | 2020 | Mass Segmentation | Siamese-Faster-RCNN | MM | INbreast | TP = 0.88, 0.85, 0.85
Peng et al. [177] | 2020 | Mass Segmentation | Faster RCNN | MM | CBIS-DDSM | TP = 0.93; TP = 0.95
Kavitha et al. [178] | 2021 | Mass Segmentation/Classification | CapsNet | MM | MIAS | MIAS: Acc = 98.5%; Acc = 97.55%
Shoshan et al. [179] | 2021 | Lesion detection | CNN | DBT | DBTex challenge | Avg. sensitivity = 0.91
Hossain et al. [180] | 2022 | Lesion detection | CNN | DBT | DBTex challenge | Avg. sensitivity = 0.815
Hossain et al. [181] | 2022 | Lesion detection | CNN | DBT | DBTex challenge | Avg. sensitivity = 0.84
Atrey et al. [182] | 2022 | Breast lesion Segmentation | CNN | MM | DDSM | Dice = 65%
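The classification rows above report performance mainly as accuracy and AUC. As a point of reference, both metrics reduce to a few lines of NumPy; the sketch below is illustrative only (it is not code from any of the cited studies) and uses the standard Mann-Whitney rank formulation of AUC:

```python
import numpy as np

def accuracy(y_true, y_score, threshold=0.5):
    """Fraction of correct predictions after thresholding scores."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    return float(np.mean(y_pred == np.asarray(y_true)))

def auc(y_true, y_score):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen malignant case scores higher
    than a randomly chosen benign case (ties count one half)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # all pairwise comparisons between positive and negative scores
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = [0, 0, 1, 1]            # 0 = benign, 1 = malignant
scores = [0.1, 0.4, 0.35, 0.8]   # model probabilities of malignancy
print(accuracy(labels, scores))  # 0.75
print(auc(labels, scores))       # 0.75
```

With scikit-learn available, `sklearn.metrics.roc_auc_score` returns the same value for this example.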
Table 4. The summary of the studies that used ultrasound datasets (S: segmentation; C: classification).

Study | Year | Task | Model | Dataset | Performance
Byra et al. | 2019 | Classification | Transfer learning based on VGG-19 and InceptionV3 | OASBUD | VGG19: AUC = 0.822; InceptionV3: AUC = 0.857
Byra et al. | 2019 | Classification | Transfer learning based on VGG19 | Private | AUC = 0.936
Hijab et al. | 2019 | Classification | Transfer learning based on VGG16 | Private | Acc = 97.4%; AUC = 0.98
Zhang et al. | 2019 | Classification | Deep Polynomial Network (DPN) | Private | Acc = 95.6%; AUC = 0.961
Fujioka et al. | 2020 | Classification | CNN | Private | AUC = 0.87
Wu et al. | 2020 | Classification | Random Forest (RF) | Private | Acc = 86.97%
Wu et al. | 2020 | Classification | Generalized Regression Neural Network (GRNN) | Private | Acc = 87.78%; F1 score = 86.15%
Gong et al. | 2020 | Classification | Multi-view Deep Neural Network Support Vector Machine (MDNNSVM) | Private | Acc = 86.36%; AUC = 0.908
Moon et al. | 2020 | Classification | VGGNet + ResNet + DenseNet (ensemble loss) | SNUH | Acc = 91.1%, AUC = 0.9697; Acc = 94.62%, AUC = 0.9711
Zhang et al. | 2020 | Classification | CNN | Private | AUC = 1
Yousef Kalaf et al. | 2021 | Classification | Modified VGG16 | Private | Acc = 93%; F1 score = 94%
Misra et al. | 2022 | Classification | Transfer learning based on AlexNet and ResNet | Private | Acc = 90%
Vakanski et al. | 2020 | Tumor Segmentation | CNN | BUSI | Acc = 98%; Dice = 90.5%
Byra et al. | 2020 | Mass Segmentation | CNN | Private | Acc = 97%; Dice = 82.6%
Singh et al. | 2020 | Tumor Segmentation | CNN | Mendeley, UDIAT | Mendeley: Dice = 93.76%; UDIAT: Dice = 86.82%
Han et al. | 2020 | Lesion Segmentation | GAN | Private | Dice = 87.12%
Wang et al. | 2021 | Lesion Segmentation | Residual feedback network | Three datasets (3: Radiopaedia) | 1: Dice = 86.91%; 2: Dice = 81.79%; 3: Dice = 87%
Wang et al. | | | | Ultrasoundcases, BUSI, STU Hospital | Ultrasoundcases: Dice = 84.71%; BUSI: Dice = 83.76%; STU Hospital: Dice = 86.52%
Li et al. | 2022 | Tumor Segmentation + Classification | DeepLab3 | Private | S: Dice = 77.3%; C: Acc = 94.8%
Byra et al. | 2022 | Mass Segmentation + Classification | Y-Net | Private | S: Dice = 64.0%; C: AUC = 0.87
Table 5. Summary of the studies that used MRI datasets (S: segmentation; C: classification).

Study | Year | Task | Model | Dataset | Performance
Ha et al. [238] | 2019 | Classification | CNN | Private | Acc = 70%
Ha et al. [239] | 2019 | Classification | CNN | Private | Acc = 88%
Fang et al. [240] | 2019 | Classification | CNN | Private | Acc = 70.5%
Zheng et al. [241] | 2020 | Classification | CNN | TCIA | Acc = 97.2%
Holste et al. [242] | 2021 | Classification | Fusion deep learning | Private | AUC = 0.9
Winkler et al. [243] | 2021 | Classification | CNN | Private | Acc = 92.8%
Fujioka et al. [244] | 2021 | Classification | CNN | Private | AUC = 0.89
Liu et al. [245] | 2022 | Classification | Weakly supervised ResNet-101 | Private | AUC = 0.92; Acc = 94%
Bie et al. [246] | 2022 | Classification | CNN | Private | Acc = 92%; Specificity = 94%
Jing et al. [247] | 2022 | Classification | U-Net and ResNet-34 | Private | AUC = 0.81
Wu et al. [248] | 2022 | Classification | CNN | Private | Acc = 87.7%; AUC = 0.912
Verburg et al. [249] | 2022 | Classification | CNN | Private | AUC = 0.83
Dutta et al. [250] | 2021 | Tumor Segmentation | Multi-contrast D-R2UNet | Private | F1 score = 95%
Carvalho et al. [251] | 2021 | Tumor Segmentation | SegNet and U-Net | QIN Breast DCE-MRI | Dice = 97.6%; IOU = 95.3%
Wang et al. [252] | 2021 | Lesion Segmentation | CNN | Private | Dice = 76.4%
Nowakowska et al. [253] | 2022 | Segmentation of BPE area and non-enhancing tissue | CNN | Private | Dice = 76%
Khaled et al. [254] | 2022 | Lesion Segmentation | 3D U-Net | TCGA-BRCA | Dice = 68%
Yue et al. [255] | 2022 | Segmentation | Res_U-Net | Private | Dice = 89%
Rahimpour et al. [256] | 2022 | Tumor Segmentation | 3D U-Net | Private | Dice = 78%
Zhu et al. [257] | 2022 | Lesion Segmentation/Classification | V-Net | Private | S: Dice = 86%; Avg. AUC = 0.84
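The segmentation studies summarized above are scored with the Dice coefficient and IOU (intersection over union) on binary masks. The following minimal NumPy sketch of both overlap metrics is illustrative only, with invented toy masks, and is not taken from any of the cited papers:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice coefficient: 2 * |A and B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-8):
    """Intersection over union (Jaccard index) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# toy 4x4 "lesion" masks: the prediction covers 2 of 3 ground-truth pixels
gt   = np.array([[0,0,0,0],[0,1,1,0],[0,1,0,0],[0,0,0,0]])
pred = np.array([[0,0,0,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]])
print(round(dice(pred, gt), 3))  # 0.8
print(round(iou(pred, gt), 3))   # 0.667
```

The two metrics always move together: for non-empty masks, IOU = Dice / (2 - Dice), so Dice is never smaller than IOU.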
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
