Journal of Imaging
  • Review
  • Open Access

18 September 2021

A Bottom-Up Review of Image Analysis Methods for Suspicious Region Detection in Mammograms

1 Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
2 Department of Computing and Informatics, Bournemouth University, Poole, Dorset BH12 5BB, UK
* Authors to whom correspondence should be addressed.
This article belongs to the Special Issue Advances in IoMT, Deep Learning and Computer Vision for Mammographic Image Analysis

Abstract

Breast cancer is one of the most common causes of death amongst women all over the world. Early detection of breast cancer plays a critical role in increasing the survival rate. Various imaging modalities, such as mammography, breast MRI, ultrasound and thermography, are used to detect breast cancer. Although mammography has achieved considerable success in biomedical imaging, detecting suspicious areas remains challenging: the examination is manual, masses vary in shape, size and other morphological features, and mammography accuracy decreases with increasing breast density. Furthermore, analysing many mammograms per day can be a tedious task for radiologists and practitioners. One of the main objectives of biomedical imaging is to provide radiologists and practitioners with tools that help them identify all suspicious regions in a given image. Computer-aided mass detection in mammograms can serve as a second opinion tool and help radiologists avoid oversight errors. The scientific community has made much progress on this topic, and several approaches have been proposed along the way. Following a bottom-up narrative, this paper surveys different scientific methodologies and techniques for detecting suspicious regions in mammograms, spanning from methods based on low-level image features to the most recent novelties in AI-based approaches. Both theoretical and practical grounds are provided across the paper's sections to highlight the pros and cons of different methodologies. The paper's main scope is to let readers embark on a journey through a fully comprehensive description of techniques, strategies and datasets on the topic.

1. Introduction

Breast cancer is one of the most commonly diagnosed diseases amongst women worldwide. It is mainly detected on screening exams or at the onset of clinical symptoms. Most breast cancers start in the mammary glands [1]. The incidence of breast cancer has increased all over the world, and around one million new cases are reported every year [2]. Imaging examinations are the most effective method for diagnosing this cancer. Radiologists use various imaging modalities, such as mammography, breast MRI, ultrasound, thermography and histopathology imaging. Visual inspection of images allows clinicians to identify suspicious areas that deserve further and more in-depth analysis. Visual inspection is, however, an operator-dependent and time-consuming task. Over the last few decades, both academics and tech companies have proposed and developed computer-aided methods to assist radiologists in diagnosis. Nowadays, CADe (computer-aided detection) and CADx (computer-aided diagnosis) systems are adopted as second opinion tools by expert clinicians for the detection of suspicious regions or abnormalities [3,4]. Most CADe and CADx tools rely on image analysis, machine learning (ML) and deep learning (DL) approaches.
Malignant and benign masses are abnormal regions or cells that can be identified in mammograms. Various visual descriptors, such as shape, margin and density, are used to categorise abnormal cells. These descriptors are adopted in BI-RADS (Breast Imaging Reporting and Data System) [5], developed by the American College of Radiology. Shape and margin are adequate and discriminating descriptors for detecting masses [6]. For mammogram patch detection, low-level image features, such as interest keypoints, area, orientation, perimeter and intensity, are frequently used [7,8]. A great deal of work has been done to detect mammogram lesions using low-level image features, such as shape, texture and local keypoint descriptors, which are discussed in this work.
AI (artificial intelligence) approaches, such as machine learning (ML) and deep learning (DL), have gradually replaced these image processing-based techniques (e.g., methods relying on the analysis of low-level image descriptors, such as texture, local keypoints and boundaries) because of their higher accuracy rates. Machine learning links the problem of learning from input data samples to the universal rules of inference. This approach uses analytical, statistical and mathematical techniques that allow machines to infer knowledge from training data without explicit programming. Some machine learning approaches [9,10,11], such as support vector machines (SVM), naïve Bayes, artificial neural networks (ANN) and ensemble classifiers [12], have become quite common for the development of computer-aided detection systems for breast cancer. Machine learning techniques usually rely on a preliminary image feature extraction step. Generally, the image features are described with arrays, namely descriptors, which feed the training process. The appropriate choice of features therefore plays a fundamental role in the overall training accuracy. Historically, several challenges of this kind motivated deep learning [13], which represents an evolution of the traditional machine learning paradigm. Deep learning focuses on knowledge inference mechanisms from data and achieves higher levels of generalisation than conventional machine learning. One of the most influential deep learning networks is the so-called CNN (convolutional neural network), characterised by convolutional layers. Unlike traditional machine learning approaches, deep learning techniques do not require an explicit feature extraction step, because their many inner layers effectively perform feature extraction along the way through layer-embedded operators. DL-based algorithms are not trained to classify abnormal masses by feeding them information about shape, size, pattern and other features; the algorithm itself learns what the mass looks like [14], using thousands of images during the training process. More details about techniques, architectures and models are provided in the corresponding sections of the paper.
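As a concrete illustration of this difference, the minimal sketch below (an illustrative PyTorch example, not a model taken from any surveyed study; the patch size, layer widths and two-class head are assumptions) shows a small CNN that maps raw grayscale mammogram patches to benign/malignant logits, learning its own feature maps instead of relying on hand-crafted descriptors.

```python
# Minimal, illustrative CNN patch classifier (assumed architecture, not from the surveyed papers).
import torch
import torch.nn as nn

class MassPatchCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Convolutional layers learn the features directly from pixel data.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)       # learned feature maps
        x = torch.flatten(x, 1)    # one descriptor per patch, learned end to end
        return self.classifier(x)  # benign vs. malignant logits

model = MassPatchCNN()
dummy_patch = torch.randn(1, 1, 224, 224)  # one hypothetical grayscale mammogram patch
print(model(dummy_patch).shape)            # torch.Size([1, 2])
```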
Publicly available and adequately annotated datasets are rare in the medical imaging field; hence, methods are needed that can train models on a low number of annotated images while still reaching a high accuracy rate. In this regard, two main approaches, transfer learning and unsupervised deep learning, turn out to be quite helpful. The former addresses the lack of hand-labelled data by taking pre-existing deep learning architectures and fine-tuning them on a new application domain with a reduced number of samples [15]. The latter mainly derives representations directly from data and uses them for data-driven decision making. These approaches are more robust, in the sense that they provide a basis for a variety of complex problems, such as compression, classification, denoising and dimensionality reduction. Unsupervised learning is also combined with supervised learning to create models with better generalisation. Autoencoders and generative adversarial networks are widely adopted unsupervised deep learning approaches, which are discussed in the paper.
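As a minimal sketch of the transfer learning idea (assuming PyTorch/torchvision and an ImageNet-pretrained ResNet-18; the two-class head and learning rate are illustrative, not values from any surveyed method), fine-tuning typically freezes the pre-trained backbone and retrains only a new classification head on the small mammogram dataset:

```python
# Illustrative transfer learning setup; dataset loading and the training loop are omitted.
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone (older torchvision versions use pretrained=True instead of weights=...).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained convolutional layers...
for param in backbone.parameters():
    param.requires_grad = False

# ...and replace the final fully connected layer with a new two-class head,
# whose weights are the only ones updated during fine-tuning.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
```

In practice, some or all backbone layers are often unfrozen after a few epochs; the sketch only shows the simplest variant.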

1.1. Motivation and Study Criteria

The main objective of this paper is to discuss the different techniques in the literature for detecting and/or classifying suspicious regions in mammograms, spanning from low-level image features to machine learning techniques and deep learning approaches. In an attempt to feed the open debate on the topic mentioned earlier, the paper aims at answering the following questions:
  • What are the various techniques to extract low-level image features from mammograms?
  • What machine learning approaches tackle the detection of suspicious regions in breast images?
  • What are the various supervised and unsupervised deep learning approaches used for breast image analysis to detect and/or classify a suspicious region from a mammography image?
  • What are the most commonly cited and publicly available mammogram datasets?
The survey also briefly discusses the various forms of breast abnormalities, the morphological features used by radiologists to detect suspicious masses, and the standard projection views of mammograms. This article further presents commonly cited and publicly available datasets of breast mammograms and compares them. Furthermore, this paper mainly presents a comprehensive study of the various methods in the scientific literature for the detection of suspicious regions in mammograms. Three main groups of methods are presented in this work: low-level image feature-based approaches, machine learning approaches, and deep learning approaches. The scientific literature is full of techniques that fall within each of these categories. One of the objectives of this paper is to discuss the most used and cited ones in the mammogram analysis domain.
This paper surveys hundreds of articles from indexed and refereed journals, conference proceedings and books drawn from major online scientific databases, including IEEE Xplore, Web of Science, Scopus and PubMed. Insightful and comprehensive surveys on mammographic image analysis are already present in the scientific literature. Sadoughi et al. [16], for instance, thoroughly covered image processing techniques for detecting breast cancer, mostly focusing on artificial intelligence techniques. This paper instead aims to offer a bottom-up review, spanning both low-level image analysis and artificial intelligence techniques and providing the reader with all the materials needed to start working on the topic. To allow a comparative analysis amongst studies, the paper reports relevant information, such as references, techniques used, scope of work, datasets and various performance metrics.

1.2. Paper Organization

The overall structure of the paper is as follows. Section 2 provides readers with a description of some clinical aspects of breast cancer in terms of mammogram projection views and the various forms of breast abnormalities in mammograms. Section 3 provides an up-to-date list and details of mammogram datasets along with their comparison. A link to the URL of each dataset is also provided. Section 4 reviews the related techniques, focusing on three categories and different approaches. Finally, the paper ends with a discussion (Section 5), followed by a conclusion (Section 6). The organisation of the entire paper is depicted in Figure 1.
Figure 1. Organization of paper.

2. Breast Cancer: Clinical Aspects

2.1. Breast Positioning and Projection View

The early detection of breast cancer depends on some crucial factors, such as the quality of the imaging technique and the patient's position while the mammogram images are being taken. Breast positioning plays a critical role in the process: improper positioning may result in an inconclusive examination and mammogram artefacts. Mediolateral oblique (MLO) and bilateral craniocaudal (CC) represent the standard mammogram views. Both views encompass routine clinical screening mammography, as depicted in Figure 2. Proper head-turning of the patient is essential to obtain the CC view, and proper raising of the patient's arm is needed to obtain the MLO view. A correct CC projection should demonstrate the pectoral muscle on the posterior breast edge, the maximum amount of breast tissue and the retromammary space. As described by Moran et al. [17], a proper MLO view should ideally show the axilla, the tail of the axilla, and the inframammary fold along with the breast tissue. For an adequate breast cancer diagnosis, it is crucial to have multi-view mammographic data: single-view mammograms may not provide enough information for a complete screening, and some lesions might be missed. Andersson et al. [18] focused on the influence of the number of projections in mammography on breast disease detection. They reviewed 491 cases of breast cancer and evaluated the diagnostic importance of the standard projection views. In their study, they reported that 90% of the malignancies were detected with a single projection view; the percentage of detected malignancies increased to 94% with multi-view projections. Furthermore, the latter reasonably lowers the number of false positives. Nowadays, many publicly available datasets include multi-view images [19].
Figure 2. MLO and CC views of a mammogram. Red highlighted sections in the images indicate abnormalities. The left images show the right MLO and CC views of a benign calcification in the upper outer quadrant of the right breast. The right images show the MLO and CC views of a spiculated mass lesion in the lower inner quadrant of the left breast.

2.2. Various Forms of Breast Abnormalities

Breast abnormalities can assume different shapes and characteristics: mass (lesion), architectural distortion, calcification and asymmetry, as shown in Figure 3. These images are taken from publicly available mammogram datasets. This section briefly overviews these abnormalities and associated features.
Figure 3. Categories of breast abnormalities. (A) Mass—well-defined irregular lesion, suspicious spiculated mass. (B) Architectural distortion. (C) Calcification—discrete microcalcification. (D) Asymmetry.
  • Mass: A mass is a 3D lesion that can be seen in various projections. Morphological features, such as shape, margin and density, are used for mass characterisation. The shape can be round, oval or irregular. The margin can be obscured, microlobulated, spiculated, indistinct or circumscribed. Figure 4 shows a graphical representation of these morphological features (shape and margin) of a mass along with their subcategories. When superimposed breast tissues hide the margin, it is called obscured or partially obscured. A microlobulated margin suggests a suspicious finding. A spiculated margin with radiating lines is also a suspicious finding, as is an indistinct margin, also termed ill-defined. A circumscribed margin denotes a well-defined mass, which is a benign finding. Density can be high, low or fat-containing. The density of a mass is related to the expected attenuation of an equal volume of fibroglandular tissue [6,20]. High density is associated with malignancy.
    Figure 4. Taxonomy of breast abnormalities and morphological features in mammograms.
  • Architectural distortion: This abnormality is found when the normal architecture is distorted without a definite visible mass. Architectural distortion may include straight thin lines, spiculated radiating lines, or focal retraction [6,20]. This abnormality can also be seen as an additional feature: if there is a mass with distortion, it is likely to be malignant.
  • Calcification: Calcifications are tiny spots of calcium that develop in the breast tissues. The arrangement of calcifications can be diffuse, regional, clustered, linear or segmental [6,20]. There are two types: macrocalcifications and microcalcifications. Macrocalcifications are large white dots, often spread randomly within the breast area. Microcalcifications are small deposits of calcium, usually non-cancerous, but if they appear in particular patterns and clusters, they may reveal an early sign of malignancy (a minimal enhancement sketch based on a white top-hat transform is given after this list).
  • Asymmetries: These are findings that show unilateral deposits of fibroglandular tissue and do not fit the definition of a mass. They can be seen in only one projection and are mainly caused by the superimposition of normal breast tissues [6,20].
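Because microcalcifications appear as small, bright deposits against a smoother breast background, a common enhancement step is a morphological white top-hat transform, which keeps small bright structures and suppresses the slowly varying background. The sketch below is a minimal, illustrative example (the file name, structuring-element size and threshold are assumptions, not values from the surveyed papers):

```python
# Illustrative microcalcification enhancement with a white top-hat transform.
import numpy as np
from skimage import io, morphology

mammogram = io.imread("mammogram.pgm").astype(np.float32)  # hypothetical input image

# Structuring element slightly larger than the expected calcification size.
selem = morphology.disk(7)
# Older scikit-image versions use selem= instead of footprint=.
enhanced = morphology.white_tophat(mammogram, footprint=selem)

# Keep only the strongest residual bright spots as candidate calcifications.
candidates = enhanced > np.percentile(enhanced, 99.5)
print("candidate pixels:", int(candidates.sum()))
```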
Morphological features play an essential role in diagnosing breast diseases. Several studies evaluated the effectiveness of these features in diagnosing the disease and suggesting malignancy. Gemignani [21] presented a study on breast diseases in which mammographic lesions and microcalcifications were examined. According to this study, masses with spiculated boundaries and irregular shapes have the highest chance of being carcinoma, a common type of breast cancer. Kamal et al. [22] used the morphological descriptors of BI-RADS for the characterisation of breast lesions. The study was carried out on a total of 261 breast lesions identified on contrast-enhanced spectral mammography in 239 patients. The authors concluded that morphological descriptors can be applied to characterise lesions; the most suggestive descriptors are irregularly shaped mass lesions with spiculated and irregular margins. Wedegärtner et al. [23] presented a study to check the usefulness of morphological features in distinguishing between malignant and benign masses. The results of the study show that an irregular lesion shape is highly indicative of malignancy. The overall taxonomy of breast abnormalities and morphological features in mammograms is presented in Figure 4.
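For illustration, the snippet below sketches how some of the shape descriptors mentioned above (area, perimeter, eccentricity and compactness) can be computed from a binary mask of a segmented mass with scikit-image; the elliptical mask is a synthetic placeholder, not a real lesion:

```python
# Illustrative shape descriptors computed from a synthetic mass mask.
import numpy as np
from skimage import draw, measure

mask = np.zeros((256, 256), dtype=np.uint8)
rr, cc = draw.ellipse(128, 128, 40, 25)  # hypothetical elliptical "mass" region
mask[rr, cc] = 1

props = measure.regionprops(mask)[0]
# Compactness equals 1.0 for a perfect circle and decreases for irregular, spiculated outlines.
compactness = (4 * np.pi * props.area) / (props.perimeter ** 2)

print(f"area={props.area}, perimeter={props.perimeter:.1f}, "
      f"eccentricity={props.eccentricity:.2f}, compactness={compactness:.2f}")
```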
There is a well-defined tool for risk assessment and quality assurance, developed by the American College of Radiology, called BI-RADS (Breast Imaging-Reporting and Data System) [5]. Descriptors, such as shape and margin (along with their morphological features), are adopted in BI-RADS. Breast imaging studies are assigned one of seven BI-RADS assessment categories [24], as shown below:
  • BI-RADS 0 (Assessment incomplete)—Needs further assessment.
  • BI-RADS 1 (Normal)—No evidence of lesion.
  • BI-RADS 2 (Benign)—Non-cancerous lesion (calcified lesion with high density).
  • BI-RADS 3 (Probably benign)—Non-calcified circumscribed mass/obscured mass.
  • BI-RADS 4 (Suspicious abnormality)—Microlobulated mass.
  • BI-RADS 5 (High probability of malignancy)—Indistinct and spiculated mass.
  • BI-RADS 6 (Proven malignancy)—Biopsy-proven malignancy (to check the extent and presence in the opposite breast).
Limitations of BI-RADS: The BI-RADS assessment is subjective. Several studies reported considerable variability in interpreting mammograms before the use of the BI-RADS lexicon, and this did not improve with the introduction of BI-RADS [25]. Beam et al. [26] conducted a study on the mammograms of 79 women, of which 45 were cancerous. One hundred and eight radiologists reviewed these mammograms. The authors reported that mammogram reading sensitivity and specificity varied from 47% to 100% and from 36% to 99%, respectively. In another study, Berg et al. [27] reported intra- and inter-observer variability amongst five expert radiologists. The assessment of the lesions was highly variable: the readers agreed on only 55% of the total 86 lesions. Finally, Geller et al. [28] presented a study to check whether mammographic assessments and recommendations are appropriately linked as per BI-RADS. The study highlighted that the BI-RADS 3 category had the highest variability.

3. Mammogram Datasets

This section briefly describes the publicly available mammography datasets that researchers use to detect and/or classify suspicious regions. Table 1 summarises the most cited and commonly used datasets. Sample images from these datasets are shown in Figure 3.
Table 1. List of commonly used mammogram datasets and reference URLs.
  • SureMaPP (UK, 2020): 145 cases, ~343 images; view: MLO; image type: DICOM; annotation: centre and radius of a circle enclosing the abnormality; https://mega.nz/#F!Ly5g0agB!%E2%80%91QL9uBEvoP8rNig8JBuYfw (accessed on 27 October 2020)
  • DDSM (USA, 1999): 2620 cases, ~10,000 images; views: MLO, CC; image type: LJPEG; annotation: pixel-level boundary around the abnormality; http://www.eng.usf.edu/cvprg/Mammography/Database.html (accessed on 31 May 2021)
  • CBIS-DDSM (USA, 1999): 6775 cases, ~10,239 images; views: MLO, CC; image type: DICOM; annotation: pixel-level boundary around the abnormality; https://wiki.cancerimagingarchive.net/display/Public/CBIS-DDSM (accessed on 31 May 2021)
  • INBreast (Portugal, 2011): 115 cases, ~422 images; views: MLO, CC; image type: DICOM; annotation: pixel-level boundary around the abnormality; http://medicalresearch.inescporto.pt/breastresearch/GetINbreastDatabase.html (link taken from the base paper; accessed on 31 May 2021)
  • MIAS: 161 cases, ~322 images; view: MLO; image type: PGM; annotation: centre and radius of a circle enclosing the abnormality; https://www.repository.cam.ac.uk/handle/1810/250394 (accessed on 31 May 2021)
  • BCDR (Portugal, 2012): 1734 cases, ~7315 images; views: MLO, CC; image type: TIFF; annotation: unknown; https://bcdr.eu/information/about (accessed on 31 May 2021)
  • IRMA (Germany, 2008): unknown number of cases, ~10,509 images; views: MLO, CC; image type: several; annotation: several; https://www.spiedigitallibrary.org/conference-proceedings-of-spie/6915/1/Toward-a-standard-reference-database-for-computer-aided-mammography/10.1117/12.770325.short?SSO=1 (accessed on 31 May 2021)
  • BancoWeb LAPIMO (Brazil, 2010): 320 cases, ~1473 images; views: MLO, CC; image type: TIFF; annotation: ROI for a few images; http://lapimo.sel.eesc.usp.br/bancoweb (accessed on 31 May 2021)

3.1. SureMaPP

SureMaPP [29] is a recently published dataset of around 343 mammograms manually annotated by experts in the field. The dataset's images were captured with two different devices: GIOTTO IMAGE SDL/W and FUJIFILM FCR PROFECT CS. Mammograms are available at two different spatial resolutions: 3584 × 2816 and 5928 × 4728 pixels.

3.2. DDSM

The Digital Database for Screening Mammography (DDSM) [30] is one of the oldest mammogram datasets. It consists of 2620 mammography studies from hospitals and medical universities in the U.S. Each case includes the standard views, i.e., the mediolateral oblique (MLO) and craniocaudal (CC) views of the left and right breast.

3.3. CBIS-DDSM

The Curated Breast Imaging Subset of DDSM (CBIS-DDSM) [31] is a modified and standardised version of DDSM. Images in CBIS-DDSM are decompressed and converted into DICOM format. The dataset includes updated region of interest (ROI) segmentations and bounding boxes. Other pathological details, such as mass type, tumour grade and cancer stage, are also included.
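As a minimal sketch of working with this format (assuming pydicom is installed; the file path is hypothetical), a CBIS-DDSM image can be read and normalised before further analysis as follows:

```python
# Illustrative DICOM loading and min-max normalisation.
import numpy as np
import pydicom

ds = pydicom.dcmread("CBIS-DDSM/Mass-Training_P_00001/image.dcm")  # hypothetical path
pixels = ds.pixel_array.astype(np.float32)

# Min-max normalisation to [0, 1]; raw mammograms otherwise span wide 12/16-bit ranges.
pixels = (pixels - pixels.min()) / (pixels.max() - pixels.min() + 1e-8)
print(pixels.shape, pixels.dtype)
```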

3.4. INBreast

INBreast [32] has a total of 410 images acquired at the Breast Centre of CHSJ, Porto. As with CBIS-DDSM, DICOM images with both MLO and CC views are provided. All images were annotated and validated by expert clinicians. The Universidade do Porto has since stopped supporting the dataset, but researchers may still obtain access on request.

3.5. MIAS

The Mammographic Image Analysis Society (MIAS) [33] dataset consists of 322 screening mammograms. Annotations are available in a separate file containing the background tissue type, the class and severity of the abnormality, the x and y coordinates of the centre of the abnormality, and the approximate radius (in pixels) of a circle enclosing the abnormal region.
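As an illustrative sketch of how these annotations can be used (the annotation line follows the documented field order, and the file name is hypothetical; MIAS coordinates are conventionally given with the origin at the bottom-left corner, hence the row flip), an ROI patch can be cropped around an annotated abnormality as follows:

```python
# Illustrative ROI extraction from a MIAS-style annotation line.
from skimage import io

line = "mdb001 G CIRC B 535 425 197"  # ref, tissue, abnormality class, severity, x, y, radius
ref, tissue, abclass, severity, x, y, radius = line.split()
x, y, radius = int(x), int(y), int(radius)

image = io.imread(f"{ref}.pgm")       # hypothetical local copy of the MIAS image
row = image.shape[0] - y              # flip y to top-left (array) coordinates
roi = image[max(row - radius, 0):row + radius,
            max(x - radius, 0):x + radius]
print("ROI size:", roi.shape)
```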

3.6. BCDR

The Breast Cancer Digital Repository (BCDR) [34] is a public mammogram dataset containing 1734 patient cases. These cases are classified according to the Breast Imaging-Reporting and Data System (BI-RADS). BCDR comprises two repositories: a Film Mammography-Based Repository (BCDR-FM) and a Full-Field Digital Mammography-Based Repository (BCDR-DM). BCDR-FM contains 1010 patient cases with both MLO and CC views. BCDR-DM is still under construction. The BCDR dataset can be accessed by registering on the dataset website.

3.7. IRMA

The IRMA [35] dataset was developed from the union of various other datasets, such as DDSM, MIAS, the Lawrence Livermore National Laboratory (LLNL), and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Images of this dataset are also available with both views. The dataset contains all types of lesions. IRMA is enriched with ROI annotations, which make the dataset suitable for supervised deep learning approaches.

3.8. BancoWeb LAPIMO

The BancoWeb LAPIMO [36] dataset comprises a total of 320 cases and 1473 images with MLO and CC views. Images in the dataset are grouped into the following categories: normal, benign and malignant. Annotations and patients' background information are provided according to BI-RADS. Annotations in the form of ROIs are available for only a few images, while a textual description of the findings is available for all images. Mammograms are provided in TIFF format.

5. Discussion

This study surveys several scientific articles on suspicious region detection in mammograms following a bottom-up approach, spanning from low-level image feature-based techniques to deep learning techniques. One of the main points of this work is to analyse the different approaches from three central perspectives: the features extracted, the architectures used, and the datasets employed to carry out experiments to detect and/or classify suspicious regions in mammograms.

Final Points

  • This paper surveys methods and techniques tackling the detection of suspicious regions in mammograms. The narrative of this work is bottom-up, spanning from low-level image feature-based approaches to deep learning architectures. The paper summarises the different approaches in tables: Table 2, Table 3, Table 4 and Table 5 give a thorough description of the features, performed tasks, datasets and performances of the aforementioned methods. Most approaches tackle mass detection and classification, while others address mammogram enhancement, microcalcification detection, and mammogram image generation with unsupervised deep learning architectures. Missing performance figures on some datasets prevent a direct comparison of some methods. Both the MIAS and DDSM datasets stand out in the tables because they are employed far more often than the others.
  • Machine learning methods are reliable on most datasets. A method based on textural and shape features and K-means [45] achieves sensitivity rates higher than 94% on both datasets; a technique [44] relying on local contour features, 1D signature contour subsections and SVM shows an accuracy rate of 99.6% on a subset of DDSM. Elmoufidi et al. [50] obtained 96% accuracy on MIAS using a swarm optimisation algorithm for heuristic parameter selection. The method in [40] adopts morphological features for mass detection in mammograms and achieves 92% sensitivity, but no performance metrics are given about false positives. Geostatistical and concave geometry (alpha shapes) features [52] yield high detection rates on MIAS (97.30%) and DDSM (91.63%). An LBP (local binary pattern)-based method [58] turns out to be quite reliable for mass classification on MIAS (99.65% sensitivity and 99.24% specificity); a minimal LBP-plus-SVM sketch is given after this list. A morphological top-hat transform method [61] is successful in mass and microcalcification detection on MIAS, with specificity and sensitivity rates of around 99% (Table 2). As highlighted in the pros and cons sections, when low-level image feature descriptors feed deep neural networks, as in the method by Utomo et al. [70], they can perform remarkably well (100% specificity and sensitivity rates) on MIAS. The same is true for methods relying on BoF (Bag of Features) and SVM, meaning these are discriminative features for mass classification in mammograms (DDSM). High accuracy rates are also achieved by Deshmukh and Bhosle [75] on MIAS (92.3%) and DDSM (96.8%) by using an optimised SURF descriptor.
  • As listed in Table 4, machine learning methods show some remarkable differences from the methods in Table 2 and Table 3. The clustering-based methods by Kamil et al. [101] and Ketabi et al. [102] do not achieve accuracy rates higher than 94% on MIAS and 90% on DDSM. Sharma et al. [113] achieved high performances in mass detection and classification on IRMA (99% specificity and 99% sensitivity) and DDSM (96% specificity and 97% sensitivity) using SVM. The ANN method proposed by Mahersia et al. [98] achieved an average mass recognition rate of 97.08% on MIAS.
  • Deep learning methods (Table 5) raise the bar, exploiting their knowledge inference capabilities on more than a single dataset. The autoencoder-based method by Taghanaki et al. [140] performed mammography classification with 98.45% accuracy on INBreast and IRMA. The methods of Selvathi et al. [138,139] scored around 99% accuracy on MIAS by leveraging stacked autoencoders, and a sparse autoencoder combined with a random forest.
  • Bruno et al. [29] highlighted how convolutional neural networks' performance can be affected by noise and bias embedded in the training dataset images. The availability of larger datasets might fully unleash the knowledge inference capabilities of deep learning architectures; furthermore, it would enable training neural networks from scratch. Further comparisons could then be carried out with pre-existing DL models fine-tuned on a limited-size mammogram dataset using transfer learning. It should be highlighted that, due to the lack of publicly available and manually annotated datasets, most deep learning methods in the biomedical imaging field currently adopt the above-mentioned pipeline relying on data augmentation plus transfer learning.
  • The good performances in mammogram synthesis obtained by Becker et al. [133] and Wu et al. [130,131] open new perspectives for the generation of larger mammogram datasets.
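To give a concrete flavour of the feature-plus-classifier pipelines summarised above, the sketch below combines uniform LBP histograms with an SVM. It is a minimal illustrative example on random placeholder data, not a reproduction of any surveyed method:

```python
# Illustrative LBP-histogram + SVM classification pipeline on placeholder patches.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def lbp_histogram(patch, points=8, radius=1):
    # Uniform LBP yields points + 2 distinct codes; the normalised histogram is the descriptor.
    lbp = local_binary_pattern(patch, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

rng = np.random.default_rng(0)
patches = (rng.random((40, 64, 64)) * 255).astype(np.uint8)  # stand-ins for mass/normal patches
labels = np.array([0] * 20 + [1] * 20)                       # placeholder labels

features = np.array([lbp_histogram(p) for p in patches])
scores = cross_val_score(SVC(kernel="rbf"), features, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```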

6. Conclusions

Image processing and artificial intelligence have progressed and expanded significantly in the medical field, especially in diagnostic imaging. These advancements have greatly influenced computer-aided diagnosis (CAD) systems for detecting and/or classifying suspicious regions in mammograms. This study aims to provide a comprehensive insight into various approaches based on low-level image features, machine learning and deep learning by comparing them on publicly available datasets. The reported performance of these approaches can guide researchers in this domain in selecting an appropriate method for their applications. Computational models based on these approaches generally represent the core of CAD systems, suggesting regions of interest and leaving the final word to medical doctors and practitioners. In this section, concise replies to the questions raised at the beginning of the paper are provided as follows:
(1) Shape-based, texture-based and local keypoint descriptors are the most common techniques used to extract low-level image features from mammograms;
(2) Machine learning approaches, such as SVM, ANN and various clustering techniques, are also quite successful on various medical imaging tasks, especially the detection/classification of abnormalities in mammograms;
(3) Both supervised and unsupervised DL approaches have proven to perform best for various mammogram analysis tasks;
(4) As listed in Table 1, researchers in the biomedical imaging community run experiments on different publicly available and commonly cited datasets, such as SureMaPP, DDSM, INBreast, BCDR, IRMA, BancoWeb LAPIMO, etc. Each dataset features images with different properties, due to differences in the acquisition devices.
Much work has already been done on computer-aided breast cancer detection, and a few of these studies have already been implemented and turned into commercial products. Due to the lack of large publicly available datasets with manual annotations, current deep learning architectures cannot fully unleash their knowledge inference capabilities for tasks such as object detection, classification and segmentation. Unsupervised learning techniques, such as GANs and autoencoders, appear to be promising solutions to fill the dataset size gap between biomedical imaging and other common computer vision topics.

Author Contributions

Conceptualisation, P.O., P.S., S.P. and A.B.; investigation, P.O. and A.B.; writing—original draft preparation, P.O.; writing—review and editing, P.O. and A.B.; supervision, P.S., S.P. and A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors want to express their gratitude to Rajiv Oza (Consultant Radiologist) for advising them on radiological imaging techniques from a medical perspective.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CAD: Computer-Aided Diagnosis
BI-RADS: Breast Imaging Reporting and Data System
AI: Artificial Intelligence
ML: Machine Learning
DL: Deep Learning
SVM: Support Vector Machine
ANN: Artificial Neural Network
CNN: Convolutional Neural Network
SIFT: Scale-Invariant Feature Transform
SURF: Speeded-Up Robust Features
FCN: Fully Convolutional Network
R-CNN: Region-Based Convolutional Neural Network
GAN: Generative Adversarial Network
MLO: Mediolateral Oblique
CC: Craniocaudal
ROI: Region of Interest
kNN: k-Nearest Neighbour
MC: Microcalcification
MCL: Multiple Concentric Layers
MRE: Mean Squared Reconstruction Error
MSE: Mean Squared Error
GLCM: Gray-Level Co-occurrence Matrix
GLRLM: Gray-Level Run-Length Matrix
LBP: Local Binary Patterns
LQP: Local Quinary Patterns
CLAHE: Contrast Limited Adaptive Histogram Equalization
BRIEF: Binary Robust Independent Elementary Features
SOM: Self-Organising Maps
GA: Genetic Algorithms
PFCM: Possibilistic Fuzzy C-Means
MIAS: Mammographic Image Analysis Society
DDSM: Digital Database for Screening Mammography
CBIS-DDSM: Curated Breast Imaging Subset of DDSM
BCDR: Breast Cancer Digital Repository

References

  1. Society, A.C. Breast cancer facts & figures 2019–2020. Am. Cancer Soc. 2019, 1–44. [Google Scholar]
  2. Hamidinekoo, A.; Denton, E.; Rampun, A.; Honnor, K.; Zwiggelaar, R. Deep learning in mammography and breast histology, an overview and future trends. Med. Image Anal. 2018, 47, 45–67. [Google Scholar] [CrossRef] [Green Version]
  3. Yassin, N.I.; Omran, S.; El Houby, E.M.; Allam, H. Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: A systematic review. Comput. Methods Programs Biomed. 2018, 156, 25–45. [Google Scholar] [CrossRef]
  4. Comelli, A.; Bruno, A.; Di Vittorio, M.L.; Ienzi, F.; Lagalla, R.; Vitabile, S.; Ardizzone, E. Automatic multi-seed detection for MR breast image segmentation. In International Conference on Image Analysis and Processing; Springer: Cham, Switzerland, 2017; pp. 706–717. [Google Scholar]
  5. Sickles, E.; d’Orsi, C.; Bassett, L.; Appleton, C.; Berg, W.; Burnside, E.; Feig, S.; Gavenonis, S.; Newell, M.; Trinh, M. Acr bi-rads® mammography. ACR BI-RADS® Atlas Breast Imaging Report. Data Syst. 2013, 5, 2013. [Google Scholar]
  6. Surendiran, B.; Vadivel, A. Mammogram mass classification using various geometric shape and margin features for early detection of breast cancer. Int. J. Med. Eng. Inform. 2012, 4, 36–54. [Google Scholar] [CrossRef]
  7. Ardizzone, E.; Bruno, A.; Mazzola, G. Scale detection via keypoint density maps in regular or near-regular textures. Pattern Recognit. Lett. 2013, 34, 2071–2078. [Google Scholar] [CrossRef] [Green Version]
  8. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  9. Pillai, R.; Oza, P.; Sharma, P. Review of machine learning techniques in health care. In Proceedings of the ICRIC 2019, Jammu, India, 8–9 March 2019; Springer: Cham, Switzerland, 2020; pp. 103–111. [Google Scholar]
  10. Oza, P.; Sharma, P.; Patel, S. Machine Learning Applications for Computer-Aided Medical Diagnostics. In Proceedings of the Second International Conference on Computing, Communications, and Cyber-Security, Ghaziabad, India, 3–4 October; Springer: Singapore, 2021; pp. 377–392. [Google Scholar]
  11. Oza, P.; Shah, Y.; Vegda, M. A Comprehensive Study of Mammogram Classification Techniques. In Tracking and Preventing Diseases with Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2021; pp. 217–238. [Google Scholar]
  12. Saxena, S.; Gyanchandani, M. Machine learning methods for computer-aided breast cancer diagnosis using histopathology: A narrative review. J. Med. Imaging Radiat. Sci. 2020, 51, 182–193. [Google Scholar] [CrossRef]
  13. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  14. Sechopoulos, I.; Teuwen, J.; Mann, R. Artificial intelligence for breast cancer detection in mammography and digital breast tomosynthesis: State of the art. In Seminars in Cancer Biology; Elsevier: Amsterdam, The Netherlands, 2020. [Google Scholar]
  15. Baldi, P. Autoencoders, unsupervised learning, and deep architectures. In Proceedings of the ICML Workshop on Unsupervised and Transfer Learning, Bellevue, WA, USA, 2 July 2011; pp. 37–49. [Google Scholar]
  16. Sadoughi, F.; Kazemy, Z.; Hamedan, F.; Owji, L.; Rahmanikatigari, M.; Azadboni, T.T. Artificial intelligence methods for the diagnosis of breast cancer by image processing: A review. Breast Cancer Targets Ther. 2018, 10, 219. [Google Scholar] [CrossRef] [Green Version]
  17. Moran, M.B.; Conci, A.; de JF Rêgo, S.; Fontes, C.A.; Faria, M.D.B.; Bastos, L.F.; Giraldi, G.A. On Using Image Processing Techniques for Evaluation of Mammography Acquisition Errors. In Anais do XIX Simpósio Brasileiro de Computação Aplicada à Saúde; SBC: Porto Alegre, Brazil, 2019; pp. 330–335. [Google Scholar]
  18. Andersson, I.; Hildell, J.; Muhlow, A.; Pettersson, H. Number of projections in mammography: Influence on detection of breast disease. Am. J. Roentgenol. 1978, 130, 349–351. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Popli, M.B.; Teotia, R.; Narang, M.; Krishna, H. Breast positioning during mammography: Mistakes to be avoided. Breast Cancer Basic Clin. Res. 2014, 8. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Mammography—Breast Imaging Lexicon. Available online: https://radiologyassistant.nl/breast/bi-rads/bi-rads-for-mammography-and-ultrasound-2013#mammography-breast-imaging-lexicon (accessed on 30 September 2010).
  21. Gemignani, M.L. Breast diseases. Clin. Gynecol. Oncol. 2012, 369–403. [Google Scholar]
  22. Kamal, R.M.; Helal, M.H.; Mansour, S.M.; Haggag, M.A.; Nada, O.M.; Farahat, I.G.; Alieldin, N.H. Can we apply the MRI BI-RADS lexicon morphology descriptors on contrast-enhanced spectral mammography? Br. J. Radiol. 2016, 89, 20160157. [Google Scholar] [CrossRef] [Green Version]
  23. Wedegärtner, U.; Bick, U.; Wörtler, K.; Rummeny, E.; Bongartz, G. Differentiation between benign and malignant findings on MR-mammography: Usefulness of morphological criteria. Eur. Radiol. 2001, 11, 1645–1650. [Google Scholar] [CrossRef]
  24. Breast Imaging-Reporting and Data System (BI-RADS). Available online: https://radiopaedia.org/articles/breast-imaging-reporting-and-data-system-bi-rads (accessed on 20 July 2021).
  25. Obenauer, S.; Hermann, K.; Grabbe, E. Applications and literature review of the BI-RADS classification. Eur. Radiol. 2005, 15, 1027–1036. [Google Scholar] [CrossRef]
  26. Beam, C.A.; Layde, P.M.; Sullivan, D.C. Variability in the interpretation of screening mammograms by US radiologists: Findings from a national sample. Arch. Intern. Med. 1996, 156, 209–213. [Google Scholar] [CrossRef]
  27. Berg, W.A.; Campassi, C.; Langenberg, P.; Sexton, M.J. Breast Imaging Reporting and Data System: Inter-and intraobserver variability in feature analysis and final assessment. Am. J. Roentgenol. 2000, 174, 1769–1777. [Google Scholar] [CrossRef] [Green Version]
  28. Geller, B.M.; Barlow, W.E.; Ballard-Barbash, R.; Ernster, V.L.; Yankaskas, B.C.; Sickles, E.A.; Carney, P.A.; Dignan, M.B.; Rosenberg, R.D.; Urban, N.; et al. Use of the American College of Radiology BI-RADS to report on the mammographic evaluation of women with signs and symptoms of breast disease. Radiology 2002, 222, 536–542. [Google Scholar] [CrossRef] [Green Version]
  29. Bruno, A.; Ardizzone, E.; Vitabile, S.; Midiri, M. A novel solution based on scale invariant feature transform descriptors and deep learning for the detection of suspicious regions in mammogram images. J. Med. Signals Sens. 2020, 10, 158. [Google Scholar]
  30. Heath, M.; Bowyer, K.; Kopans, D.; Kegelmeyer, P.; Moore, R.; Chang, K.; Munishkumaran, S. Current status of the digital database for screening mammography. In Digital Mammography; Springer: Dordrecht, The Netherlands, 1998; pp. 457–460. [Google Scholar]
  31. Lee, R.S.; Gimenez, F.; Hoogi, A.; Miyake, K.K.; Gorovoy, M.; Rubin, D.L. A curated mammography data set for use in computer-aided detection and diagnosis research. Sci. Data 2017, 4, 1–9. [Google Scholar] [CrossRef]
  32. Moreira, I.C.; Amaral, I.; Domingues, I.; Cardoso, A.; Cardoso, M.J.; Cardoso, J.S. Inbreast: Toward a full-field digital mammographic database. Acad. Radiol. 2012, 19, 236–248. [Google Scholar] [CrossRef] [Green Version]
  33. Suckling, J.; Parker, J.; Dance, D.; Astley, S.; Hutt, I.; Boggis, C.; Ricketts, I.; Stamatakis, E.; Cerneaz, N.; Kok, S.; et al. Mammographic Image Analysis Society (mias) Database v1. 21. Available online: https://www.repository.cam.ac.uk/handle/1810/250394 (accessed on 18 September 2021).
  34. Lopez, M.; Posada, N.; Moura, D.C.; Pollán, R.R.; Valiente, J.M.F.; Ortega, C.S.; Solar, M.; Diaz-Herrero, G.; Ramos, I.; Loureiro, J.; et al. BCDR: A breast cancer digital repository. In Proceedings of the 15th International Conference on Experimental Mechanics, Porto, Portugal, 22 July 2012; Volume 1215. [Google Scholar]
  35. Oliveira, J.E.; Gueld, M.O.; Araújo, A.d.A.; Ott, B.; Deserno, T.M. Toward a standard reference database for computer-aided mammography. In Medical Imaging 2008: Computer-Aided Diagnosis; International Society for Optics and Photonics: Bellingham, WA, USA, 2008; Volume 6915, p. 69151Y. [Google Scholar]
  36. Matheus, B.R.N.; Schiabel, H. Online mammographic images database for development and comparison of CAD schemes. J. Digit. Imaging 2011, 24, 500–506. [Google Scholar] [CrossRef] [Green Version]
  37. Nemoto, M.; Masutani, Y.; Nomura, Y.; Hanaoka, S.; Miki, S.; Yoshikawa, T.; Hayashi, N.; Ootomo, K. Machine Learning for Computer-aided Diagnosis. Igaku Butsuri Nihon Igaku Butsuri Gakkai Kikanshi Jpn. J. Med. Phys. Off. J. Jpn. Soc. Med. Phys. 2016, 36, 29–34. [Google Scholar]
  38. Sampat, M.P.; Markey, M.K.; Bovik, A.C. Computer-aided detection and diagnosis in mammography. Handb. Image Video Process. 2005, 2, 1195–1217. [Google Scholar]
  39. Raguso, G.; Ancona, A.; Chieppa, L.; L’Abbate, S.; Pepe, M.L.; Mangieri, F.; De Palo, M.; Rangayyan, R.M. Application of fractal analysis to mammography. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 3182–3185. [Google Scholar] [CrossRef]
  40. Eltonsy, N.H.; Tourassi, G.D.; Elmaghraby, A.S. A Concentric Morphology Model for the Detection of Masses in Mammography. IEEE Trans. Med. Imaging 2007, 26, 880–889. [Google Scholar] [CrossRef] [PubMed]
  41. Rangayyan, R.M.; Mudigonda, N.R.; Desautels, J.L. Boundary modelling and shape analysis methods for classification of mammographic masses. Med. Biol. Eng. Comput. 2000, 38, 487–496. [Google Scholar] [CrossRef]
  42. Chakraborty, J.; Mukhopadhyay, S.; Singla, V.; Khandelwal, N.; Bhattacharyya, P. Automatic detection of pectoral muscle using average gradient and shape based feature. J. Digit. Imaging 2012, 25, 387–399. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Mustra, M.; Bozek, J.; Grgic, M. Nipple detection in craniocaudal digital mammograms. In Proceedings of the 2009 International Symposium ELMAR, Zadar, Croatia, 28–30 September 2009; pp. 15–18. [Google Scholar]
  44. Li, H.; Meng, X.; Wang, T.; Tang, Y.; Yin, Y. Breast masses in mammography classification with local contour features. Biomed. Eng. Online 2017, 16, 1–12. [Google Scholar] [CrossRef] [Green Version]
  45. Elmoufidi, A.; El Fahssi, K.; Jai-Andaloussi, S.; Sekkaki, A.; Gwenole, Q.; Lamard, M. Anomaly classification in digital mammography based on multiple-instance learning. IET Image Process. 2017, 12, 320–328. [Google Scholar] [CrossRef]
  46. Zhang, L.; Qian, W.; Sankar, R.; Song, D.; Clark, R. A new false positive reduction method for MCCs detection in digital mammography. In Proceedings of the 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, Proceedings (Cat. No.01CH37221), Salt Lake City, UT, USA, 7–11 May 2001; Volume 2, pp. 1033–1036. [Google Scholar] [CrossRef]
  47. Soltanian-Zadeh, H.; Rafiee-Rad, F.; Pourabdollah-Nejad, D.S. Comparison of multiwavelet, wavelet, Haralick, and shape features for microcalcification classification in mammograms. Pattern Recognit. 2004, 37, 1973–1986. [Google Scholar] [CrossRef]
  48. Felipe, J.C.; Ribeiro, M.X.; Sousa, E.P.; Traina, A.J.; Traina, C.J. Effective shape-based retrieval and classification of mammograms. In Proceedings of the 2006 ACM Symposium on Applied Computing, Dijon, France, 23–27 April 2006; pp. 250–255. [Google Scholar]
  49. Soltanian-Zadeh, H.; Pourabdollah-Nezhad, S.; Rad, F.R. Shape-based and texture-based feature extraction for classification of microcalcifications in mammograms. In Medical Imaging 2001: Image Processing; International Society for Optics and Photonics: San Diego, CA, USA, 2001; Volume 4322, pp. 301–310. [Google Scholar]
  50. Zyout, I.; Abdel-Qader, I.; Jacobs, C. Embedded feature selection using PSO-kNN: Shape-based diagnosis of microcalcification clusters in mammography. J. Ubiquitous Syst. Pervasive Netw. 2011, 3, 7–11. [Google Scholar] [CrossRef]
  51. Sahiner, B.; Chan, H.P.; Petrick, N.; Helvie, M.A.; Hadjiiski, L.M. Improvement of mammographic mass characterization using spiculation measures and morphological features. Med. Phys. 2001, 28, 1455–1465. [Google Scholar] [CrossRef]
  52. Junior, G.B.; da Rocha, S.V.; de Almeida, J.D.; de Paiva, A.C.; Silva, A.C.; Gattass, M. Breast cancer detection in mammography using spatial diversity, geostatistics, and concave geometry. Multimed. Tools Appl. 2019, 78, 13005–13031. [Google Scholar] [CrossRef]
  53. Ramos, R.P.; do Nascimento, M.Z.; Pereira, D.C. Texture extraction: An evaluation of ridgelet, wavelet and co-occurrence based methods applied to mammograms. Expert Syst. Appl. 2012, 39, 11036–11047. [Google Scholar] [CrossRef]
  54. Haindl, M.; Remeš, V. Pseudocolor enhancement of mammogram texture abnormalities. Mach. Vis. Appl. 2019, 30, 785–794. [Google Scholar] [CrossRef]
  55. Zheng, Y.; Keller, B.M.; Ray, S.; Wang, Y.; Conant, E.F.; Gee, J.C.; Kontos, D. Parenchymal texture analysis in digital mammography: A fully automated pipeline for breast cancer risk assessment. Med. Phys. 2015, 42, 4149–4160. [Google Scholar] [CrossRef] [Green Version]
  56. Tai, S.C.; Chen, Z.S.; Tsai, W.T. An automatic mass detection system in mammograms based on complex texture features. IEEE J. Biomed. Health Inform. 2013, 18, 618–627. [Google Scholar]
  57. Mudigonda, N.R.; Rangayyan, R.M.; Desautels, J.L. Detection of breast masses in mammograms by density slicing and texture flow-field analysis. IEEE Trans. Med. Imaging 2001, 20, 1215–1227. [Google Scholar] [CrossRef] [PubMed]
  58. Farhan, A.H.; Kamil, M.Y. Texture Analysis of Mammogram Using Local Binary Pattern Method. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2020; Volume 1530, p. 012091. [Google Scholar]
  59. Mohanty, A.K.; Senapati, M.R.; Beberta, S.; Lenka, S.K. Texture-based features for classification of mammograms using decision tree. Neural Comput. Appl. 2013, 23, 1011–1017. [Google Scholar] [CrossRef]
  60. Li, H.; Mukundan, R.; Boyd, S. Robust Texture Features for Breast Density Classification in Mammograms. In Proceedings of the 2020 16th IEEE International Conference on Control, Automation, Robotics and Vision (ICARCV), Shenzhen, China, 13–15 December 2020; pp. 454–459. [Google Scholar]
  61. Quintanilla-Domínguez, J.; Barrón-Adame, J.M.; Gordillo-Sosa, J.A.; Lozano-Garcia, J.M.; Estrada-García, H.; Guzmán-Cabrera, R. Analysis of Mammograms Using Texture Segmentation. Adv. Lang. Knowl. Eng. 2016, 119. [Google Scholar] [CrossRef]
  62. Hung, C.L.; Lin, C.Y. GPU-Based Texture Analysis approach for Mammograms Institute of Biomedical Informatics. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Korea, 16–19 December 2020; pp. 2183–2186. [Google Scholar]
  63. Biswas, S.K.; Mukherjee, D.P. Recognizing architectural distortion in mammogram: A multiscale texture modeling approach with GMM. IEEE Trans. Biomed. Eng. 2011, 58, 2023–2030. [Google Scholar] [CrossRef] [PubMed]
  64. Lowe, D. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157. [Google Scholar] [CrossRef]
  65. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. In Computer Vision—ECCV 2006; Leonardis, A., Bischof, H., Pinz, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417. [Google Scholar] [CrossRef]
  66. Li, J.; Allinson, N.M. A comprehensive review of current local features for computer vision. Neurocomputing 2008, 71, 1771–1787. [Google Scholar] [CrossRef]
  67. Jiang, M.; Zhang, S.; Li, H.; Metaxas, D.N. Computer-Aided Diagnosis of Mammographic Masses Using Scalable Image Retrieval. IEEE Trans. Biomed. Eng. 2015, 62, 783–792. [Google Scholar] [CrossRef]
  68. Guan, Q.; Zhang, J.; Chen, S.; Todd-Pokropek, A. Automatic segmentation of micro-calcification based on sift in mammograms. In Proceedings of the 2008 IEEE International Conference on BioMedical Engineering and Informatics, Sanya, China, 27–30 May 2008; Volume 2, pp. 13–17. [Google Scholar]
  69. Insalaco, M.; Bruno, A.; Farruggia, A.; Vitabile, S.; Ardizzone, E. An Unsupervised Method for Suspicious Regions Detection in Mammogram Images. In ICPRAM (2); SCITEPRESS Digital Library: Setúbal, Portugal, 2015; pp. 302–308. [Google Scholar]
  70. Utomo, A.; Juniawan, E.F.; Lioe, V.; Santika, D.D. Local Features Based Deep Learning for Mammographic Image Classification: In Comparison to CNN Models. Procedia Comput. Sci. 2021, 179, 169–176. [Google Scholar] [CrossRef]
  71. Salazar-Licea, L.A.; Mendoza, C.; Aceves, M.A.; Pedraza, J.C.; Pastrana-Palma, A. Automatic segmentation of mammograms using a Scale-Invariant Feature Transform and K-means clustering algorithm. In Proceedings of the 2014 11th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Campeche, Mexico, 29 September–3 October 2014; pp. 1–6. [Google Scholar] [CrossRef]
  72. Bosch, A.; Munoz, X.; Oliver, A.; Marti, J. Modeling and Classifying Breast Tissue Density in Mammograms. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; Volume 2, pp. 1552–1558. [Google Scholar] [CrossRef] [Green Version]
  73. Liasis, G.; Pattichis, C.; Petroudi, S. Combination of different texture features for mammographic breast density classification. In Proceedings of the 2012 IEEE 12th International Conference on Bioinformatics Bioengineering (BIBE), Larnaca, Cyprus, 11–13 November 2012; pp. 732–737. [Google Scholar] [CrossRef]
  74. Matos, C.E.F.; Souza, J.C.; Diniz, J.O.B.; Junior, G.B.; de Paiva, A.C.; de Almeida, J.D.S.; da Rocha, S.V.; Silva, A.C. Diagnosis of breast tissue in mammography images based local feature descriptors. Multimed. Tools Appl. 2019, 78, 12961–12986. [Google Scholar] [CrossRef]
  75. Deshmukh, J.; Bhosle, U. SURF features based classifiers for mammogram classification. In Proceedings of the 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), Chennai, India, 22–24 March 2017; pp. 134–139. [Google Scholar] [CrossRef]
  76. Abudawood, T.; Al-Qunaieer, F.; Alrshoud, S. An Efficient Abnormality Classification for Mammogram Images. In Proceedings of the 2018 21st Saudi Computer Society National Computer Conference (NCC), Riyadh, Saudi Arabia, 25–26 April 2018; pp. 1–6. [Google Scholar] [CrossRef]
  77. Chandakkar, P.; Ragav, V.; Li, B. Feature Extraction and Learning for Visual Data. In Feature Engineering for Machine Learning and Data Analytics; Dong, G., Liu, H., Eds.; CRC Press: Oxford, UK, 2018; Chapter 3; pp. 55–79. [Google Scholar]
  78. Moura, D.C.; López, M.A.G. An evaluation of image descriptors combined with clinical data for breast cancer diagnosis. Int. J. Comput. Assist. Radiol. Surg. 2013, 8, 561–574. [Google Scholar] [CrossRef]
  79. Pérez, N.P.; López, M.A.G.; Silva, A.; Ramos, I. Improving the Mann–Whitney statistical test for feature selection: An approach in breast cancer diagnosis on mammography. Artif. Intell. Med. 2015, 63, 19–31. [Google Scholar] [CrossRef]
  80. Arefan, D.; Mohamed, A.A.; Berg, W.A.; Zuley, M.L.; Sumkin, J.H.; Wu, S. Deep learning modeling using normal mammograms for predicting breast cancer risk. Med. Phys. 2020, 47, 110–118. [Google Scholar] [CrossRef] [Green Version]
  81. Roth, H.R.; Lu, L.; Liu, J.; Yao, J.; Seff, A.; Cherry, K.; Kim, L.; Summers, R.M. Improving computer-aided detection using convolutional neural networks and random view aggregation. IEEE Trans. Med. Imaging 2015, 35, 1170–1181. [Google Scholar] [CrossRef] [Green Version]
  82. Dou, Q.; Chen, H.; Yu, L.; Zhao, L.; Qin, J.; Wang, D.; Mok, V.C.; Shi, L.; Heng, P.A. Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks. IEEE Trans. Med. Imaging 2016, 35, 1182–1195. [Google Scholar] [CrossRef]
83. Sirinukunwattana, K.; Raza, S.E.A.; Tsang, Y.W.; Snead, D.R.; Cree, I.A.; Rajpoot, N.M. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans. Med. Imaging 2016, 35, 1196–1206.
84. Dhungel, N.; Carneiro, G.; Bradley, A.P. The automated learning of deep features for breast mass classification from mammograms. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2016; pp. 106–114.
85. Ridhi, A.; Rai, P.K.; Balasubramanian, R. Deep feature–based automatic classification of mammograms. Med. Biol. Eng. Comput. 2020, 58, 1199–1211.
86. Parisi, G.I.; Kemker, R.; Part, J.L.; Kanan, C.; Wermter, S. Continual lifelong learning with neural networks: A review. Neural Netw. 2019, 113, 54–71.
87. Houssein, E.H.; Emam, M.M.; Ali, A.A.; Suganthan, P.N. Deep and machine learning techniques for medical imaging-based breast cancer: A comprehensive review. Expert Syst. Appl. 2020, 114161.
88. Mehdy, M.; Ng, P.; Shair, E.; Saleh, N.; Gomes, C. Artificial neural networks in image processing for early detection of breast cancer. Comput. Math. Methods Med. 2017, 2017.
89. Wu, Y.; Giger, M.L.; Doi, K.; Vyborny, C.J.; Schmidt, R.A.; Metz, C.E. Artificial neural networks in mammography: Application to decision making in the diagnosis of breast cancer. Radiology 1993, 187, 81–87.
90. Fogel, D.B.; Wasson, E.C., III; Boughton, E.M.; Porto, V.W. Evolving artificial neural networks for screening features from mammograms. Artif. Intell. Med. 1998, 14, 317–326.
91. Halkiotis, S.; Botsis, T.; Rangoussi, M. Automatic detection of clustered microcalcifications in digital mammograms using mathematical morphology and neural networks. Signal Process. 2007, 87, 1559–1568.
92. Ayer, T.; Chen, Q.; Burnside, E.S. Artificial neural networks in mammography interpretation and diagnostic decision making. Comput. Math. Methods Med. 2013, 2013.
93. Quintanilla-Domínguez, J.; Cortina-Januchs, M.; Jevtić, A.; Andina, D.; Barrón-Adame, J.; Vega-Corona, A. Combination of nonlinear filters and ANN for detection of microcalcifications in digitized mammography. In Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX, USA, 11–14 October 2009; pp. 1516–1520.
94. Papadopoulos, A.; Fotiadis, D.I.; Likas, A. Characterization of clustered microcalcifications in digitized mammograms using neural networks and support vector machines. Artif. Intell. Med. 2005, 34, 141–150.
95. García-Manso, A.; García-Orellana, C.J.; González-Velasco, H.; Gallardo-Caballero, R.; Macías, M.M. Consistent performance measurement of a system to detect masses in mammograms based on blind feature extraction. Biomed. Eng. Online 2013, 12, 1–16.
96. Hupse, R.; Samulski, M.; Lobbes, M.; Den Heeten, A.; Imhof-Tas, M.W.; Beijerinck, D.; Pijnappel, R.; Boetes, C.; Karssemeijer, N. Standalone computer-aided detection compared to radiologists’ performance for the detection of mammographic masses. Eur. Radiol. 2013, 23, 93–100.
97. Tan, M.; Qian, W.; Pu, J.; Liu, H.; Zheng, B. A new approach to develop computer-aided detection schemes of digital mammograms. Phys. Med. Biol. 2015, 60, 4413.
98. Mahersia, H.; Boulehmi, H.; Hamrouni, K. Development of intelligent systems based on Bayesian regularization network and neuro-fuzzy models for mass detection in mammograms: A comparative analysis. Comput. Methods Programs Biomed. 2016, 126, 46–62.
99. Ng, H.; Ong, S.; Foong, K.; Goh, P.S.; Nowinski, W. Medical image segmentation using k-means clustering and improved watershed algorithm. In Proceedings of the 2006 IEEE Southwest Symposium on Image Analysis and Interpretation, Denver, CO, USA, 26–28 March 2006; pp. 61–65.
100. Chen, C.W.; Luo, J.; Parker, K.J. Image segmentation via adaptive K-mean clustering and knowledge-based morphological operations with biomedical applications. IEEE Trans. Image Process. 1998, 7, 1673–1683.
101. Kamil, M.Y.; Salih, A.M. Mammography Images Segmentation via Fuzzy C-mean and K-mean. Int. J. Intell. Eng. Syst. 2019, 12, 22–29.
102. Ketabi, H.; Ekhlasi, A.; Ahmadi, H. A computer-aided approach for automatic detection of breast masses in digital mammogram via spectral clustering and support vector machine. Phys. Eng. Sci. Med. 2021, 44, 277–290.
103. Kumar, S.N.; Fred, A.L.; Varghese, P.S. Suspicious lesion segmentation on brain, mammograms and breast MR images using new optimized spatial feature based super-pixel fuzzy c-means clustering. J. Digit. Imaging 2019, 32, 322–335.
104. Chowdhary, C.L.; Acharjya, D. Segmentation of mammograms using a novel intuitionistic possibilistic fuzzy c-mean clustering algorithm. In Nature Inspired Computing; Springer: Singapore, 2018; pp. 75–82.
105. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A training algorithm for optimal margin classifiers. In Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992; pp. 144–152.
106. Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000.
107. Ak, M.F. A comparative analysis of breast cancer detection and diagnosis using data visualization and machine learning applications. Healthcare 2020, 8, 111.
108. Tharwat, A.; Hassanien, A.E.; Elnaghi, B.E. A BA-based algorithm for parameter optimization of support vector machine. Pattern Recognit. Lett. 2017, 93, 13–22.
109. Liu, X.; Mei, M.; Liu, J.; Hu, W. Microcalcification detection in full-field digital mammograms with PFCM clustering and weighted SVM-based method. EURASIP J. Adv. Signal Process. 2015, 2015, 1–13.
110. de Nazaré Silva, J.; de Carvalho Filho, A.O.; Silva, A.C.; De Paiva, A.C.; Gattass, M. Automatic detection of masses in mammograms using quality threshold clustering, correlogram function, and SVM. J. Digit. Imaging 2015, 28, 323–337.
111. Ancy, C.; Nair, L.S. An efficient CAD for detection of tumour in mammograms using SVM. In Proceedings of the 2017 IEEE International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 6–8 April 2017; pp. 1431–1435.
112. Qayyum, A.; Basit, A. Automatic breast segmentation and cancer detection via SVM in mammograms. In Proceedings of the 2016 IEEE International Conference on Emerging Technologies (ICET), Islamabad, Pakistan, 18–19 October 2016; pp. 1–6.
113. Sharma, S.; Khanna, P. Computer-aided diagnosis of malignant mammograms using Zernike moments and SVM. J. Digit. Imaging 2015, 28, 77–90.
114. Vijayarajeswari, R.; Parthasarathy, P.; Vivekanandan, S.; Basha, A.A. Classification of mammogram for early detection of breast cancer using SVM classifier and Hough transform. Measurement 2019, 146, 800–805.
115. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
116. Lee, J.; Nishikawa, R.M. Automated mammographic breast density estimation using a fully convolutional network. Med. Phys. 2018, 45, 1178–1190.
117. Xu, S.; Adeli, E.; Cheng, J.Z.; Xiang, L.; Li, Y.; Lee, S.W.; Shen, D. Mammographic mass segmentation using multichannel and multiscale fully convolutional networks. Int. J. Imaging Syst. Technol. 2020, 30, 1095–1107.
118. Hai, J.; Qiao, K.; Chen, J.; Tan, H.; Xu, J.; Zeng, L.; Shi, D.; Yan, B. Fully convolutional densenet with multiscale context for automated breast tumor segmentation. J. Healthc. Eng. 2019, 2019.
119. Sathyan, A.; Martis, D.; Cohen, K. Mass and Calcification Detection from Digital Mammograms Using UNets. In Proceedings of the 2020 7th IEEE International Conference on Soft Computing & Machine Intelligence (ISCMI), Stockholm, Sweden, 14–15 November 2020; pp. 229–232.
120. Li, S.; Dong, M.; Du, G.; Mu, X. Attention dense-u-net for automatic breast mass segmentation in digital mammogram. IEEE Access 2019, 7, 59037–59047.
121. AlGhamdi, M.; Abdel-Mottaleb, M.; Collado-Mesa, F. Du-net: Convolutional network for the detection of arterial calcifications in mammograms. IEEE Trans. Med. Imaging 2020, 39, 3240–3249.
122. Xiao, H.; Wang, Q.; Liu, Z.; Huang, J.; Zhou, Y.; Zhou, Y.; Xu, W. CSABlock-based Cascade RCNN for Breast Mass Detection in Mammogram. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Korea, 16–19 December 2020; pp. 2120–2124.
123. Zhang, L.; Li, Y.; Chen, H.; Cheng, L. Mammographic Mass Detection by Bilateral Analysis Based on Convolution Neural Network. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 784–788.
124. Agarwal, R.; Díaz, O.; Yap, M.H.; Lladó, X.; Martí, R. Deep learning for mass detection in Full Field Digital Mammograms. Comput. Biol. Med. 2020, 121, 103774.
125. Ben-Ari, R.; Akselrod-Ballin, A.; Karlinsky, L.; Hashoul, S. Domain specific convolutional neural nets for detection of architectural distortion in mammograms. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, 18–21 April 2017; pp. 552–556.
126. Zhang, Z.; Wang, Y.; Zhang, J.; Mu, X. Comparison of multiple feature extractors on Faster RCNN for breast tumor detection. In Proceedings of the 2019 8th IEEE International Symposium on Next Generation Electronics (ISNE), Zhengzhou, China, 9–10 October 2019; pp. 1–4.
127. Bhatti, H.M.A.; Li, J.; Siddeeq, S.; Rehman, A.; Manzoor, A. Multi-detection and Segmentation of Breast Lesions Based on Mask RCNN-FPN. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Korea, 16–19 December 2020; pp. 2698–2704.
128. Fan, M.; Li, Y.; Zheng, S.; Peng, W.; Tang, W.; Li, L. Computer-aided detection of mass in digital breast tomosynthesis using a faster region-based convolutional neural network. Methods 2019, 166, 103–111.
129. Ribli, D.; Horváth, A.; Unger, Z.; Pollner, P.; Csabai, I. Detecting and classifying lesions in mammograms with deep learning. Sci. Rep. 2018, 8, 1–7.
130. Wu, E.; Wu, K.; Cox, D.; Lotter, W. Conditional infilling GANs for data augmentation in mammogram classification. In Image Analysis for Moving Organ, Breast, and Thoracic Images; Springer: Berlin/Heidelberg, Germany, 2018; pp. 98–106.
131. Wu, E.; Wu, K.; Lotter, W. Synthesizing lesions using contextual GANs improves breast cancer classification on mammograms. arXiv 2020, arXiv:2006.00086.
132. Shen, T.; Hao, K.; Gou, C.; Wang, F.Y. Mass Image Synthesis in Mammogram with Contextual Information Based on GANs. Comput. Methods Programs Biomed. 2021, 106019.
133. Becker, A.S.; Jendele, L.; Skopek, O.; Berger, N.; Ghafoor, S.; Marcon, M.; Konukoglu, E. Injecting and removing malignant features in mammography with CycleGAN: Investigation of an automated adversarial attack using neural networks. arXiv 2018, arXiv:1811.07767.
134. Korkinof, D.; Rijken, T.; O’Neill, M.; Yearsley, J.; Harvey, H.; Glocker, B. High-resolution mammogram synthesis using progressive generative adversarial networks. arXiv 2018, arXiv:1807.03401.
135. Kallenberg, M.; Petersen, K.; Nielsen, M.; Ng, A.Y.; Diao, P.; Igel, C.; Vachon, C.M.; Holland, K.; Winkel, R.R.; Karssemeijer, N.; et al. Unsupervised deep learning applied to breast density segmentation and mammographic risk scoring. IEEE Trans. Med. Imaging 2016, 35, 1322–1331.
136. Yang, D.; Wang, Y.; Jiao, Z. Asymmetry Analysis with sparse autoencoder in mammography. In Proceedings of the International Conference on Internet Multimedia Computing and Service, Xi’an, China, 19–21 August 2016; pp. 287–291.
137. Petersen, K.; Chernoff, K.; Nielsen, M.; Ng, A.Y. Breast density scoring with multiscale denoising autoencoders. In Proceedings of the STMI Workshop at 15th Int. Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), Nice, Italy, 5 October 2012.
138. Selvathi, D.; Poornila, A.A. Breast cancer detection in mammogram images using deep learning technique. Middle-East J. Sci. Res. 2017, 25, 417–426.
139. Selvathi, D.; AarthyPoornila, A. Performance analysis of various classifiers on deep learning network for breast cancer detection. In Proceedings of the 2017 IEEE International Conference on Signal Processing and Communication (ICSPC), Coimbatore, India, 28–29 July 2017; pp. 359–363.
140. Taghanaki, S.A.; Kawahara, J.; Miles, B.; Hamarneh, G. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification. Comput. Methods Programs Biomed. 2017, 145, 85–93.
141. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
142. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
143. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
144. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv 2015, arXiv:1506.01497.
145. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson: Hoboken, NJ, USA, 2020.
146. Samala, R.K.; Chan, H.P.; Hadjiiski, L.M.; Helvie, M.A.; Cha, K.H.; Richter, C.D. Multi-task transfer learning deep convolutional neural network: Application to computer-aided diagnosis of breast cancer on mammograms. Phys. Med. Biol. 2017, 62, 8894.
147. Murtaza, G.; Shuib, L.; Wahab, A.W.A.; Mujtaba, G.; Nweke, H.F.; Al-garadi, M.A.; Zulfiqar, F.; Raza, G.; Azmi, N.A. Deep learning-based breast cancer classification through medical imaging modalities: State of the art and research challenges. Artif. Intell. Rev. 2020, 53, 1655–1720.
148. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27.
149. Ratner, A.J.; Ehrenberg, H.R.; Hussain, Z.; Dunnmon, J.; Ré, C. Learning to compose domain-specific transformations for data augmentation. Adv. Neural Inf. Process. Syst. 2017, 30, 3239.
150. Hussain, Z.; Gimenez, F.; Yi, D.; Rubin, D. Differential data augmentation techniques for medical imaging classification tasks. In AMIA Annual Symposium Proceedings; American Medical Informatics Association: Bethesda, MD, USA, 2017; Volume 2017, p. 979.
151. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434.
152. Singh, V.K.; Romani, S.; Rashwan, H.A.; Akram, F.; Pandey, N.; Sarker, M.M.K.; Abdulwahab, S.; Torrents-Barrena, J.; Saleh, A.; Arquez, M.; et al. Conditional generative adversarial and convolutional networks for X-ray breast mass segmentation and shape classification. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 833–840.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
