Systematic Review

From Machine Learning to Ensemble Approaches: A Systematic Review of Mammogram Classification Methods

by Hanifah Rahmi Fajrin 1,2 and Se Dong Min 1,3,*

1 Department of Software Convergence, Soon Chun Hyang University, Asan 31538, Republic of Korea
2 Department of Medical Electronics Technology, Universitas Muhammadiyah Yogyakarta, Yogyakarta 55183, Indonesia
3 Department of Medical IT Engineering, Soon Chun Hyang University, Asan 31538, Republic of Korea
* Author to whom correspondence should be addressed.
Diagnostics 2025, 15(22), 2829; https://doi.org/10.3390/diagnostics15222829
Submission received: 2 October 2025 / Revised: 30 October 2025 / Accepted: 2 November 2025 / Published: 7 November 2025
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

Background/Objectives: Breast cancer remains one of the leading causes of mortality among women, necessitating continued advancements in diagnostic methods to enhance early detection and treatment outcomes. This review explores the current landscape of breast cancer classification, focusing on machine learning (ML), deep learning (DL), and hybrid/ensemble models. Methods: A systematic search following PRISMA guidelines identified 50 eligible studies published between 2018 and 2025. Studies were included based on their use of mammogram datasets and implementation of computer-aided diagnosis methods for classification. Models were compared in terms of preprocessing, feature extraction, optimization strategies, and classification performance. Results: Representative high-performing models illustrate the strengths and limitations of each approach. In ML, an optimized ELM achieved 100% accuracy on MIAS. DL methods such as Vision Transformers also reached 100% accuracy on DDSM, outperforming conventional CNNs. Hybrid models, particularly IEUNet++, achieved 99.87% accuracy, offering robust multi-class classification. Conclusions: While ML and DL approaches can achieve near-perfect accuracy, they typically focus on binary classification tasks and require extensive preprocessing, feature extraction, and optimization. In contrast, hybrid methods provide comparable or superior performance while simultaneously addressing multi-class classification with fewer handcrafted steps, highlighting their robustness. These findings underscore the need for innovative solutions that balance model accuracy, interpretability, and resource efficiency. By addressing these challenges, future classification systems can better support early breast cancer detection and improve patient outcomes.

1. Introduction

Breast cancer continues to pose a major global health challenge, affecting millions of women each year and ranking among the leading causes of cancer-related deaths [1]. In 2020, the World Health Organization reported 2.3 million new cases of breast cancer and approximately 685,000 fatalities [2]. The critical role of early detection in improving patient outcomes cannot be overstated, as timely diagnosis can raise the survival rates to nearly 90% [3]. Considering this, the development of advanced diagnostic tools is essential to support clinicians in making accurate and timely decisions.
Among the processes integral to breast cancer diagnostics is the classification of mammographic images, which involves distinguishing among normal, benign, and malignant tissues, including cases where benign and malignant findings co-occur [4] (Figure 1). Effective classification aids in guiding clinical decisions for further examinations or treatments [5]. However, this task is not without its complexities. The diverse presentation of tumor characteristics, such as size, shape, and tissue density [6], poses significant challenges, particularly in cases involving dense breast tissue where visual differentiation is difficult [7].
To enhance the accuracy of classification and support radiologists, Computer-Aided Diagnosis (CAD) systems have become significant. These systems incorporate a range of classification methods that aim to improve diagnostic efficiency and reduce the likelihood of misinterpretation [8]. Traditional classification techniques, while foundational, often struggle with handling the heterogeneous nature of mammogram data. Machine learning (ML) approaches have addressed some of these limitations by enabling models to learn from data and adapt to complex patterns [9]. More recently, deep learning (DL) models, particularly convolutional neural networks (CNNs), have shown notable success in automating feature extraction and achieving high classification accuracy. Despite their promise, DL models can be limited by their need for large, well-annotated datasets and the significant computational power required for training [10]. In response to these challenges, hybrid and ensemble models have been explored as a means to optimize classification outcomes. By combining the strengths of traditional ML algorithms and DL architectures, hybrid/ensemble models aim to deliver enhanced accuracy and adaptability [11].
Previously, several review articles have addressed the application of machine learning and deep learning in breast cancer classification using mammograms [8,9,10,11,12,13]. For instance, refs. [9,10] provided overviews of ML/DL applications but did not explore ensemble learning in depth. Refs. [8,12] mainly focused on CAD systems, lacking comparative analysis across recent classification models. Other reviews, such as refs. [11,13], offer broad discussions but do not highlight research gaps or trends in ensemble methods for mammogram classification. Unlike existing review papers that broadly cover breast cancer detection or combine various imaging modalities, this review focuses exclusively on mammogram-based classification using machine learning, deep learning, and hybrid/ensemble methods. It proposes a structured taxonomy, highlights performance trade-offs, and critically compares the strengths and limitations of each approach. To the best of our knowledge, no prior review has offered such a focused and in-depth comparative analysis dedicated solely to classification techniques for mammogram images. The comparative analysis and clear taxonomy presented in this study can streamline model selection processes, support informed decision-making in computer-aided diagnosis systems, and foster further exploration in ensemble-based classification strategies. Accordingly, the primary contributions of this review can be summarized as follows:
  • An evaluation of a wide range of classification methods, from machine learning, deep learning to hybrid/ensemble models, applied specifically to breast cancer diagnosis using mammogram images, along with their integration into CAD systems.
  • Through a critical comparative analysis of recent works, the study highlights performance trends, trade-offs, and taxonomy that can assist researchers and practitioners in choosing appropriate models.
  • An exploration of limitations encountered in current research and practical implementation, followed by recommendations intended to guide future investigations and support the advancement of more effective detection tools.
The remainder of this review is structured as follows: Section 2 describes the methodology employed in this study, including the PRISMA framework and criteria for article selection. Section 3 presents a comprehensive overview of classification techniques used in mammogram analysis, including machine learning, deep learning, and hybrid or ensemble methods. Section 4 provides a critical discussion of the reviewed approaches, highlighting key insights, comparative observations, and gaps in the literature. Section 5 outlines the current challenges, potential opportunities, and future research directions in the field. Finally, Section 6 concludes the review by summarizing key findings and discussing their broader implications for research and clinical practice.

2. Materials and Methods

This systematic review adhered to the PRISMA framework for a rigorous and transparent selection of relevant studies [14]. The PRISMA flow diagram (Figure 2) illustrates the three main stages: identification, screening, and inclusion.
  • Identification: A comprehensive search was conducted in four major databases—Scopus, PubMed, SpringerLink, and IEEE Xplore—to identify studies on mammogram-based computer-aided detection (CAD). The search covered publications from 2018 to 2025 using keywords such as “CAD for mammogram”, “mammogram classification”, “deep learning mammogram”, “machine learning mammogram”, and “hybrid/ensemble mammogram”. A total of 4972 records were retrieved (Scopus = 2570; PubMed = 860; SpringerLink = 1107; IEEE Xplore = 435). Before screening, 2867 records were removed, including 528 non-original articles (e.g., reviews, editorials), 2033 duplicates, and 306 studies published before 2018.
  • Screening and eligibility assessment: After initial removals, 2105 records underwent title and abstract screening to remove studies that were clearly irrelevant to mammogram-based segmentation or classification. This step excluded 1990 records. The remaining 115 studies were then subjected to a full-text eligibility assessment. Articles were excluded at this stage if they:
    • were non-journal publications (e.g., conference papers, book chapters),
    • lacked a primary focus on classification (e.g., preprocessing techniques, feature extraction/selection, segmentation, optimization algorithms),
    • used imaging modalities other than mammography,
    • were inaccessible due to paywalls, or
    • did not provide sufficient methodological or result details relevant to classification.
As a result, 65 full-text articles were excluded.
  • Inclusion: After applying these criteria, 50 studies were included in the review, offering insights into various classification techniques pertinent to mammogram-based breast cancer detection.

3. Results

In this section, we present the main findings of our systematic review on mammogram classification methods. The results are organized to highlight how different computational approaches, ranging from traditional machine learning (ML) to deep learning (DL) and hybrid/ensemble methods, have been applied in breast cancer detection. Each classification strategy has its own strengths in handling the unique challenges of breast cancer imaging, such as distinguishing subtle variations in tumor appearance and adapting to diverse imaging conditions [12]. To illustrate this categorization, Figure 3 presents a taxonomy of mammogram classification methods, where representative algorithms are shown for each group. For instance, ML-based approaches are represented by SVM, ELM, RF, and DT; DL-based approaches include CNN variants such as ResNet, DenseNet, YOLO, and Transformers; while hybrid/ensemble approaches combine models (e.g., CNN–SVM, CNN–ELM, or optimization-based ensembles). Following this taxonomy, Section 3.1, Section 3.2 and Section 3.3 present a detailed discussion of ML, DL, and hybrid/ensemble-based classification approaches.

3.1. Machine Learning (ML)-Based Classification

Machine Learning (ML) has transformed breast cancer detection by allowing for the development of models that can learn patterns from mammogram data and classify images into diagnostic categories such as normal, benign, and malignant [15]. ML-based classifiers rely on advanced algorithms such as Support Vector Machines (SVM), Extreme Learning Machines (ELM), Random Forests (RF), and k-Nearest Neighbor (k-NN). A summary of machine learning-based classification methods, including their algorithms, datasets, and performance outcomes, is presented in Table 1.
Support Vector Machine (SVM) is one of the most widely used classifiers in breast cancer detection, and it has been employed by several researchers with varying results. Avcı & Karakaya [17] applied SVM to the MIAS dataset, using k-means clustering for segmentation and extracting texture features such as the Gray-Level Co-occurrence Matrix (GLCM) and Gray-Level Run Length Matrix (GLRLM). Their model performed well in distinguishing benign from malignant tumors, although the small size of the MIAS dataset limited the generalizability of their results. Meanwhile, Ketabi et al. [18], working with the DDSM dataset, also employed SVM but used spectral clustering for segmentation and optimized their feature set using a Genetic Algorithm (GA), which reduced the feature set from 65 to 21. They achieved 90% accuracy but encountered difficulties with complex mass boundaries and overlapping tissues, limiting the model’s effectiveness in heterogeneous images. Sha et al. [19] adopted a different approach, using an SVM classifier after feature extraction with a Convolutional Neural Network (CNN) and optimizing the features with the Grasshopper Optimization Algorithm (GOA). Tested on the MIAS and DDSM datasets, this model achieved a high accuracy of 92%, with sensitivity and specificity both surpassing the results of [17]. However, the computational cost of the model in [19] was significantly higher due to the complexity of the optimization process.
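To make this typical ML pipeline concrete, the sketch below assembles GLCM texture features and an SVM classifier in the spirit of [17]. It is a minimal illustration only: the random ROIs, labels, and SVM hyperparameters are placeholders, not the original study’s configuration, and segmentation is assumed to have been done upstream.

```python
# Minimal sketch of a GLCM-texture + SVM pipeline (in the spirit of [17]).
# The ROIs and labels below are random stand-ins for segmented mammogram patches.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def glcm_features(roi_u8):
    """Extract a small GLCM texture vector from an 8-bit ROI."""
    glcm = graycomatrix(roi_u8, distances=[1, 3],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rois = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
labels = np.random.randint(0, 2, 40)          # 0 = benign, 1 = malignant (placeholder)

X = np.array([glcm_features(r) for r in rois])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)   # illustrative hyperparameters
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```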
Another popular ML classifier is the Extreme Learning Machine (ELM), known for its efficiency in training large datasets. Both Wang et al. (2019) [16] and Muduli et al. (2021) [21] used ELM for classifying breast masses, but they approached feature extraction and model design differently. Wang et al. [16] used ELM on a private dataset of 400 mammograms, extracting features using a CNN, which focused on the morphology, texture, and density of breast masses. ELM achieved a classification accuracy of 96.2%, with high sensitivity and specificity, though the study noted that manual feature extraction introduced variability depending on expert input [16]. Muduli et al. [21] adopted a more complex framework, integrating Particle Swarm Optimization (PSO) with ELM, which was tested on the MIAS, DDSM, and INbreast datasets. Their model achieved even higher accuracies of 98.94% on MIAS and 98.76% on DDSM and INbreast due to the combination of Fast Discrete Curvelet Transform (FDCT) for feature extraction and PCA for dimensionality reduction; this research’s scheme can be seen in Figure 4. However, the complexity of this model and its high computational cost were noted as significant drawbacks, particularly for large-scale clinical implementation [21].
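For readers unfamiliar with ELM, the following minimal sketch shows its core mechanism: a randomly initialized hidden layer whose output weights are solved in closed form, which is what makes training fast. Metaheuristic variants such as the PSO-optimized ELM of [21] replace the random input weights and biases with optimized values; the class below keeps the plain random version for clarity.

```python
# Basic extreme learning machine: random hidden layer, closed-form output weights.
# In optimized variants (e.g., [21]), W and b would be tuned by PSO rather than sampled.
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        T = np.eye(int(y.max()) + 1)[y]                 # one-hot targets
        self.beta = np.linalg.pinv(self._hidden(X)) @ T  # least-squares output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Toy usage with random stand-in features (real inputs would be extracted descriptors).
rng = np.random.default_rng(1)
X, y = rng.standard_normal((100, 30)), rng.integers(0, 2, 100)
print("train accuracy:", (ELM(100).fit(X, y).predict(X) == y).mean())
```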
Random Forest (RF) is another widely used ML classifier in mammogram classification due to its robustness and ability to handle large datasets. Avcı and Karakaya [17] tested RF on the MIAS dataset, alongside other classifiers like SVM and k-Nearest Neighbor (k-NN), after segmenting the images using k-means clustering and extracting texture-based features such as GLCM and GLRLM. RF showed competitive performance compared to SVM, and while the MIAS dataset’s limited size impacted the model’s ability to generalize, the RF approach proved useful in managing feature variability. Thawkar & Ingolikar [22] applied RF to the DDSM dataset, using morphological and texture-based features for classification, and achieved an accuracy of 94.6%. RF’s ability to handle complex datasets without overfitting made it a valuable tool in this context, though the need for large, diverse datasets remains a limitation in ensuring robust generalization.
k-Nearest Neighbor (k-NN), though simpler than SVM or RF, has also been explored as an effective ML classifier. Sannasi et al. [20] applied Weighted k-NN (wKNN) to the MIAS and INbreast datasets, achieving an accuracy of 84.35% on MIAS and 83.19% on INbreast. To optimize performance, they used metaheuristic algorithms such as Particle Swarm Optimization (PSO), Dragonfly Optimization Algorithm (DFOA), and Crow-Search Optimization Algorithm (CSOA). Although k-NN’s simplicity is an advantage, the model’s performance was heavily influenced by the choice of optimization algorithm. DFOA required significant parameter tuning and showed slower convergence compared to PSO, increasing the model’s computational cost in classification. Finally, Decision Trees (DT) have been explored as a simpler, interpretable classification method. Thawkar and Ingolikar (2020) employed a decision tree model on the DDSM dataset, achieving an accuracy of 92.7%. Although decision trees provide transparency and are easy to interpret, they are prone to overfitting, particularly on small datasets [25].

3.2. Deep Learning (DL)-Based Classification

Deep learning (DL) methods have revolutionized mammogram classification, leveraging neural networks to automatically learn hierarchical features from images. An overview of deep learning models used in breast cancer classification can be found in Table 2.
Convolutional Neural Networks (CNNs) have been widely adopted for mammogram classification, with several studies utilizing different architectures. Han et al. (2024) [28], Liu et al. (2022) [30], and Shu et al. (2020) [31] all employed CNN-based models with DenseNet architectures for feature extraction. Han et al. [28] proposed a Deep Location Soft-Embedding-Based Network (DLSEN-RS), applying it to the CBIS-DDSM and INbreast datasets. The model achieved high accuracy, with an AUC of 0.962 and an accuracy of 91.5% for INbreast, and an AUC of 0.948 and accuracy of 89.4% for CBIS-DDSM. However, one limitation noted was the difficulty in determining the optimal number of features (k), which could impact model performance if not selected properly. Liu et al. [30] introduced a Deep Multiscale Multi-Instance Network for classification, also utilizing DenseNet for feature extraction. On the INbreast dataset, the model achieved an AUC of 0.975 and accuracy of 93.2%, outperforming Han’s model slightly, though the challenge of selecting optimal k values also persisted here. Shu et al. developed a Deep Neural Network with Region-Based Pooling and applied it to both the INbreast and CBIS-DDSM datasets, achieving an AUC of 0.982 and accuracy of 91.6% on INbreast, and an AUC of 0.882 with an accuracy of 83.9% on CBIS-DDSM. Shu’s model focused on region-based pooling, but this technique significantly increased the processing time and computational resource requirements, which may hinder real-time applications [31].
Another significant contribution came from Nasir Khan et al. (2019) [33], who proposed a Multi-View Feature Fusion Model using various CNN architectures, including VGGNet, ResNet, and GoogLeNet, to classify mammogram images from the MIAS and CBIS-DDSM datasets. This multi-view approach incorporated images from different angles of the breast, achieving an AUC of 0.932 for detecting masses and calcifications, and an AUC of 0.84 for distinguishing between malignant and benign cases [33]. Transfer learning is another approach that has gained popularity in mammogram classification. Le et al. (2024) applied ResNet-34, pre-trained on ImageNet, to classify images from the DDSM and Hanoi Medical University (HMU) datasets. With transfer learning, they achieved a macro-AUC of 0.766 on the HMU dataset. Although transfer learning allowed them to leverage pre-trained networks for faster convergence and higher performance, the availability of annotated mammogram datasets remained a limitation, affecting the fine-tuning process and overall generalizability of the model [34].
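A minimal PyTorch sketch of this transfer learning recipe, assuming torchvision 0.13+ with pretrained weight enums: an ImageNet-pretrained ResNet-34 is frozen and only a new two-class head is trained. The dummy batch stands in for preprocessed mammogram patches (grayscale images replicated to three channels).

```python
# Transfer learning sketch: frozen ImageNet ResNet-34 backbone, trainable 2-class head.
# Pretrained weights are downloaded on first use.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
for p in model.parameters():                       # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)      # new benign-vs-malignant head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One hypothetical training step on a stand-in batch (3 x 224 x 224 inputs).
images = torch.randn(4, 3, 224, 224)
targets = torch.randint(0, 2, (4,))
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```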
YOLO models have also been adapted for mammogram classification. Anas et al. (2024) [29] applied an enhanced YOLOv5 network combined with Mask R-CNN for classification on the INbreast, CBIS-DDSM, and BNS datasets, reporting a false positive rate (FPR) of 0.049% and a false negative rate (FNR) of 0.029%. The model also achieved an impressive Matthews Correlation Coefficient (MCC) of 92.02%, although the computational complexity of training two networks (YOLOv5 and Mask R-CNN) simultaneously was highlighted as a major limitation. Meanwhile, ref. [50] utilized YOLOv5 for lesion detection and classification on the CBIS-DDSM and INbreast datasets, achieving a mean Average Precision (mAP) of 0.835 on INbreast and 0.498 on CBIS-DDSM (the flowchart can be seen in Figure 5). While the results were promising, the YOLO model tended to be biased toward smaller lesions, potentially missing larger, more complex tumors.

3.3. Hybrid/Ensemble Classification Methods

Ensemble/hybrid techniques combine multiple classifiers or integrate different machine learning models to leverage their strengths, thereby enhancing classification performance [51]. Table 3 provides an overview of the methods used in hybrid/ensemble approaches.
SVM and CNN combinations are a common hybrid approach employed to enhance classification performance. Ahmad et al. [52] developed a hybrid model called BreastNet-SVM, which combines a modified AlexNet CNN with an SVM classifier for final classification. Applied to the DDSM dataset, this model achieved an impressive accuracy of 99.16%, with a sensitivity of 97.13% and a specificity of 99.30%. Despite these high results, the performance of the model was sensitive to the choice of optimizers and hyperparameter tuning, which could affect its generalizability. Similarly, ref. [55] combined a CNN with an ELM classifier for breast cancer detection on the MIAS dataset, achieving an accuracy of 86%. However, the study emphasized the need for validation on larger and more diverse datasets. Furthermore, in the study conducted by [61], the authors explored a comparison between standalone SVM and ANN methods versus their hybrid model, SVM-ANN, for classifying mammogram images. Using the Mini-MIAS dataset, which consisted of 80 normal, 40 benign, and 40 malignant mammograms, the researchers found that standalone SVM achieved a classification accuracy of 78.8% for distinguishing normal from abnormal cases and 71.3% for benign versus malignant. The ANN, on the other hand, performed slightly better, with 83.1% accuracy for normal/abnormal classification and 78.8% for benign/malignant. Notably, the hybrid SVM-ANN model significantly outperformed both standalone methods, achieving an impressive 99.4% accuracy for normal versus abnormal.
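The sketch below illustrates the general CNN-plus-SVM hybrid pattern: a pretrained AlexNet (stock torchvision, not the modified BreastNet-SVM architecture of [52]) supplies 4096-dimensional deep features, and a scikit-learn SVM performs the final decision. The random tensors are placeholders for real preprocessed ROIs.

```python
# CNN feature extractor + SVM decision stage (generic sketch of the hybrid pattern).
import torch
import numpy as np
from torchvision import models
from sklearn.svm import SVC

backbone = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
backbone.classifier = backbone.classifier[:-1]   # drop final FC layer -> 4096-d features
backbone.eval()

@torch.no_grad()
def deep_features(batch):                        # batch: N x 3 x 224 x 224
    return backbone(batch).numpy()

# Stand-in data; real inputs would be preprocessed mammogram ROIs with true labels.
X = deep_features(torch.randn(20, 3, 224, 224))
y = np.random.randint(0, 2, 20)

svm = SVC(kernel="rbf").fit(X, y)                # SVM takes over the decision stage
print(svm.predict(X[:5]))
```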
Another approach integrating ensemble learning with feature weighting algorithms was proposed by [54]. They applied an ensemble model consisting of k-Nearest Neighbor (k-NN), bagging, and EigenClass algorithms, using a majority voting rule for classification. Their model was applied to both the MIAS and DDSM datasets, achieving an accuracy of 93.26% on DDSM and 91% on MIAS. This ensemble model benefited from the diversity of classifiers, but the computational complexity introduced by both the ensemble framework and feature weighting algorithms posed challenges, particularly in terms of processing time. Several studies also explored hybrid models combining optimization algorithms with classifiers. Muduli et al. (2020) [67] proposed a hybrid Moth Flame Optimization (MFO)-ELM model that combined the ELM classifier with the MFO algorithm to optimize the hidden layer weights and biases. Applied to both the MIAS and DDSM datasets, the model achieved excellent performance, with an accuracy of 99.76% for normal vs. abnormal classification and 98.80% for benign vs. malignant classification on the MIAS dataset. Despite these impressive results, the random initialization of ELM parameters occasionally introduced instability, which could affect the model’s reliability, though this was mitigated by the optimization algorithm.
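A hedged scikit-learn sketch of majority-vote ensembling in the style of [54]: k-NN and bagging are available off the shelf, while EigenClass has no standard implementation, so a decision tree stands in as the third voter. Synthetic data replaces the mammogram feature vectors; the `estimator` keyword assumes scikit-learn 1.2+.

```python
# Majority-voting ensemble sketch (k-NN + bagging + a stand-in third classifier).
from sklearn.ensemble import VotingClassifier, BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=25, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("bag", BaggingClassifier(estimator=DecisionTreeClassifier(),
                                  n_estimators=50, random_state=0)),
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),  # EigenClass stand-in
    ],
    voting="hard",                               # majority voting rule, as in [54]
)
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```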
Kalpana and Selvy (2024) [56] also utilized hybrid/ensemble techniques, proposing an ensemble model combining Naïve Bayes, Firefly Binary Grey Optimization (FBGO), and a Transfer-CNN (TCNN) coupled with Moth Flame Lion Optimization (MMFLO). Applied to the MIAS, INbreast, and BCDR datasets, the model achieved an accuracy of 96.3% with Naïve Bayes and 98% with TCNN. The study highlighted the complexity of combining multiple classifiers, with the computational load increasing significantly when blending Naïve Bayes with TCNN. Nevertheless, the ensemble model’s ability to perform well across multiple datasets demonstrated its versatility. In another study, Chakravarthy et al. (2024) [57] applied a hybrid approach by combining features extracted from four different CNN architectures (VGG16, VGG19, ResNet50, and DenseNet121) and merging them for final classification (the ensemble flowchart can be found in Figure 6). Tested on the MIAS, CBIS-DDSM, and INbreast datasets, their model achieved accuracy rates of 98.70% on MIAS, 97.73% on CBIS-DDSM, and 98.83% on INbreast. However, this approach introduced computational complexity due to the combination of multiple CNN models. The study also noted slight difficulties in discriminating between malignant and benign cases compared to normal cases, indicating that further refinement of the hybrid approach may be necessary for improving the classification of malignant cases. Lastly, ref. [60] investigated quantum transfer learning for breast cancer detection, applying a hybrid classical-quantum model that combined traditional neural networks with quantum enhancements. The study utilized the BCDR dataset, which consists of mammogram images categorized as benign or malignant. The proposed approach incorporated a quantum circuit attached to a pre-trained ResNet18 model, acting as a feature extractor while the quantum circuit performed the classification task. The results showed an accuracy of 84%, outperforming the classical standalone approach, which achieved 67% accuracy. The comparison highlighted that the hybrid classical-quantum model demonstrated improved generalization and faster convergence.
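To illustrate the feature-fusion idea behind [57], the sketch below concatenates pooled features from two pretrained backbones (VGG16 and ResNet50) before a shared three-class head; the original study fuses four networks, and the simple linear fusion head shown here is a simplifying assumption.

```python
# Multi-backbone feature fusion sketch: concatenate pooled CNN features, then classify.
import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    def __init__(self, n_classes=3):             # normal / benign / malignant
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        res = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.vgg_feats = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1),
                                       nn.Flatten())                     # 512-d
        self.res_feats = nn.Sequential(*list(res.children())[:-1],
                                       nn.Flatten())                     # 2048-d
        self.head = nn.Linear(512 + 2048, n_classes)

    def forward(self, x):
        merged = torch.cat([self.vgg_feats(x), self.res_feats(x)], dim=1)
        return self.head(merged)

logits = FusionClassifier()(torch.randn(2, 3, 224, 224))
print(logits.shape)    # torch.Size([2, 3])
```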

4. Discussion

Figure 7 illustrates the distribution of classification methods based on references in this article. In breast cancer classification, ResNet and Recurrent Neural Network (RNN), as deep learning models, represented a significant portion of studies. ResNet’s deep architecture, especially ResNet152V2, excels in feature extraction and classification due to its residual connections, which capture complex patterns in mammograms. For example, ref. [43] utilized a ResNet152V2-based approach within a three-step framework, achieving perfect accuracy for breast density and tumor malignancy classification. RNNs, in turn, enhanced temporal data handling, making them particularly useful for tracking changes across mammogram slices, and reached an accuracy of 98% for tumor classification (benign vs. malignant).
Other deep learning CNN architectures, including VGG, Channel Attention, AlexNet, and DenseNet, remain widely used in breast cancer classification due to their unique strengths in feature extraction and image analysis. VGG, appearing in 8% of studies, is favored for its straightforward yet deep structure, enabling detailed feature extraction with manageable computational demands, ideal for varied research settings. Channel Attention, used in 4% of studies, enhances classification accuracy by focusing on critical regions in mammograms, such as calcifications or tumor borders, which are diagnostically significant. AlexNet and DenseNet, each also employed in 4% of studies, contribute hierarchical and densely connected layers, respectively, to improve feature propagation in complex analyses. AlexNet’s multi-layered feature extraction is effective for capturing various levels of detail, while DenseNet’s connectivity facilitates the learning of intricate patterns, especially in mammograms with subtle or complex textures.
Meanwhile, hybrid/ensemble approaches prove effective by combining CNNs and SVMs. In one example, ref. [59] integrated EfficientNet-B7 and ConvNeXt-101, achieving high AUC scores (up to 0.98) across multiple datasets. This method allows for adaptable feature representation across diverse textures in mammograms, reducing false positives and ensuring reliable diagnostic results. SVM combination methods, representing 11% of studies, are used to leverage SVM’s effectiveness in binary classification along with deep learning models for feature extraction. Ahmad et al. (2023) combined SVM with an AlexNet model, achieving an accuracy of 99.16% on the DDSM dataset for benign versus malignant classification [52]. This hybrid approach combines the precision of SVM with the comprehensive feature extraction of CNNs, improving overall diagnostic accuracy.
The standalone SVM machine learning approach, widely used by researchers in around 8% of studies, remains a reliable option for straightforward classification tasks. Sha et al. (2020) utilized SVM on the MIAS and DDSM datasets, achieving 92% accuracy. SVM’s simplicity makes it a suitable choice for smaller datasets where classes are well-separated [19]. Other machine learning-based methods (ELM, RF, and k-NN) each represented 6% of studies. Mohanty et al. (2020) [24] implemented ELM, achieving over 99% accuracy across the MIAS, DDSM, and BCDR datasets. ELM’s fast training makes it suitable for scenarios requiring rapid classification. Random Forest, which builds multiple decision trees to reduce overfitting, is effective for complex datasets, while k-NN provides a simple yet robust classification based on nearest-neighbor analysis, ideal for binary tasks.
All in all, these methods reflect the diversity of classification techniques in breast cancer detection. ML models like SVM remain practical for well-defined classification tasks [68], while deep learning architectures such as ResNet and VGG excel in recognizing intricate textures [69]. Hybrid approaches, combining strengths from different models, provide adaptable solutions across varied mammographic imaging challenges, ensuring an optimal balance between accuracy and efficiency. The inclusion of feature extraction and optimization techniques further boosts classification accuracy, as seen in the studies by [56,67]. For example, Naïve Bayes combined with Firefly Binary Grey Optimization (FBGO) achieved a 96.3% accuracy on the MIAS dataset, and TCNN combined with Moth Flame Lion Optimization (MMFLO) achieved 98%, underscoring the impact of optimizing classifier parameters for improved sensitivity and specificity [56]. Similarly, MFO-ELM, which integrates Lifting Wavelet Transform for feature extraction and optimizes ELM parameters, achieved near-perfect classification on the MIAS dataset with 99.76% accuracy for normal vs. abnormal cases, illustrating how optimized models refine feature capture and classification accuracy across diverse mammographic features [67]. Although computationally intensive, these optimizations are valuable for tasks requiring fine-tuned parameter adjustments in complex images (Table 4).
An in-depth comparative analysis of classification performance in breast cancer detection highlighted significant variations across machine learning, deep learning, and hybrid/ensemble approaches. As summarized in Table 4, machine learning (ML) models demonstrated accuracy levels ranging from 82.42% to 100%, with sensitivity reaching up to 99.1% and specificity extending to 98.72%. These results confirm ML’s reliability, particularly in binary classification tasks (e.g., normal vs. abnormal), when paired with well-extracted statistical features and classifiers such as Support Vector Machine or Random Forest. However, while the precision ranges between 82.42% and 83.87%, ML-based methods may underperform when dealing with class imbalance or subtle radiological variations in dense tissue regions.
Deep learning (DL) architectures exhibited an even broader accuracy range, between 70% and 100%, but often surpassed ML in terms of precision (up to 99.16%). Their strength lies in automated hierarchical feature extraction, allowing them to detect abstract imaging patterns that traditional models may overlook. Nonetheless, dependency on large, annotated datasets and high computational demands can hinder their adaptability in clinical settings with limited resources.
Interestingly, hybrid models that integrated the strengths of both ML and DL techniques achieved balanced and consistently high performance across the evaluation metrics. With the accuracy ranging from 74.96% to 99.87%, sensitivity from 96.2% to 97.77%, and specificity up to 99.8%, these models offer improved robustness, especially in multi-class classification scenarios (normal, benign, and malignant). Their ability to combine precise feature extraction with optimized decision-making layers makes them particularly suitable for complex mammographic analysis.
Table 5 presents a representative comparison of the top-performing methods across three main classification approaches: Machine Learning (ML), Deep Learning (DL), and Hybrid models. This table highlights a single, best-performing method within each category based on classification accuracy and reported evaluation metrics. This approach allows for a focused comparison of preprocessing steps, feature extraction/selection, optimization, and classification strategies employed by each method. In the ML-based approach proposed by [21], classification begins with extracting features from mammogram images using the Fast Discrete Curvelet Transform (FDCT), which is well-suited for capturing edge and texture details. To reduce redundancy and retain only the most discriminative information, dimensionality reduction is applied through Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). These processed features are then classified using an optimized version of Extreme Learning Machine (ELM), enhanced by a Modified Particle Swarm Optimization (MODPSO) algorithm. MODPSO dynamically adjusts the input weights and biases of ELM, addressing common limitations such as instability and overfitting. This optimization process significantly improves the classifier’s ability to converge quickly and generalize well across datasets. The method achieved exceptional accuracy, notably 100% on the MIAS dataset, and above 98% on the DDSM and INbreast datasets. However, this pipeline involves several complex stages—including handcrafted feature extraction, selection, and optimization—making it computationally demanding. Furthermore, it only addresses binary classification (benign vs. malignant), limiting its utility in multi-class diagnostic scenarios.
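A rough scikit-learn sketch of the dimensionality-reduction stage of this pipeline, with clearly stated substitutions: random vectors emulate the FDCT coefficients (there is no standard Python FDCT package), and an MLP stands in for the MODPSO-optimized ELM classifier.

```python
# PCA -> LDA reduction feeding a classifier, mirroring the pipeline structure of [21].
# Stand-ins: random "curvelet" features and an MLP in place of the optimized ELM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 500))              # stand-in FDCT feature vectors
y = rng.integers(0, 2, 120)                      # benign / malignant labels (placeholder)

pipe = make_pipeline(
    PCA(n_components=40),                        # discard redundant feature directions
    LinearDiscriminantAnalysis(n_components=1),  # maximize class separability (binary -> 1-d)
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0),
)
print("train accuracy:", pipe.fit(X, y).score(X, y))
```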
Meanwhile, for deep learning (DL) approaches, a perfect classification performance (100% accuracy, AUC, and F1-score) was achieved in distinguishing between benign and malignant cases. This result was obtained through the implementation of Vision Transformer (ViT) architectures, including Swin Transformer and Pyramid Vision Transformer (PVT). These models were trained using a transfer learning strategy, where pre-trained weights from ImageNet were fine-tuned on the DDSM mammography dataset. Unlike CNNs that rely on sequential, localized feature extraction, ViT treats the image as a sequence of patches, allowing for global contextual understanding from early layers, enhanced by self-attention mechanisms and positional embeddings. However, this approach also depends heavily on preprocessing techniques and data augmentation strategies to mitigate class imbalance. In comparison, traditional CNN-based models such as ResNet18 and EfficientNetB0 achieved lower performance, with AUC values ranging between 0.80 and 0.85 and accuracy scores between 90% and 95%, indicating the superior generalization and precision of ViT-based models for binary classification [42].
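A minimal fine-tuning sketch for a ViT classifier, assuming torchvision’s ViT-B/16 rather than the Swin or PVT architectures used in [42]; the recipe (replace the head, fine-tune with a small learning rate) is the same. The stand-in batch represents preprocessed mammogram patches.

```python
# ViT fine-tuning sketch: ImageNet-pretrained ViT-B/16 with a new 2-class head.
import torch
import torch.nn as nn
from torchvision import models

vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
vit.heads.head = nn.Linear(vit.heads.head.in_features, 2)   # benign vs. malignant

optimizer = torch.optim.AdamW(vit.parameters(), lr=1e-5)    # small LR for fine-tuning
criterion = nn.CrossEntropyLoss()

images = torch.randn(2, 3, 224, 224)                        # stand-in patch batch
targets = torch.tensor([0, 1])
loss = criterion(vit(images), targets)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```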
In the hybrid approach by [66], the authors proposed IEUNet++, a novel deep hybrid model that integrates InceptionResNet, EfficientNetB7, and a U-Net-based segmentation backbone. Unlike traditional ensemble strategies that combine independent CNN models through fuzzy or voting mechanisms, IEUNet++ leverages a unified encoder–decoder architecture with multi-scale feature fusion. This design allows for the simultaneous segmentation and classification of mammogram images, capturing both local lesion details and global contextual features. The model was evaluated on the MIAS, CBIS-DDSM, and INbreast datasets, achieving exceptionally high performance with an accuracy of 99.87% across normal, benign, and malignant categories. Compared to conventional CNN classifiers, IEUNet++ demonstrated superior robustness by reducing feature redundancy and enhancing the discriminative capacity for subtle lesion patterns.
These findings collectively show that while ML and DL approaches can yield excellent results, they often require extensive preprocessing, feature engineering, or optimization to reach their full potential—and are typically limited to binary classification tasks. In contrast, the hybrid approach demonstrated not only superior performance without additional optimization techniques, but also the capacity to effectively handle multi-class classification scenarios, making it a compelling candidate for broader clinical deployment.
Despite these advances in model performance, the reliability and generalizability of classification systems also depend on the datasets used for training and validation. Public mammogram datasets have therefore played a central role in enabling benchmarking, comparison across methods, and the reproducibility of research findings.
Several public datasets have been extensively employed for mammogram classification tasks, as summarized in Table 6. Among them, the MIAS dataset (https://www.repository.cam.ac.uk/items/b6a97f0c-3b9b-40ad-8f18-3d121eef1459) (accessed on 12 February 2025) remains the most frequently used benchmark, despite its relatively small size of 322 images, which limits generalization. The DDSM dataset provides over 10,000 mammograms, making it one of the largest available, although its older image quality poses challenges for modern algorithms. To address these issues, CBIS-DDSM (https://www.cancerimagingarchive.net/collection/cbis-ddsm/) (accessed on 12 February 2025), a curated subset of DDSM with improved annotations, has become popular in recent studies. The INbreast dataset (https://www.kaggle.com/datasets/ramanathansp20/inbreast-dataset) (accessed on 12 February 2025), while small (410 images), is highly valued for its high-quality, pixel-level annotations. Lastly, the BCDR dataset (https://service.tib.eu/ldmservice/dataset/bcdr) (accessed on 12 February 2025) offers region-of-interest annotations and a moderate image size, though it is less frequently adopted compared to MIAS and DDSM. Overall, these datasets form the foundation of most mammogram classification research, with MIAS and DDSM dominating usage, while INbreast and CBIS-DDSM provide higher-quality but smaller-scale alternatives. In the table, “All kind” refers to datasets that include a variety of lesion types commonly found in mammography, such as masses, microcalcifications, architectural distortions, and asymmetries. This diversity makes them particularly useful for developing models that can generalize across different manifestations of breast cancer, rather than being restricted to only one lesion type.
However, many reviewed studies achieved high accuracy while still relying heavily on small or older datasets such as MIAS and INbreast. These datasets, although valuable for benchmarking, present several technical limitations. Their limited sample sizes, imbalanced datasets [37,45], and homogeneous image characteristics increase the risk of overfitting [61,62], making the models less reliable when tested on new or heterogeneous populations [17,27]. Moreover, the narrow diversity in breast density, lesion appearance, and imaging conditions reduces the model’s capacity to learn generalized features, while the outdated quality of older datasets like DDSM introduces additional domain gaps compared to modern clinical images [18,47]. As several studies in this review acknowledged, these factors can lead to an inflated or dataset-specific performance that does not translate effectively to clinical settings. Therefore, future research should emphasize validation using larger, multi-institutional, and demographically diverse datasets, as well as cross-dataset and domain-shift evaluations to ensure robust and clinically meaningful performance, for instance, by utilizing newer datasets such as VinDr-Mammo (20,000 images) [70] and RSNA (over 50,000 images) [71], which provide more recent, high-resolution mammograms with standardized DICOM formats and comprehensive annotations.

5. Challenges, Opportunities, and Future Directions in Breast Cancer Detection

Breast cancer detection using mammograms presents numerous challenges across classification stages. As methods evolve from machine learning (ML)-based to deep learning (DL)-based and hybrid/ensemble approaches, each step brings its own set of difficulties that can limit the effectiveness, scalability, and clinical adoption of these techniques. Based on the methods discussed in Section 3 and Section 4, this section explores the primary challenges faced in classification and in breast cancer detection more generally.

5.1. Classification Challenges

The challenges faced in classification, whether using ML-based, DL-based, or hybrid/ensemble models, are significant and closely tied to the success of segmentation. Key issues persist despite advancements in model development:
  • Feature Extraction: Classification models heavily depend on the quality of feature extraction [72,73]. Classifiers such as SVM and Decision Trees (DTs) rely on manual feature extraction techniques like GLCM or HoG, which may not capture the full complexity of tumor characteristics [74,75]. Even in DL models, where features are automatically learned, extracting meaningful features from small or low-contrast tumors remains a challenge [19,31].
  • Overfitting: Overfitting is a common issue in ML and DL classifiers, particularly when models are trained on small or imbalanced datasets like MIAS or INbreast [21,28]. Models such as SVM, ELM, and even advanced CNN-based classifiers tend to perform well on training data but often fail to generalize to new, unseen data [16,17]. Hybrid models that combine multiple classifiers also risk overfitting when trained on small datasets [57].
  • Computational resources and time: DL models and hybrid approaches often require significant computational resources for training and inference. Models such as YOLO combined with Mask R-CNN or DenseNet architectures are computationally expensive and may not be feasible for real-time clinical applications [29,30]. Moreover, hybrid approaches that combine optimization algorithms with classification models, such as MODPSO-ELM, can further increase training times, limiting clinical implementation [21].

5.2. Opportunities and Future Direction of Breast Cancer Detection

The following subsection outlines key strategies that hold potential for enhancing breast cancer detection systems, emphasizing improvements in feature extraction, optimization, and model adaptability. By exploring these strategies, researchers can refine existing methods, adapt to varying imaging conditions, and facilitate real-time clinical applications.
  • Combining Feature Extraction with Classification: Hybrid models that integrate DL-based feature extraction (e.g., using CNNs) with ML classifiers, such as SVM and Random Forest (RF), have shown notable improvements in classification performance [52,55]. This fusion allows for better utilization of the learned hierarchical features from DL models, while ML classifiers can handle the final decision-making step. In studies like those by [21,56], hybrid models significantly boosted the accuracy in both segmentation and classification. Future research should explore more efficient combinations of these models and identify which pairing yields the best results under varying conditions.
  • Optimization Algorithms and Metaheuristics: Many hybrid methods include the use of metaheuristic algorithms [76], such as Particle Swarm Optimization (PSO), Genetic Algorithms (GA), or Moth Flame Optimization (MFO), to optimize model parameters and enhance performance [67]. These algorithms have proven effective in tuning model weights and improving the learning process [77], particularly when dealing with complex datasets like MIAS, DDSM, and INbreast. Future work should focus on integrating more sophisticated optimization techniques, such as reinforcement learning [78] or evolutionary algorithms, to further refine breast cancer detection models.
  • Transfer Learning: Transfer learning has emerged as a key opportunity for leveraging pre-trained DL models, such as ResNet, DenseNet, and EfficientNet, to reduce the computational burden associated with training deep models from scratch [34]. These models, trained on large datasets like ImageNet, can be fine-tuned [69] for specific breast cancer detection tasks [79], which allows researchers to overcome the challenges of limited mammogram datasets. Transfer learning has shown promise in improving classification accuracy while reducing the training time. Beyond efficiency, transfer learning also improves representation quality. For example, ResNet/DenseNet/EfficientNet backbones preserve fine-grained image details through skip connections and multi-scale feature extraction layers [80], which is particularly beneficial for subtle or low-contrast mammographic lesions. By reusing pretrained convolutional filters that already capture edge, gradient, and texture patterns, the model can enhance lesion visibility even when intensity differences are minimal. Fine-tuning only the higher-level layers allows for adaptation to mammography’s domain characteristics, such as glandular tissue density and microcalcification patterns, while maintaining robust low-level representations learned from large-scale datasets. Accordingly, future directions should involve exploring more domain-specific pre-training techniques that are tailored to medical images [81], ensuring that models are better suited to the nuances of mammogram data.
  • Integration with Other Imaging Modalities: Another promising direction for future research is the integration of mammogram analysis with other imaging modalities [82], such as ultrasound and MRI. Combining information from multiple imaging [83] techniques could improve the accuracy and robustness of breast cancer detection models by providing complementary views of the same region, reducing the likelihood of false negatives [84]. Such multimodal fusion is especially valuable for subtle or low-contrast lesions that are difficult to identify on mammograms alone, as ultrasound and MRI provide richer tissue contrast, margin definition feature, and contextual cues that help delineate ambiguous structures [84,85,86]. Beyond imaging, multimodal learning can also combine mammograms with clinical records, pathology reports, or other tabular data, enabling richer feature representation and potentially improving diagnostic performance. Recent works [87] have shown that integrating imaging with structured clinical data enhances model generalization and supports more clinically relevant decision-making.
  • Real-time Application and Model Efficiency: A major future goal is to develop models that can be deployed in real-time clinical environments. Techniques such as model pruning [88], quantization, and knowledge distillation can be explored to reduce the size and computational requirements of deep learning models without sacrificing accuracy. These methods will be essential for integrating AI-driven breast cancer detection systems into everyday clinical workflows, especially in under-resourced healthcare settings. Recent studies have shown that pruning removes redundant connections, and quantization reduces precision from 32-bit to lower bit widths (e.g., INT8), significantly decreasing the inference latency and energy consumption without major accuracy loss [89,90,91]; a minimal sketch of these compression techniques follows this list.
  • Clinical Perspective and Translation: Beyond technical performance, clinical adoption is essential for breast cancer classification models. For real-world use, models must provide interpretability, reliability, and validation across diverse patient populations and imaging protocols. Interpretability can be supported through visualization tools such as Grad-CAM [36] or attention heatmaps [53], which help radiologists understand the model’s decision basis. Meanwhile, clinical reliability requires rigorous external validation using independent and multi-institutional datasets to verify robustness beyond training conditions [92]. In practice, this involves testing models on data from different hospitals or imaging devices, reporting results at clinically relevant operating points (e.g., maintaining high sensitivity with corresponding specificity), and evaluating whether AI assistance improves radiologist performance or reading efficiency. Difficult or uncertain cases should be referred to for manual review rather than automated decision-making. Finally, practical deployment also depends on the clear reporting of inference time, hardware needs, and integration into daily clinical workflow [93]. Integration into radiology practice further requires efficiency, regulatory approval, and minimization of false positives to ensure radiologist trust. While current systems show promising accuracy, most remain at the proof-of-concept stage, emphasizing the need for large-scale validation and collaboration between engineers and healthcare professionals [94].
  • Foundation and Large Vision Models: Although transfer learning models such as ResNet, DenseNet, and EfficientNet have demonstrated strong performance by reusing pretrained representations from large-scale datasets, recent advancements have shifted toward foundation models and large vision models (LVMs) that offer broader generalization and adaptability. Foundation models are large-scale deep architectures trained on massive and diverse images or multimodal datasets, enabling them to serve as general-purpose backbones that can be adapted to various medical imaging tasks with minimal fine-tuning. In medical imaging, LVMs such as Vision Transformers (ViT) [42], Segment Anything Model (SAM) [95], MedCLIP [96], and BioViL have shown strong potential in capturing fine-grained anatomical patterns, handling cross-domain variations, and linking visual features with textual clinical information. These models not only provide richer visual-semantic representations but also allow zero-shot or few-shot adaptation, which is particularly beneficial when labeled medical data are limited. Applying such models to mammography could improve lesion localization and classification performance by leveraging their multi-scale and multimodal understanding. Future research should explore how foundation and large vision models can be effectively fine-tuned, compressed, or adapted for mammographic imaging, balancing their computational cost with clinical feasibility while maintaining interpretability and reliability for real-world deployment.
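As a concrete illustration of the compression techniques mentioned in the real-time deployment item above, the sketch below applies L1 weight pruning followed by dynamic INT8 quantization to a stand-in PyTorch classifier; it is a generic example, not a model from the reviewed studies.

```python
# Post-training compression sketch: L1 pruning, then dynamic INT8 quantization.
import torch
import torch.nn as nn
from torch.ao.quantization import quantize_dynamic
from torch.nn.utils import prune

model = nn.Sequential(                 # stand-in for a trained diagnostic classifier
    nn.Flatten(),
    nn.Linear(64 * 64, 256), nn.ReLU(),
    nn.Linear(256, 2),
)

prune.l1_unstructured(model[1], name="weight", amount=0.5)  # zero smallest 50% of weights
prune.remove(model[1], "weight")       # make the pruned weights permanent

# Dynamic quantization converts Linear weights to INT8 for faster CPU inference.
quantized = quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(torch.randn(1, 1, 64, 64)))
```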

6. Conclusions

The advancement of breast cancer classification has evolved significantly through machine learning (ML), deep learning (DL), and hybrid/ensemble approaches, each offering distinct strengths and facing unique challenges. ML models, including popular classifiers like SVM, RF, and ELM, remain effective, particularly when combined with feature extraction and optimization techniques, but they rely heavily on careful feature selection to avoid performance degradation. Deep learning models, notably CNN architectures and vision transformers, excel through automatic feature extraction and robust performance, but they are typically resource-intensive and sensitive to data quality and imbalance. Hybrid and ensemble methods integrate multiple classifiers and diverse learning strategies, achieving improved accuracy and robustness, even though increased complexity and computational requirements could hinder clinical applicability. This review highlighted that each category of classification methods presents specific advantages and limitations. For instance, DL methods typically demonstrate superior accuracy and generalization potential, but their training demands pose practical deployment challenges. Conversely, ML and hybrid/ensemble approaches offer interpretable and resource-efficient alternatives, though their performance may vary considerably depending on the dataset characteristics and preprocessing procedures. The challenges identified, such as the difficulty of extracting reliable features from subtle or low contrast lesions, the risk of overfitting on small or imbalanced datasets, and the computational burden of training and deploying complex models, emphasize the need for ongoing research towards more generalized and interpretable models. Future research should focus on advancing transfer learning, multimodal integration, and optimization techniques that reduce computational load while enhancing robustness and interpretability. Strengthening clinical validation across diverse populations and imaging protocols will be essential to ensure diagnostic systems that are not only accurate, but also clinically practical and trustworthy. Ultimately, future efforts should prioritize building systems that are not only high-performing in controlled experiments, but are also reliable, explainable, and feasible for integration into real clinical workflows. This step will be key to transforming current research outcomes into truly usable and trustworthy diagnostic tools.

Author Contributions

Conceptualization, methodology, writing—original draft, formal analysis, investigation and data curation, H.R.F.; supervision and validation, S.D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by BK21 FOUR (Fostering Outstanding Universities for Research) (No. 5199990914048) and Global–Learning & Academic Research Institution for Master’s-PhD students, and Postdocs (G-LAMP) Program of the National Research Foundation of Korea (NRF) grant funded by the Ministry of Education (No. RS-2025-25441283).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

This article contains no studies with human or animal subjects performed by the authors.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CAD: Computer-Aided Detection
ML: Machine Learning
DL: Deep Learning
SVM: Support Vector Machine
CNN: Convolutional Neural Network
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RoI: Region of Interest
GLCM: Gray-Level Co-occurrence Matrix
YOLO: You Only Look Once
IoU: Intersection over Union
AUC: Area Under the Curve
ROC: Receiver Operating Characteristic
ELM: Extreme Learning Machine
KNN: k-Nearest Neighbors
MFO: Moth Flame Optimization
PSO: Particle Swarm Optimization
ECA: Efficient Channel Attention
MLO: Mediolateral-Oblique
CC: Cranio-Caudal

References

  1. Ferlay, J.; Ervik, M.; Lam, F.; Laversanne, M.; Colombet, M.; Mery, L.; Piñeros, M.; Znaor, A.; Soerjomataram, I.; Bray, F. Global Cancer Observatory: Cancer Today. Available online: https://gco.iarc.who.int/today (accessed on 4 February 2025).
  2. Arnold, M.; Morgan, E.; Rumgay, H.; Mafra, A.; Singh, D.; Laversanne, M.; Vignat, J.; Gralow, J.R.; Cardoso, F.; Siesling, S.; et al. Current and Future Burden of Breast Cancer: Global Statistics for 2020 and 2040. Breast 2022, 66, 15–23. [Google Scholar] [CrossRef]
  3. American Cancer Society. Understanding a Breast Cancer Diagnosis. Available online: https://www.cancer.org/Cancer/Breast-Cancer/About/Types-of-Breast-Cancer.Html#References (accessed on 4 November 2024).
  4. Rezaei, Z. A Review on Image-Based Approaches for Breast Cancer Detection, Segmentation, and Classification. Expert. Syst. Appl. 2021, 182, 115204. [Google Scholar] [CrossRef]
  5. Meenalochini, G.; Ramkumar, S. A Deep Learning Based Breast Cancer Classification System Using Mammograms. J. Electr. Eng. Technol. 2024, 19, 2637–2650. [Google Scholar] [CrossRef]
  6. López-Úbeda, P.; Martín-Noguerol, T.; Paulano-Godino, F.; Luna, A. Comparative Evaluation of Image-Based vs. Text-Based vs. Multimodal AI Approaches for Automatic Breast Density Assessment in Mammograms. Comput. Methods Programs Biomed. 2024, 255, 108334. [Google Scholar] [CrossRef] [PubMed]
  7. Ranjbarzadeh, R.; Dorosti, S.; Jafarzadeh Ghoushchi, S.; Caputo, A.; Tirkolaee, E.B.; Ali, S.S.; Arshadi, Z.; Bendechache, M. Breast Tumor Localization and Segmentation Using Machine Learning Techniques: Overview of Datasets, Findings, and Methods. Comput. Biol. Med. 2023, 152, 106443. [Google Scholar] [CrossRef] [PubMed]
  8. Ramadan, S.Z. Methods Used in Computer-Aided Diagnosis for Breast Cancer Detection Using Mammograms: A Review. J. Healthc. Eng. 2020, 2020, 9162464. [Google Scholar] [CrossRef]
  9. Jalloul, R.; Chethan, H.K.; Alkhatib, R. A Review of Machine Learning Techniques for the Classification and Detection of Breast Cancer from Medical Images. Diagnostics 2023, 13, 2460. [Google Scholar] [CrossRef]
  10. Gao, Y.; Lin, J.; Zhou, Y.; Lin, R. The Application of Traditional Machine Learning and Deep Learning Techniques in Mammography: A Review. Front. Oncol. 2023, 13, 1213045. [Google Scholar] [CrossRef]
  11. Abhisheka, B.; Biswas, S.K.; Purkayastha, B. A Comprehensive Review on Breast Cancer Detection, Classification and Segmentation Using Deep Learning. Arch. Comput. Methods Eng. 2023, 30, 5023–5052. [Google Scholar] [CrossRef]
  12. Zebari, D.A.; Ibrahim, D.A.; Zeebaree, D.Q.; Haron, H.; Salih, M.S.; Damaševičius, R.; Mohammed, M.A. Systematic Review of Computing Approaches for Breast Cancer Detection Based Computer Aided Diagnosis Using Mammogram Images. Appl. Artif. Intell. 2021, 35, 2157–2203. [Google Scholar] [CrossRef]
  13. Loizidou, K.; Elia, R.; Pitris, C. Computer-Aided Breast Cancer Detection and Classification in Mammography: A Comprehensive Review. Comput. Biol. Med. 2023, 153, 106554. [Google Scholar] [CrossRef]
  14. Agrawal, S.; Oza, P.; Kakkar, R.; Tanwar, S.; Jetani, V.; Undhad, J.; Singh, A. Analysis and Recommendation System-Based on PRISMA Checklist to Write Systematic Review. Assess. Writ. 2024, 61, 100866. [Google Scholar] [CrossRef]
  15. Sahu, A.; Das, P.K.; Meher, S. Recent Advancements in Machine Learning and Deep Learning-Based Breast Cancer Detection Using Mammograms. Phys. Medica 2023, 114, 103138. [Google Scholar] [CrossRef] [PubMed]
  16. Wang, Z.; Li, M.; Wang, H.; Jiang, H.; Yao, Y.; Zhang, H.; Xin, J. Breast Cancer Detection Using Extreme Learning Machine Based on Feature Fusion with CNN Deep Features. IEEE Access 2019, 7, 105146–105158. [Google Scholar] [CrossRef]
  17. Avcı, H.; Karakaya, J. A Novel Medical Image Enhancement Algorithm for Breast Cancer Detection on Mammography Images Using Machine Learning. Diagnostics 2023, 13, 348. [Google Scholar] [CrossRef]
  18. Ketabi, H.; Ekhlasi, A.; Ahmadi, H. A Computer-Aided Approach for Automatic Detection of Breast Masses in Digital Mammogram via Spectral Clustering and Support Vector Machine. Phys. Eng. Sci. Med. 2021, 44, 277–290. [Google Scholar] [CrossRef]
  19. Sha, Z.; Hu, L.; Rouyendegh, B.D. Deep Learning and Optimization Algorithms for Automatic Breast Cancer Detection. Int. J. Imaging Syst. Technol. 2020, 30, 495–506. [Google Scholar] [CrossRef]
  20. Sannasi Chakravarthy, S.R.; Bharanidharan, N.; Rajaguru, H. Deep Learning-Based Metaheuristic Weighted K-Nearest Neighbor Algorithm for the Severity Classification of Breast Cancer. IRBM 2023, 44, 100749. [Google Scholar] [CrossRef]
  21. Muduli, D.; Dash, R.; Majhi, B. Fast Discrete Curvelet Transform and Modified PSO Based Improved Evolutionary Extreme Learning Machine for Breast Cancer Detection. Biomed. Signal Process. Control 2021, 70, 102919. [Google Scholar] [CrossRef]
  22. Thawkar, S.; Ingolikar, R. Classification of Masses in Digital Mammograms Using Biogeography-Based Optimization Technique. J. King Saud Univ.-Comput. Inf. Sci. 2020, 32, 1140–1148. [Google Scholar] [CrossRef]
  23. Mannarsamy, V.; Mahalingam, P.; Kalivarathan, T.; Amutha, K.; Paulraj, R.K.; Ramasamy, S. Sift-BCD: SIFT-CNN Integrated Machine Learning-Based Breast Cancer Detection. Biomed. Signal Process. Control 2025, 106, 107686. [Google Scholar] [CrossRef]
  24. Mohanty, F.; Rup, S.; Dash, B.; Majhi, B.; Swamy, M.N.S. An Improved Scheme for Digital Mammogram Classification Using Weighted Chaotic Salp Swarm Algorithm-Based Kernel Extreme Learning Machine. Appl. Soft Comput. J. 2020, 91, 106266. [Google Scholar] [CrossRef]
  25. Thawkar, S.; Ingolikar, R. Classification of Masses in Digital Mammograms Using the Genetic Ensemble Method. J. Intell. Syst. 2020, 29, 831–845. [Google Scholar] [CrossRef]
  26. Ragab, D.A.; Sharkas, M.; Attallah, O. Breast Cancer Diagnosis Using an Efficient CAD System Based on Multiple Classifiers. Diagnostics 2019, 9, 165. [Google Scholar] [CrossRef] [PubMed]
  27. Kayode, A.A.; Akande, N.O.; Adegun, A.A.; Adebiyi, M.O. An Automated Mammogram Classification System Using Modified Support Vector Machine. Med. Devices Evid. Res. 2019, 12, 275–284. [Google Scholar] [CrossRef]
  28. Han, B.; Sun, L.; Li, C.; Yu, Z.; Jiang, W.; Liu, W.; Tao, D.; Liu, B. Deep Location Soft-Embedding-Based Network with Regional Scoring for Mammogram Classification. IEEE Trans. Med. Imaging 2024, 43, 3137–3148. [Google Scholar] [CrossRef]
  29. Anas, M.; Haq, I.U.; Husnain, G.; Jaffery, S.A.F. Advancing Breast Cancer Detection: Enhancing YOLOv5 Network for Accurate Classification in Mammogram Images. IEEE Access 2024, 12, 16474–16488. [Google Scholar] [CrossRef]
  30. Liu, W.; Shu, X.; Zhang, L.; Li, D.; Lv, Q. Deep Multiscale Multi-Instance Networks with Regional Scoring for Mammogram Classification. IEEE Trans. Artif. Intell. 2022, 3, 485–496. [Google Scholar] [CrossRef]
  31. Shu, X.; Zhang, L.; Wang, Z.; Lv, Q.; Yi, Z. Deep Neural Networks with Region-Based Pooling Structures for Mammographic Image Classification. IEEE Trans. Med. Imaging 2020, 39, 2246–2255. [Google Scholar] [CrossRef]
  32. Hamed, G.; Marey, M.; Amin, S.; Tolba, M.F. Automated Breast Cancer Detection and Classification in Full Field Digital Mammograms Using Two Full and Cropped Detection Paths Approach. IEEE Access 2021, 9, 116898–116913. [Google Scholar] [CrossRef]
  33. Nasir Khan, H.; Shahid, A.R.; Raza, B.; Dar, A.H.; Alquhayz, H. Multi-View Feature Fusion Based Four Views Model for Mammogram Classification Using Convolutional Neural Network. IEEE Access 2019, 7, 165724–165733. [Google Scholar] [CrossRef]
  34. Le, T.L.; Bui, M.H.; Nguyen, N.C.; Ha, M.T.; Nguyen, A.; Nguyen, H.P. Transfer Learning for Deep Neural Networks-Based Classification of Breast Cancer X-Ray Images. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2024, 12, 2275708. [Google Scholar] [CrossRef]
  35. Basha, A.A.; Vivekanandan, S.; Mubarakali, A.; Alqahtani, A.S. Enhanced Mammogram Classification with Convolutional Neural Network: An Improved Algorithm for Automated Breast Cancer Detection. Measurement 2023, 221, 113551. [Google Scholar] [CrossRef]
  36. Lou, Q.; Li, Y.; Qian, Y.; Lu, F.; Ma, J. Mammogram Classification Based on a Novel Convolutional Neural Network with Efficient Channel Attention. Comput. Biol. Med. 2022, 150, 106082. [Google Scholar] [CrossRef]
  37. Viegas, L.; Domingues, I.; Mendes, M. Study on Data Partition for Delimitation of Masses in Mammography. J. Imaging 2021, 7, 174. [Google Scholar] [CrossRef]
  38. Mohammed, A.D.; Ekmekci, D. Breast Cancer Diagnosis Using YOLO-Based Multiscale Parallel CNN and Flattened Threshold Swish. Appl. Sci. 2024, 14, 2680. [Google Scholar] [CrossRef]
  39. Salh, C.H.; Ali, A.M. Unveiling Breast Tumor Characteristics: A ResNet152V2 and Mask R-CNN Based Approach for Type and Size Recognition in Mammograms. Trait. Du Signal 2023, 40, 1821–1832. [Google Scholar] [CrossRef]
  40. Prodan, M.; Paraschiv, E.; Stanciu, A. Applying Deep Learning Methods for Mammography Analysis and Breast Cancer Detection. Appl. Sci. 2023, 13, 4272. [Google Scholar] [CrossRef]
  41. Kumbhare, S.; Kathole, A.B.; Shinde, S. Federated Learning Aided Breast Cancer Detection with Intelligent Heuristic-Based Deep Learning Framework. Biomed. Signal Process. Control 2023, 86, 105080. [Google Scholar] [CrossRef]
  42. Ayana, G.; Dese, K.; Dereje, Y.; Kebede, Y.; Barki, H.; Amdissa, D.; Husen, N.; Mulugeta, F.; Habtamu, B.; Choe, S.W. Vision-Transformer-Based Transfer Learning for Mammogram Classification. Diagnostics 2023, 13, 178. [Google Scholar] [CrossRef] [PubMed]
  43. Jiang, J.; Peng, J.; Hu, C.; Jian, W.; Wang, X.; Liu, W. Breast Cancer Detection and Classification in Mammogram Using a Three-Stage Deep Learning Framework Based on PAA Algorithm. Artif. Intell. Med. 2022, 134, 102419. [Google Scholar] [CrossRef]
  44. Ibrokhimov, B.; Kang, J.Y. Two-Stage Deep Learning Method for Breast Cancer Detection Using High-Resolution Mammogram Images. Appl. Sci. 2022, 12, 4616. [Google Scholar] [CrossRef]
  45. Adedigba, A.P.; Adeshina, S.A.; Aibinu, A.M. Performance Evaluation of Deep Learning Models on Mammogram Classification Using Small Dataset. Bioengineering 2022, 9, 161. [Google Scholar] [CrossRef] [PubMed]
  46. Alruwaili, M.; Gouda, W. Automated Breast Cancer Detection Models Based on Transfer Learning. Sensors 2022, 22, 876. [Google Scholar] [CrossRef]
  47. Maqsood, S.; Damaševičius, R.; Maskeliunas, R. TTCNN: A Breast Cancer Detection and Classification towards Computer-Aided Diagnosis Using Digital Mammography in Early Stages. Appl. Sci. 2022, 12, 3273. [Google Scholar] [CrossRef]
  48. Montaha, S.; Azam, S.; Rakibul Haque Rafid, A.K.M.; Ghosh, P.; Hasan, M.Z.; Jonkman, M.; De Boer, F. BreastNet18: A High Accuracy Fine-Tuned VGG16 Model Evaluated Using Ablation Study for Diagnosing Breast Cancer from Enhanced Mammography Images. Biology 2021, 10, 1347. [Google Scholar] [CrossRef]
  49. Xie, L.; Zhang, L.; Hu, T.; Huang, H.; Yi, Z. Neural Networks Model Based on an Automated Multi-Scale Method for Mammogram Classification. Knowl. Based Syst. 2020, 208, 106465. [Google Scholar] [CrossRef]
  50. Prinzi, F.; Insalaco, M.; Orlando, A.; Gaglio, S.; Vitabile, S. A Yolo-Based Model for Breast Cancer Detection in Mammograms. Cogn. Comput. 2024, 16, 107–120. [Google Scholar] [CrossRef]
  51. Sathesh Raaj, R. Breast Cancer Detection and Diagnosis Using Hybrid Deep Learning Architecture. Biomed. Signal Process. Control 2023, 82, 104558. [Google Scholar] [CrossRef]
  52. Ahmad, J.; Akram, S.; Jaffar, A.; Rashid, M.; Bhatti, S.M. Breast Cancer Detection Using Deep Learning: An Investigation Using the DDSM Dataset and a Customized AlexNet and Support Vector Machine. IEEE Access 2023, 11, 108386–108397. [Google Scholar] [CrossRef]
  53. Berghouse, M.; Bebis, G.; Tavakkoli, A. Exploring the Influence of Attention for Whole-Image Mammogram Classification. Image Vis. Comput. 2024, 147, 105062. [Google Scholar] [CrossRef]
  54. Yan, F.; Huang, H.; Pedrycz, W.; Hirota, K. Automated Breast Cancer Detection in Mammography Using Ensemble Classifier and Feature Weighting Algorithms. Expert. Syst. Appl. 2023, 227, 120282. [Google Scholar] [CrossRef]
  55. Sureshkumar, V.; Prasad, R.S.N.; Balasubramaniam, S.; Jagannathan, D.; Daniel, J.; Dhanasekaran, S. Breast Cancer Detection and Analytics Using Hybrid CNN and Extreme Learning Machine. J. Pers. Med. 2024, 14, 792. [Google Scholar] [CrossRef] [PubMed]
  56. Kalpana, P.; Selvy, P.T. A Novel Machine Learning Model for Breast Cancer Detection Using Mammogram Images. Med. Biol. Eng. Comput. 2024, 62, 2247–2264. [Google Scholar] [CrossRef] [PubMed]
  57. Chakravarthy, S.; Bharanidharan, N.; Khan, S.B.; Kumar, V.V.; Mahesh, T.R.; Almusharraf, A.; Albalawi, E. Multi-Class Breast Cancer Classification Using CNN Features Hybridization. Int. J. Comput. Intell. Syst. 2024, 17, 191. [Google Scholar] [CrossRef]
  58. Luong, H.H.; Vo, M.D.; Phan, H.P.; Dinh, T.A.; Nguyen, L.Q.T.; Tran, Q.T.; Thai-Nghe, N.; Nguyen, H.T. Improving Breast Cancer Prediction via Progressive Ensemble and Image Enhancement. Multimed. Tools Appl. 2024, 84, 8623–8650. [Google Scholar] [CrossRef]
  59. Huynh, H.N.; Tran, A.T.; Tran, T.N. Region-of-Interest Optimization for Deep-Learning-Based Breast Cancer Detection in Mammograms. Appl. Sci. 2023, 13, 6894. [Google Scholar] [CrossRef]
  60. Azevedo, V.; Silva, C.; Dutra, I. Quantum Transfer Learning for Breast Cancer Detection. Quantum Mach. Intell. 2022, 4, 5. [Google Scholar] [CrossRef]
  61. Lim, T.S.; Tay, K.G.; Huong, A.; Lim, X.Y. Breast Cancer Diagnosis System Using Hybrid Support Vector Machine-Artificial Neural Network. Int. J. Electr. Comput. Eng. 2021, 11, 3059–3069. [Google Scholar] [CrossRef]
  62. Zhang, Y.D.; Satapathy, S.C.; Guttery, D.S.; Górriz, J.M.; Wang, S.H. Improved Breast Cancer Classification Through Combining Graph Convolutional Network and Convolutional Neural Network. Inf. Process Manag. 2021, 58, 102439. [Google Scholar] [CrossRef]
  63. Chouhan, N.; Khan, A.; Shah, J.Z.; Hussnain, M.; Khan, M.W. Deep Convolutional Neural Network and Emotional Learning Based Breast Cancer Detection Using Digital Mammography. Comput. Biol. Med. 2021, 132, 104318. [Google Scholar] [CrossRef] [PubMed]
  64. Altameem, A.; Mahanty, C.; Poonia, R.C.; Saudagar, A.K.J.; Kumar, R. Breast Cancer Detection in Mammography Images Using Deep Convolutional Neural Networks and Fuzzy Ensemble Modeling Techniques. Diagnostics 2022, 12, 1812. [Google Scholar] [CrossRef]
  65. Deshmukh, J.; Bhosle, U. A Study of Mammogram Classification Using AdaBoost with Decision Tree, KNN, SVM and Hybrid SVM-KNN as Component Classifiers. J. Inf. Hiding Multimed. Signal Process. 2018, 9, 548–557. [Google Scholar]
  66. Niranjana, R.; Ravi, A.; Sivadasan, J. Performance Analysis of Novel Hybrid Deep Learning Model IEU Net++ for Multiclass Categorization of Breast Mammogram Images. Biomed. Signal Process. Control 2025, 105, 107607. [Google Scholar] [CrossRef]
  67. Muduli, D.; Dash, R.; Majhi, B. Automated Breast Cancer Detection in Digital Mammograms: A Moth Flame Optimization Based ELM Approach. Biomed. Signal Process. Control 2020, 59, 101912. [Google Scholar] [CrossRef]
  68. Vijayarajeswari, R.; Parthasarathy, P.; Vivekanandan, S.; Basha, A.A. Classification of Mammogram for Early Detection of Breast Cancer Using SVM Classifier and Hough Transform. Measurement 2019, 146, 800–805. [Google Scholar] [CrossRef]
  69. Falconi, L.G.; Perez, M.; Aguilar, W.G.; Conci, A. Transfer Learning and Fine Tuning in Breast Mammogram Abnormalities Classification on CBIS-DDSM Database. Adv. Sci. Technol. Eng. Syst. 2020, 5, 154–165. [Google Scholar] [CrossRef]
  70. Logan, J.; Kennedy, P.J.; Catchpoole, D. A Review of the Machine Learning Datasets in Mammography, Their Adherence to the FAIR Principles and the Outlook for the Future. Sci. Data 2023, 10, 595. [Google Scholar] [CrossRef] [PubMed]
  71. Chris, C.; Felipe, K.; George, P.; Jayashree, K.-C.; John, M.; Katherine, A.; Lavender; Maryam, V.; Michelle, R.; Robyn, B.; et al. RSNA Screening Mammography Breast Cancer Detection. Available online: https://kaggle.com/competitions/rsna-breast-cancer-detection (accessed on 24 October 2024).
  72. Chakraborty, S.; Das, H. Performance Analysis of Feature Extraction Techniques for Medical Data Classification. In Proceedings of the Advances in Power Systems and Energy Management; Priyadarshi, N., Padmanaban, S., Ghadai, R.K., Panda, A.R., Patel, R., Eds.; Springer Nature: Singapore, 2021; pp. 387–401. [Google Scholar]
  73. Hassooni, A.J.; Naser, M.A.; Al-Mamory, S.O. A Proposed Method for Feature Extraction to Enhance Classification Algorithms Performance. In Proceedings of the New Trends in Information and Communications Technology Applications; Al-Bakry, A.M., Al-Mamory, S.O., Sahib, M.A., Hasan, H.S., Oreku, G.S., Nayl, T.M., Al-Dhaibani, J.A., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 157–166. [Google Scholar]
  74. Mishra, S.; Prakash, M. Digital Mammogram Inferencing System Using Intuitionistic Fuzzy Theory. Comput. Syst. Sci. Eng. 2022, 41, 1099–1115. [Google Scholar] [CrossRef]
  75. Shinde, V.D.; Rao, B.T. A Novel Approach to Mammogram Classification Using Spatio-Temporal and Texture Feature Extraction Using Dictionary Based Sparse Representation Classifier. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 320–332. [Google Scholar] [CrossRef]
  76. Pattnaik, R.K.; Siddique, M.; Mishra, S.; Gelmecha, D.J.; Singh, R.S.; Satapathy, S. Breast Cancer Detection and Classification Using Metaheuristic Optimized Ensemble Extreme Learning Machine. Int. J. Inf. Technol. 2023, 15, 4551–4563. [Google Scholar] [CrossRef]
  77. Gudhe, N.R.; Behravan, H.; Sudah, M.; Okuma, H.; Vanninen, R.; Kosma, V.M.; Mannermaa, A. Area-Based Breast Percentage Density Estimation in Mammograms Using Weight-Adaptive Multitask Learning. Sci. Rep. 2022, 12, 12060. [Google Scholar] [CrossRef] [PubMed]
  78. Thakur, N.; Kumar, P.; Kumar, A. Reinforcement Learning (RL)-Based Semantic Segmentation and Attention Based Backpropagation Convolutional Neural Network (ABB-CNN) for Breast Cancer Identification and Classification Using Mammogram Images. Neural Comput. Appl. 2024, 36, 14797–14823. [Google Scholar] [CrossRef]
  79. Wei, T.; Aviles-Rivero, A.I.; Wang, S.; Huang, Y.; Gilbert, F.J.; Schönlieb, C.B.; Chen, C.W. Beyond Fine-Tuning: Classifying High Resolution Mammograms Using Function-Preserving Transformations. Med. Image Anal. 2022, 82, 102618. [Google Scholar] [CrossRef] [PubMed]
  80. Anari, S.; Sadeghi, S.; Sheikhi, G.; Ranjbarzadeh, R.; Bendechache, M. Explainable Attention Based Breast Tumor Segmentation Using a Combination of UNet, ResNet, DenseNet, and EfficientNet Models. Sci. Rep. 2025, 15, 1027. [Google Scholar] [CrossRef] [PubMed]
  81. Saber, A.; Sakr, M.; Abo-Seida, O.M.; Keshk, A.; Chen, H. A Novel Deep-Learning Model for Automatic Detection and Classification of Breast Cancer Using the Transfer-Learning Technique. IEEE Access 2021, 9, 71194–71209. [Google Scholar] [CrossRef]
  82. Sushanki, S.; Bhandari, A.K.; Singh, A.K. A Review on Computational Methods for Breast Cancer Detection in Ultrasound Images Using Multi-Image Modalities. Arch. Comput. Methods Eng. 2024, 31, 1277–1296. [Google Scholar] [CrossRef]
  83. Sahu, A.; Das, P.K.; Meher, S. An Efficient Deep Learning Scheme to Detect Breast Cancer Using Mammogram and Ultrasound Breast Images. Biomed. Signal Process. Control 2024, 87, 105377. [Google Scholar] [CrossRef]
  84. Atrey, K.; Singh, B.K.; Bodhey, N.K. Integration of Ultrasound and Mammogram for Multimodal Classification of Breast Cancer Using Hybrid Residual Neural Network and Machine Learning. Image Vis. Comput. 2024, 145, 104987. [Google Scholar] [CrossRef]
  85. Xu, C.; Qi, Y.; Wang, Y.; Lou, M.; Pi, J.; Ma, Y. ARF-Net: An Adaptive Receptive Field Network for Breast Mass Segmentation in Whole Mammograms and Ultrasound Images. Biomed. Signal Process. Control 2022, 71, 103178. [Google Scholar] [CrossRef]
  86. Mann, R.M.; Cho, N.; Moy, L. Breast MRI: State of the Art. Radiology 2019, 292, 520–536. [Google Scholar] [CrossRef] [PubMed]
  87. Hussain, S.; Ali, M.; Naseem, U.; Avalos, D.B.A.; Cardona-Huerta, S.; Tamez-Pena, J.G. Multiview Multimodal Feature Fusion for Breast Cancer Classification Using Deep Learning. IEEE Access 2024, 13, 9265–9275. [Google Scholar] [CrossRef]
  88. Qasem, A.; Sahran, S.; Abdullah, S.N.H.S.; Albashish, D.; Hussain, R.I.; Arasaratnam, S. Heterogeneous Ensemble Pruning Based on Bee Algorithm for Mammogram Classification. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 231–239. [Google Scholar] [CrossRef]
  89. Chen, F.Y.; Hsu, Y.J.; Lu, C.H.; Shuai, H.H.; Yeh, L.Y.; Shen, C.Y. Compressing Deep Neural Networks with Goal-Specific Pruning and Self-Distillation. ACM Trans. Knowl. Discov. Data 2025, 19, 1–27. [Google Scholar] [CrossRef]
  90. Dinsdale, N.K.; Jenkinson, M.; Namburete, A.I.L. STAMP: Simultaneous Training and Model Pruning for Low Data Regimes in Medical Image Segmentation. Med. Image Anal. 2022, 81, 102583. [Google Scholar] [CrossRef] [PubMed]
  91. Blott, M.; Fraser, N.J.; Gambardella, G.; Halder, L.; Kath, J.; Neveu, Z.; Umuroglu, Y.; Vasilciuc, A.; Leeser, M.; Doyle, L. Evaluation of Optimized CNNs on Heterogeneous Accelerators Using a Novel Benchmarking Approach. IEEE Trans. Comput. 2021, 70, 1654–1669. [Google Scholar] [CrossRef]
  92. Wang, X.; Liang, G.; Zhang, Y.; Blanton, H.; Bessinger, Z.; Jacobs, N. Inconsistent Performance of Deep Learning Models on Mammogram Classification. J. Am. Coll. Radiol. 2020, 17, 796–803. [Google Scholar] [CrossRef]
  93. Barba, D.; León-Sosa, A.; Lugo, P.; Suquillo, D.; Torres, F.; Surre, F.; Trojman, L.; Caicedo, A. Breast Cancer, Screening and Diagnostic Tools: All You Need to Know. Crit. Rev. Oncol. Hematol. 2021, 157, 103174. [Google Scholar] [CrossRef]
  94. Santos, C.S.; Amorim-Lopes, M. Externally Validated and Clinically Useful Machine Learning Algorithms to Support Patient-Related Decision-Making in Oncology: A Scoping Review. BMC Med. Res. Methodol. 2025, 25, 45. [Google Scholar] [CrossRef]
  95. Zhang, B.; Rigall, E.; Huang, Y.; Zou, X.; Zhang, S.; Dong, J.; Yu, H. A Method for Breast Mass Segmentation Using Image Augmentation with SAM and Receptive Field Expansion. In Proceedings of the ACM International Conference Proceeding Series; Association for Computing Machinery: New York, NY, USA, 2023; pp. 387–394. [Google Scholar]
  96. Wang, Z.; Wu, Z.; Agarwal, D.; Sun, J. MedCLIP: Contrastive Learning from Unpaired Medical Images and Text. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, 7–11 December 2022; Association for Computational Linguistics (ACL): Kerrville, TX, USA, 2022; pp. 3876–3887. [Google Scholar]
Figure 1. Classification of the breast. Green circles indicate benign regions of interest (RoI), and red circles delineate malignant lesions. The four panels illustrate the categories: Normal, Benign, Malignant, and Malignant + Benign.
Figure 2. PRISMA flowchart of study selection. Gray bands indicate PRISMA stages; arrows show the flow of records; n = number of records.
Figure 3. Taxonomy of classification methods in mammography.
Figure 4. ELM-based classification [21]. Adapted from Muduli et al., “Fast discrete curvelet transform and modified PSO based improved evolutionary extreme learning machine for breast cancer detection”, Biomedical Signal Processing and Control, Volume 70, September 2021, 102919. Copyright © 2021 Elsevier.
Figure 5. A YOLO-based model for breast cancer detection [50]. Legend: blue arrows = data split to YOLO-based models (train/val/test); red arrows = training/augmentation flow; green arrows = test/inference flow; dashed red/green = cross-set links; dashed blue box = feature-reduction block; cyan call-out = detected RoI. Adapted from Prinzi et al., “A YOLO-Based Model for Breast Cancer Detection in Mammograms”, Cognitive Computation, Volume 16, pages 107–120, 2024. Copyright © 2023 by the authors. Published by Springer Nature under a Creative Commons Attribution License.
Figure 6. The combined CNN model [57]. Reprinted from Chakravarthy et al., “Multi-class Breast Cancer Classification Using CNN Features Hybridization”, International Journal of Computational Intelligence Systems, 17, 191 (2024). Copyright © 2024 by the authors. Published by Springer Nature under a Creative Commons Attribution 4.0 License.
Figure 7. Distribution of classifiers.
Table 1. Machine learning-based classification.

| No. | Ref. | Datasets | Segmentation Method | Feature Extraction and Selection | Classification Methods | Result | Limitation |
|---|---|---|---|---|---|---|---|
| 1 | [16] | 400 private mammogram images | Not applicable | Morphological, texture, and density features | Extreme Learning Machine (ELM) | Benign and malignant categories: accuracy 96.2%, sensitivity 95.8%, specificity 96.6% | Manual extraction of features (such as morphological features) can still be prone to human error or variability based on expert experience. |
| 2 | [17] | Mammographic Image Analysis Society (MIAS) | k-means clustering algorithm | Gray Level Co-occurrence Matrix (GLCM) and Gray Level Run Length Matrix (GLRLM) | SVM, RF, ANN, k-NN, Naive Bayes (NB), DT | Normal vs. abnormal and benign vs. malignant: SVM, RF, and neural networks showed the best performance | Reliance on the relatively small mini-MIAS dataset potentially limits the generalizability of the results to larger datasets or more diverse populations. |
| 3 | [18] | DDSM | Spectral clustering | Extraction: GLCM; selection: Genetic Algorithm (GA) | SVM (mass vs. non-mass) | Sensitivity 89.5%, specificity 91.2%, accuracy 90% | Spectral clustering may struggle with complex mass boundaries and overlapping tissues, reducing accuracy in highly heterogeneous breast images. |
| 4 | [19] | MIAS, DDSM | CNN optimized by the Grasshopper Optimization Algorithm (GOA) | Geometric, texture, and statistical features; GOA used for feature selection | SVM | Sensitivity 96%, specificity 93%, accuracy 92% | Learning time is relatively high due to the large number of iterations required for optimization. |
| 5 | [20] | MIAS, INbreast | Not applicable | Extraction: ResNet18 (512 features per mammogram); selection: PSO, DFOA, CSOA | Weighted k-Nearest Neighbor (wKNN) | Benign vs. malignant: MIAS 84.35% (CSOA-wKNN); INbreast 83.19% (CSOA-wKNN) | Computational complexity of the metaheuristics, particularly DFOA, which requires significant parameter tuning and converges more slowly than PSO and CSOA. |
| 6 | [21] | MIAS, DDSM, INbreast | Not applicable | Extraction: fast discrete curvelet transform (FDCT-WRP); selection: PCA and LDA | Extreme Learning Machine (ELM) | Benign vs. malignant: MIAS 100%, DDSM 98.94%, INbreast 98.76% | Model complexity, particularly the computational cost of feature extraction and optimization. |
| 7 | [22] | DDSM | Not applicable | Extraction: intensity-, texture-, and shape-based features; selection: Biogeography-Based Optimization (BBO) | Adaptive Neuro-Fuzzy Inference System (ANFIS), ANN | Benign vs. malignant: accuracy 98.92%, sensitivity 99.10%, specificity 98.72% | Tested on a relatively small dataset; other advanced classifiers or deeper neural networks were not explored. |
| 8 | [23] | CBIS-DDSM | RoI-based U-Net | Deep learning-based Scale Invariant Feature Transform (SIFT) | Fuzzy Decision Tree (FDT) | Normal, benign, malignant: accuracy 99.2%, sensitivity 95.36%, specificity 97.41% | Relies on a handcrafted feature (SIFT), which may limit performance; generalizability restricted since only tested on CBIS-DDSM. |
| 9 | [24] | MIAS, DDSM, BCDR | Not applicable | Extraction: Discrete Wavelet Packet Transform (DWPT); selection: PCA and Weighted Chaotic Salp Swarm Algorithm (WC-SSA) | Kernel Extreme Learning Machine (KELM) | Normal vs. abnormal: MIAS 99.62%, DDSM 99.92%; benign vs. malignant: MIAS 99.28%, DDSM 99.63% | Combining wavelet-based feature extraction with the complex WC-SSA optimization adds computational overhead. |
| 10 | [25] | DDSM | Not applicable | Extraction: intensity-, texture-, and shape-based features; selection: Genetic Algorithm (GA) | AdaBoost, RF, DT | Benign vs. malignant: AdaBoost 96.15%; Random Forest 92.70% | Genetic algorithms combined with ensemble methods increase computational complexity. |
| 11 | [26] | MIAS, Digital Mammography DREAM Challenge | Not applicable | Extraction: statistical features; selection: best-first search | k-NN; decision trees (J48, Random Forest, Random Tree) | Normal vs. abnormal: AdaBoost with J48, Decision Tree, and Random Forest reached 100% accuracy, AUC 1.000 (MIAS) | Despite data augmentation, abnormal samples remained limited compared to normal ones. |
| 12 | [27] | MIAS | Thresholding-based segmentation | Extraction: GLCM; selection: Genetic Algorithm (GA) | Modified SVM | Accuracy 96.34%; sensitivity 94.28% (correct detection of malignant cases) | Relatively small dataset (322 images), which could limit generalizability to larger or more diverse datasets. |
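To make the classical pipeline in Table 1 concrete, the sketch below pairs GLCM texture descriptors with an SVM, in the spirit of studies such as [17,27]. It is illustrative only: the RoI list `rois` and label vector `labels` are hypothetical placeholders, and the reviewed studies additionally apply segmentation, feature selection, and hyperparameter optimization.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def glcm_features(roi_u8):
    """Texture descriptors from one grayscale (uint8) region of interest."""
    glcm = graycomatrix(roi_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical inputs: `rois` is a list of cropped uint8 RoIs and
# `labels` holds 0 (benign) / 1 (malignant) per RoI.
# X = np.vstack([glcm_features(r) for r in rois])
# clf = SVC(kernel="rbf", C=10, gamma="scale")
# print(cross_val_score(clf, X, labels, cv=5).mean())
```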
Table 2. Deep learning-based classification approaches.

| No. | Ref. | Datasets | Segmentation Method | Feature Extraction | Classification Methods | Result | Limitation |
|---|---|---|---|---|---|---|---|
| 1 | [28] | CBIS-DDSM, INbreast | Otsu thresholding | CNN: DenseNet-169 | Deep Location Soft-Embedding-Based Network with Regional Scoring (DLSEN-RS) | Benign vs. malignant: INbreast 91.5%; CBIS-DDSM 89.4% | Determining the optimal k value is difficult: if k is too small, important information may be missed; if too large, redundant and irrelevant features are selected, reducing performance. |
| 2 | [29] | INbreast, CBIS-DDSM, BNS | Mask R-CNN | YOLOv5 and Mask R-CNN | Mask R-CNN | Benign vs. malignant: false positive rate 0.049%; false negative rate 0.029% | Computational complexity of training both YOLOv5 and Mask R-CNN. |
| 3 | [30] | INbreast, HX (private dataset) | Otsu segmentation | Pretrained DenseNet | Deep Multiscale Multi-Instance Networks | Benign vs. malignant: INbreast 93.2%; HX 87.2% | The choice of k, which determines how many patches are selected for classification; a suboptimal k could include irrelevant regions and reduce performance. |
| 4 | [31] | INbreast, CBIS-DDSM | Otsu segmentation | DenseNet, ImageNet | DenseNet, ImageNet | Benign vs. malignant: INbreast 91.6%; CBIS-DDSM 83.9% | Dividing the image into many smaller regions and calculating probabilities for each increases processing time and computational resources. |
| 5 | [32] | INbreast | Not applicable | YOLO-V4 | ResNet, VGG, InceptionNet | Benign vs. malignant: Inception-V3 accuracy 98% | Significant compute is needed for large mammograms and cropped slices, which may slow real-time applications. |
| 6 | [33] | MIAS, CBIS-DDSM | Not applicable | CNN | Various CNN architectures (VGGNet, ResNet, GoogLeNet) | AUC 0.932 for mass vs. calcification and 0.84 for malignant vs. benign | Computational complexity of fusing multi-view data. |
| 7 | [34] | DDSM, Hanoi Medical University (HMU) dataset | Not applicable | ResNet-34 | ResNet-34 | Normal, benign, malignant: macAUC 0.766 | Limited availability of annotated mammogram datasets restricts fine-tuning and may limit generalization. |
| 8 | [35] | Unknown mammogram dataset | Morphological clustering, dilation, and erosion | Gray Level Run Length Matrix (GLRLM) | Kernel-Based Convolutional Neural Network (KBCNN) | Benign vs. malignant accuracy 95% | Computational cost of KBCNN may hinder real-time processing. |
| 9 | [36] | INbreast | Not applicable | ECA-Net50 | Efficient Channel Attention (ECA-Net50) | Benign vs. malignant: accuracy 92.9%, precision 88.3% | Relies on INbreast, which is relatively small compared to large-scale datasets. |
| 10 | [37] | INbreast | Mask R-CNN | Not applicable | Mask R-CNN framework | Benign vs. malignant: true positive rate 0.936 ± 0.063 | Small INbreast dataset may not provide enough diversity to fully train modern deep learning models like Mask R-CNN. |
| 11 | [38] | CBIS-DDSM, INbreast | Not applicable | Parallel Feature Extraction Stem (PFES), Dense Connection Blocks (DCB), Inception Blocks (IB) | YOLO-based multiscale parallel CNN | INbreast accuracy 98.72% | The YOLO model can be biased toward smaller lesions due to its default loss function. |
| 12 | [39] | 510 digital mammograms from Erbil hospital (Zheen) | Mask R-CNN | ResNet152V2 | ResNet152V2 and Mask R-CNN | ResNet152V2: accuracy 100% for classifying breast density types and distinguishing normal vs. abnormal tissue | Difficulty detecting tumors in extremely dense breasts (types C and D), which may affect accuracy in these cases. |
| 13 | [40] | Radiological Society of North America (RSNA) | Not applicable | CNN-based architectures (ResNet18, ResNet34, ResNet152, EfficientNetB0, MaxViT) | Multiple pretrained models (ResNet, EfficientNet, MaxViT) | Normal vs. abnormal: ResNet18 94%; EfficientNetB0 95%; MaxViT 89% | Downscaling images to 256 × 256 and 512 × 512 pixels might reduce classification performance. |
| 14 | [41] | CBIS-DDSM | Not applicable | DenseNet | Enhanced Recurrent Neural Network (E-RNN) | Benign vs. malignant: accuracy 95%, Matthews Correlation Coefficient 91% | Difficulty balancing data across institutions in federated learning and optimizing client–server communication. |
| 15 | [42] | DDSM | Not applicable | Vision Transformers (ViT) | Vision Transformers | Benign vs. malignant: accuracy 1.00 ± 0 | Vision Transformers require high computational resources, particularly large variants such as ViT-Large. |
| 16 | [43] | CBIS-DDSM, INbreast, MIAS | Probabilistic Anchor Assignment (PAA), an anchor-free detection approach | EfficientNet-B3 backbone | Faster R-CNN and PAA | AUC for RoI classifier: 0.889 | Non-maximum suppression (NMS) and weighted box fusion (WBF) post-processing may still miss detections in dense breasts. |
| 17 | [44] | INbreast and private dataset | Square patches generated with a sliding window | Not applicable | Faster R-CNN | Mean average precision (mAP): 0.94 | Small patches might still miss certain lesions if not handled appropriately during training. |
| 18 | [45] | INbreast | Not applicable | DenseNet and AlexNet | DenseNet and AlexNet | Benign vs. malignant: DenseNet 99.8%; AlexNet 98.8% | Potential biases from the dataset's small size and class imbalance are not addressed. |
| 19 | [46] | MIAS | Not applicable | Not applicable | NASNet-Mobile (benign vs. malignant); Modified ResNet50 (MOD-RES) fine-tuned for breast masses | MOD-RES accuracy 89.5%; NASNet-Mobile 70% | Trained on a relatively small dataset, which might not generalize to larger and more diverse populations. |
| 20 | [47] | MIAS, DDSM, INbreast | Not applicable | CNN models: InceptionResNet-V2, Inception-V3, VGG-16, VGG-19, GoogLeNet, ResNet-18, ResNet-50, ResNet-101 | Transferable Texture Convolutional Neural Network (TTCNN) | Benign vs. malignant: DDSM 99.08%; INbreast 96.82%; MIAS 96.57% | Room remains for improving accuracy, especially for detecting smaller tumors at very early stages. |
| 21 | [48] | CBIS-DDSM | Not applicable | CNN: VGG16 | BreastNet18 (fine-tuned VGG16) | Benign vs. malignant: training accuracy 96.72%, validation 97.91%, test 98.02% | Relatively small dataset could lead to overfitting despite data augmentation. |
| 22 | [49] | INbreast | Not applicable | CNN: DenseNet121 and MobileNet | CNN: DenseNet121 and MobileNet | DenseNet 96.34%; MobileNet 95.12% | DenseNet performs well but is computationally expensive; MobileNet is more efficient but slightly weaker. |
| 23 | [50] | CBIS-DDSM, INbreast, private dataset from University Hospital “Paolo Giaccone”, Palermo, Italy | YOLO-based model | YOLO model | YOLO-based model | Benign vs. malignant: CBIS-DDSM mAP 0.498; INbreast mAP 0.835 | The proprietary dataset is heavily imbalanced (82.4% malignant lesions), which may affect generalization. |
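Many of the DL entries in Table 2 follow the same transfer-learning recipe: take an ImageNet-pretrained backbone, replace the classification head, and fine-tune on mammogram images. The PyTorch sketch below is a minimal, hedged illustration of that pattern rather than any single study's setup; the data loader, two-class head, and frozen backbone are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the ImageNet backbone; train only a new benign/malignant head.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # 2 classes assumed
model = model.to(device)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    """One fine-tuning pass; `loader` yields (image_batch, label_batch)."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```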
Table 3. Hybrid/ensemble-based classification approaches.

| No. | Ref. | Datasets | Segmentation Method | Feature Extraction | Classification Methods | Result | Limitation |
|---|---|---|---|---|---|---|---|
| 1 | [52] | DDSM | Not applicable | CNN: AlexNet | BreastNet + SVM | Benign vs. malignant: accuracy 99.16%, sensitivity 97.13%, specificity 99.30% | Performance may be affected by the choice of optimizer, with results varying with hyperparameter tuning. |
| 2 | [53] | CBIS-DDSM, INbreast | Not applicable | Not applicable | Attention mechanisms + CNN architectures (ResNet50, DenseNet169, RegNetX064) | CBIS-DDSM: DenseNet169 + attention module, AU-ROC 0.79; INbreast: DenseNet169 + Squeeze-and-Excitation (SE), AU-ROC 0.88 | Attention improved performance inconsistently across datasets and abnormality types; complex models with higher pooling tended to overfit smaller datasets, and mechanisms such as the convolutional bottleneck attention module (CBAM) were harder to optimize. |
| 3 | [54] | MIAS and DDSM | Thresholding and region growing | GLCM, shape, and margin features | Ensemble classifier combining KNN, bagging, and EigenClass algorithms | Normal, benign, malignant: DDSM 93.26%; MIAS 91% | Computational complexity due to the ensemble classification and feature weighting algorithms. |
| 4 | [55] | MIAS | Gabor filter | CNN model | CNN + Extreme Learning Machine (ELM) | Benign vs. malignant: hybrid CNN-ELM accuracy 86% on MIAS | Tested on a relatively small dataset (MIAS, 322 images), which may limit generalizability to larger datasets. |
| 5 | [56] | MIAS, INbreast, BCDR | Not applicable | Probabilistic Principal Component Analysis (PPCA) | Naïve Bayes + Firefly Binary Grey Optimization (FBGO); Transfer CNN (TCNN) + Moth Flame Lion Optimization (MMFLO) | Benign vs. malignant (MIAS): Naïve Bayes + FBGO 96.3%; TCNN + MMFLO 98% | High computational complexity of the ensemble, especially when combining the Naïve Bayes and TCNN models. |
| 6 | [57] | MIAS, INbreast, CBIS-DDSM | Not applicable | CNN architectures: VGG16, VGG19, ResNet50, DenseNet121 | CNN hybrid approach: VGG16, VGG19, ResNet50, DenseNet121 | Normal, benign, malignant: MIAS 98.70%; INbreast 98.83% | Computational complexity from combining multiple CNNs; slight difficulty discriminating malignant cases from normal and benign ones. |
| 7 | [58] | CBIS-DDSM | Not applicable | Pretrained models (ResNet-50, EfficientNet-B5, Xception) | ResNet-50 + Xception + EfficientNet-B5 | Mass vs. calcification: 91.36%; benign vs. malignant: 76.79% | Computational complexity from combining multiple CNN models. |
| 8 | [59] | VinDr-Mammo, DDSM, CMMD, CDD-CESM, BMCD, RSNA | YOLOX | EfficientNet and ConvNeXt | EfficientNet-B7 + ConvNeXt-101 | VinDr-Mammo 90%; CMMD 92%; BMCD 92% | Reliance on Grad-CAM for visualizing important RoI regions, which may produce noisy or incomplete heat maps. |
| 9 | [60] | Breast Cancer Digital Repository (BCDR) | Not applicable | Pretrained classical neural networks (ResNet18) | Pretrained classical models (ResNet18) + quantum neural networks | Normal vs. abnormal: 84% accuracy | Quantum devices are still early-stage (NISQ era); real devices reached 81% versus 84% on the quantum simulator. |
| 10 | [61] | MIAS | Not applicable | GLCM and statistical features | Two-stage hybrid: SVM for normal vs. abnormal, then ANN for benign vs. malignant | Hybrid SVM + ANN: 99.4% accuracy for normal vs. abnormal | Only 160 mammograms used for training and testing; the small dataset could introduce overfitting or bias. |
| 11 | [62] | MIAS | Not applicable | CNN + Graph Convolutional Network (GCN) | CNN + GCN | Sensitivity 96.20 ± 2.90%; specificity 96.00 ± 2.31%; accuracy 96.10 ± 1.60% | Imbalanced dataset (113 abnormal vs. 209 normal images) may affect generalizability. |
| 12 | [63] | IRMA | Not applicable | Statistical, Local Binary Pattern (LBP), taxonomic, and dynamic features | SVM + Emotional Learning inspired Ensemble Classifier (ELiEC) | Accuracy 80.54% | Hybrid features improved accuracy by only around 2–3%, leaving room for optimization. |
| 13 | [64] | BCDR, mini-MIAS, DDSM, INbreast | Not applicable | Deep CNNs: VGG-11, ResNet-164, DenseNet121, Inception V4 | Fuzzy ensemble modeling + deep CNNs (VGG-11, ResNet-164, DenseNet121, Inception V4) | Normal, benign, malignant: accuracy 99.32% | Using multiple CNNs in an ensemble increases computational complexity. |
| 14 | [65] | MIAS, DDSM | Not applicable | Extraction: GLCM; selection: semi-supervised k-means clustering | AdaBoost with multiple base classifiers: DT, KNN, SVM, and hybrid SVM-KNN | Benign vs. malignant (DDSM): AdaBoost with hybrid SVM-KNN, 90.625% accuracy | Relies on manual RoI segmentation, limiting applicability in fully automated systems. |
| 15 | [66] | MIAS, INbreast, CBIS-DDSM | IEUNet++ | Not applicable | Hybrid deep learning IEUNet++ | Normal, benign, malignant: INbreast accuracy 99.87%, sensitivity 99.77%, specificity 99.8% | Computationally expensive due to the ensemble of InceptionResNetV2 + EfficientNetB7. |
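A recurring hybrid pattern in Table 3 combines several base classifiers into a voting ensemble (e.g., [54,65]). The scikit-learn sketch below illustrates soft voting over SVM, random forest, and k-NN; the feature matrix `X` and labels `y` are assumed to come from an upstream extraction step such as GLCM, and the hyperparameters are placeholders.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ],
    voting="soft",  # average class probabilities rather than hard labels
)
# Hypothetical usage, given precomputed features X and labels y:
# ensemble.fit(X_train, y_train)
# print(ensemble.score(X_test, y_test))
```

Soft voting averages class probabilities, so a confident base learner can outvote two uncertain ones; this is one reason ensembles in Table 3 tend to be more robust than any single classifier.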
Table 4. Performance ranges of ML, DL, and hybrid methods in breast cancer classification.

| Method Approach | Accuracy Range | Sensitivity Range | Precision Range | Specificity Range |
|---|---|---|---|---|
| Machine learning | 82.42–100% (normal and abnormal classes) | 86.67–99.1% | 82.42–83.87% | 83.33–98.72% |
| Deep learning | 70–100% (normal and abnormal classes) | Not reported | 88.3–99.16% | Not reported |
| Hybrid | 74.96–99.87% (normal, benign, malignant) | 96.2–99.77% | Not reported | 96–99.8% |
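For reference, the metrics summarized in Table 4 follow directly from the binary confusion matrix. The snippet below shows the standard definitions on a toy prediction vector; the arrays are placeholders, not data from any reviewed study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])  # 0 = benign, 1 = malignant
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # recall on the malignant class
specificity = tn / (tn + fp)   # recall on the benign class
precision = tp / (tp + fp)
print(f"acc={accuracy:.2f} sens={sensitivity:.2f} "
      f"spec={specificity:.2f} prec={precision:.2f}")
```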
Table 5. Representative high-performing models by classification approach.

| Approach | Datasets | Pre-Processing | Feature Extraction | Feature Selection | Optimization | Classification | Accuracy |
|---|---|---|---|---|---|---|---|
| Machine learning [21] | MIAS, DDSM, INbreast | Not applied | Fast discrete curvelet transform (FDCT-WRP) | Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) | Modified Particle Swarm Optimization | Extreme Learning Machine (ELM) | 100% (benign and malignant classes) |
| Deep learning [42] | DDSM | Data balancing (augmentation), color jitter, gamma correction, salt-and-pepper noise | Vision Transformers with pretrained ImageNet weights | Not applied | Not applied | Vision Transformer | 100% (benign and malignant classes) |
| Hybrid/ensemble [66] | MIAS, CBIS-DDSM, INbreast | Segmentation: IEUNet++ | Not applied | Not applied | Not applied | InceptionResNetV2 + EfficientNetB7 + UNet (IEUNet++) | 99.87% (normal, benign, and malignant) |
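The top ML entry in Table 5 [21] relies on an Extreme Learning Machine, in which hidden-layer weights are set randomly and only the output weights are solved in closed form via a pseudo-inverse. The NumPy sketch below captures that core idea only; it deliberately omits the curvelet features, PCA/LDA reduction, and modified-PSO tuning used in the actual study.

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: random hidden layer, analytic output weights."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        d = X.shape[1]
        self.W = self.rng.standard_normal((d, self.n_hidden))  # random, never trained
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                       # hidden activations
        self.beta = np.linalg.pinv(H) @ y_onehot               # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Hypothetical usage with precomputed features and integer labels:
# elm = ELM(n_hidden=200).fit(X_train, np.eye(2)[y_train])
# y_hat = elm.predict(X_test)
```

Because only the output weights are fit, training reduces to a single pseudo-inverse, which is why ELM variants in Tables 1 and 5 are attractive when handcrafted features are already compact.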
Table 6. Mammogram datasets.

| Dataset | Total Images | Lesion Type | Image Category | Usage in This Review | Remarks |
|---|---|---|---|---|---|
| MIAS/mini-MIAS | 322 | All kinds | Normal, benign, malignant | 23 studies | Classic and widely used despite its small size |
| DDSM | 10,480 | Mass, calcification | Normal, benign, malignant | 23 studies | One of the earliest large datasets; image quality is relatively low |
| CBIS-DDSM | 3012 | Mass, calcification | Benign, malignant | 14 studies | Curated, more consistent version of DDSM |
| INbreast | 410 | All kinds | Normal, benign, malignant | 21 studies | High quality, with pixel-level annotations |
| BCDR | 7315 | All kinds | Normal, cancer | 4 studies | Includes RoI annotations |
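Because several datasets in Table 6 are small and class-imbalanced, studies typically partition them with stratified splitting so that class proportions are preserved in the training and test sets. A minimal sketch, assuming hypothetical `images` and `labels` arrays have already been loaded:

```python
from sklearn.model_selection import train_test_split

# `images` and `labels` stand in for loaded mammograms and their classes.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, stratify=labels, random_state=42
)
```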