Search Results (60)

Search Parameters:
Keywords = diagnostic mammogram

24 pages, 8171 KiB  
Article
Breast Cancer Image Classification Using Phase Features and Deep Ensemble Models
by Edgar Omar Molina Molina and Victor H. Diaz-Ramirez
Appl. Sci. 2025, 15(14), 7879; https://doi.org/10.3390/app15147879 - 15 Jul 2025
Viewed by 389
Abstract
Breast cancer is a leading cause of mortality among women worldwide. Early detection is crucial for increasing patient survival rates. Artificial intelligence, particularly convolutional neural networks (CNNs), has enabled the development of effective diagnostic systems by digitally processing mammograms. CNNs have been widely used for the classification of breast cancer in images, obtaining accurate results that are in many cases comparable to those of medical specialists. This work presents a hybrid feature extraction approach for breast cancer detection that employs variants of the EfficientNetV2 network and a convenient image representation based on phase features. First, a region of interest (ROI) is extracted from the mammogram. Next, a three-channel image is created using the local phase, amplitude, and orientation features of the ROI. A feature vector is constructed for the processed mammogram using the developed CNN model. The size of the feature vector is reduced using simple statistics, achieving a redundancy suppression of 99.65%. The reduced feature vector is classified as either malignant or benign using a classifier ensemble. Experimental results using a training/testing ratio of 70/30 on 15,506 mammography images from three datasets produced an accuracy of 86.28%, a precision of 78.75%, a recall of 86.14%, and an F1-score of 80.09% with the modified EfficientNetV2 model and stacking classifier. However, an accuracy of 93.47%, a precision of 87.61%, a recall of 93.19%, and an F1-score of 90.32% were obtained using only the CSAW-M dataset images. Full article
(This article belongs to the Special Issue Object Detection and Image Processing Based on Computer Vision)
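As a rough illustration of the image representation described above, the sketch below (not the authors' code) builds a three-channel local phase/amplitude/orientation image from an ROI, assuming a monogenic-signal (Riesz transform) decomposition; the ROI array and normalization scheme are hypothetical stand-ins.

```python
# Minimal sketch: three-channel phase image (local phase, amplitude, orientation)
# from a monogenic-signal decomposition of a mammogram ROI. Assumed, not the
# authors' implementation.
import numpy as np

def monogenic_phase_channels(roi: np.ndarray) -> np.ndarray:
    """Return an (H, W, 3) array with local phase, amplitude, and orientation."""
    roi = roi.astype(np.float64)
    H, W = roi.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    r = np.sqrt(fx**2 + fy**2)
    r[0, 0] = 1.0                          # avoid division by zero at DC
    F = np.fft.fft2(roi)
    # First-order Riesz transform pair (frequency-domain filters)
    r1 = np.real(np.fft.ifft2(F * (-1j * fx / r)))
    r2 = np.real(np.fft.ifft2(F * (-1j * fy / r)))
    even = roi - roi.mean()                # crude even part; a band-pass (e.g. log-Gabor) filter would be used in practice
    odd = np.sqrt(r1**2 + r2**2)
    amplitude = np.sqrt(even**2 + odd**2)
    phase = np.arctan2(odd, even)
    orientation = np.arctan2(r2, r1)
    # Scale each channel to [0, 1] so it can feed a pretrained CNN.
    chans = [(c - c.min()) / (c.max() - c.min() + 1e-12)
             for c in (phase, amplitude, orientation)]
    return np.dstack(chans)

if __name__ == "__main__":
    roi = np.random.rand(224, 224)         # stand-in for a mammogram ROI
    print(monogenic_phase_channels(roi).shape)   # (224, 224, 3)
```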

18 pages, 3741 KiB  
Article
Optimizing Artificial Intelligence Thresholds for Mammographic Lesion Detection: A Retrospective Study on Diagnostic Performance and Radiologist–Artificial Intelligence Discordance
by Taesun Han, Hyesun Yun, Young Keun Sur and Heeboong Park
Diagnostics 2025, 15(11), 1368; https://doi.org/10.3390/diagnostics15111368 - 29 May 2025
Viewed by 539
Abstract
Background/Objectives: Artificial intelligence (AI)-based systems are increasingly being used to assist radiologists in detecting breast cancer on mammograms. However, applying fixed AI score thresholds across diverse lesion types may compromise diagnostic performance, especially in women with dense breasts. This study aimed to determine optimal category-specific AI thresholds and to analyze discrepancies between AI predictions and radiologist assessments, particularly for BI-RADS 4A versus 4B/4C lesions. Methods: We retrospectively analyzed 194 mammograms (76 BI-RADS 4A and 118 BI-RADS 4B/4C) using FDA-approved AI software. Lesion characteristics, breast density, AI scores, and pathology results were collected. A receiver operating characteristic (ROC) analysis was conducted to determine the optimal thresholds via Youden’s index. Discrepancy analysis focused on BI-RADS 4A lesions with AI scores of ≥35 and BI-RADS 4B/4C lesions with AI scores of <35. Results: AI scores were significantly higher in malignant versus benign cases (72.1 vs. 20.9; p < 0.001). The optimal AI threshold was 19 for BI-RADS 4A (AUC = 0.685) and 63 for BI-RADS 4B/4C (AUC = 0.908). In discordant cases, BI-RADS 4A lesions with scores of ≥35 had a malignancy rate of 43.8%, while BI-RADS 4B/4C lesions with scores of <35 had a malignancy rate of 19.5%. Conclusions: Using category-specific AI thresholds improves diagnostic accuracy and supports radiologist decision-making. However, limitations persist in BI-RADS 4A cases with overlapping scores, reinforcing the need for radiologist oversight and tailored AI integration strategies in clinical practice. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
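For readers unfamiliar with the threshold-selection step, the sketch below shows how an optimal cut-off can be derived from an ROC curve via Youden's index; the `labels` and `scores` arrays are hypothetical, not the study's data.

```python
# Minimal sketch: pick an AI-score threshold with Youden's index (J = TPR - FPR).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_threshold(labels, scores):
    fpr, tpr, thresholds = roc_curve(labels, scores)
    j = tpr - fpr                          # Youden's J at each candidate cut-off
    best = int(np.argmax(j))
    return thresholds[best], roc_auc_score(labels, scores)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 200)                  # 1 = malignant (toy data)
    scores = rng.uniform(0, 100, 200) + 20 * labels   # toy AI scores
    thr, auc = youden_threshold(labels, scores)
    print(f"optimal threshold {thr:.1f}, AUC {auc:.3f}")
```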

30 pages, 1229 KiB  
Article
Multi-Scale Vision Transformer with Optimized Feature Fusion for Mammographic Breast Cancer Classification
by Soaad Ahmed, Naira Elazab, Mostafa M. El-Gayar, Mohammed Elmogy and Yasser M. Fouda
Diagnostics 2025, 15(11), 1361; https://doi.org/10.3390/diagnostics15111361 - 28 May 2025
Viewed by 800
Abstract
Background: Breast cancer remains one of the leading causes of mortality among women worldwide, highlighting the critical need for accurate and efficient diagnostic methods. Methods: Traditional deep learning models often struggle with feature redundancy, suboptimal feature fusion, and inefficient selection of discriminative features, leading to limitations in classification performance. To address these challenges, we propose a new deep learning framework that leverages MAX-ViT for multi-scale feature extraction, ensuring robust and hierarchical representation learning. A gated attention fusion module (GAFM) is introduced to dynamically integrate the extracted features, enhancing the discriminative power of the fused representation. Additionally, we employ Harris Hawks optimization (HHO) for feature selection, reducing redundancy and improving classification efficiency. Finally, XGBoost is utilized for classification, taking advantage of its strong generalization capabilities. Results: We evaluate our model on the King Abdulaziz University Mammogram Dataset, categorized based on BI-RADS classifications. Experimental results demonstrate the effectiveness of our approach, achieving 98.2% for accuracy, 98.0% for precision, 98.1% for recall, 98.0% for F1-score, 98.9% for the area under the curve (AUC), and 95% for the Matthews correlation coefficient (MCC), outperforming existing state-of-the-art models. Conclusions: These results validate the robustness of our fusion-based framework in improving breast cancer diagnosis and classification. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
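The gated attention fusion module is not specified in detail here; the sketch below is one plausible minimal form, a learned sigmoid gate that blends two feature vectors, written in PyTorch with assumed dimensions.

```python
# Minimal sketch (assumed design, not the paper's GAFM): a sigmoid gate that
# fuses two multi-scale feature vectors with a convex, feature-wise blend.
import torch
import torch.nn as nn

class GatedAttentionFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, f_a: torch.Tensor, f_b: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([f_a, f_b], dim=-1))   # per-feature gate in (0, 1)
        return g * f_a + (1.0 - g) * f_b               # gated blend of the two scales

if __name__ == "__main__":
    fuse = GatedAttentionFusion(dim=512)
    a, b = torch.randn(8, 512), torch.randn(8, 512)    # two hypothetical feature maps
    print(fuse(a, b).shape)                            # torch.Size([8, 512])
```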

16 pages, 1870 KiB  
Article
Artificial Intelligence as a Potential Tool for Predicting Surgical Margin Status in Early Breast Cancer Using Mammographic Specimen Images
by David Andras, Radu Alexandru Ilies, Victor Esanu, Stefan Agoston, Tudor Florin Marginean Jumate and George Calin Dindelegan
Diagnostics 2025, 15(10), 1276; https://doi.org/10.3390/diagnostics15101276 - 17 May 2025
Viewed by 1279
Abstract
Background/Objectives: Breast cancer is the most common malignancy among women globally, with an increasing incidence, particularly in younger populations. Achieving complete surgical excision is essential to reduce recurrence. Artificial intelligence (AI), including large language models like ChatGPT, has potential for supporting diagnostic tasks, though its role in surgical oncology remains limited. Methods: This retrospective study evaluated ChatGPT’s performance (ChatGPT-4, OpenAI, March 2025) in predicting surgical margin status (R0 or R1) based on intraoperative mammograms of lumpectomy specimens. AI-generated responses were compared with histopathological findings. Performance was evaluated using sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), F1 score, and Cohen’s kappa coefficient. Results: Out of a total of 100 patients, ChatGPT achieved an accuracy of 84.0% in predicting surgical margin status. Sensitivity for identifying R1 cases (incomplete excision) was 60.0%, while specificity for R0 (complete excision) was 86.7%. The positive predictive value (PPV) was 33.3%, and the negative predictive value (NPV) was 95.1%. The F1 score for R1 classification was 0.43, and Cohen’s kappa coefficient was 0.34, indicating moderate agreement with histopathological findings. Conclusions: ChatGPT demonstrated moderate accuracy in confirming complete excision but showed limited reliability in identifying incomplete margins. While promising, these findings emphasize the need for domain-specific training and further validation before such models can be implemented in clinical breast cancer workflows. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
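The reported metrics imply a specific 100-patient confusion matrix (10 R1 and 90 R0 cases; 6 TP, 4 FN, 12 FP, 78 TN). The sketch below reconstructs that matrix (an inference from the reported figures, not the paper's raw data) and recomputes the metrics with scikit-learn.

```python
# Minimal sketch: recompute the evaluation metrics from a confusion matrix
# reconstructed to be consistent with the reported values (1 = R1, incomplete excision).
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, cohen_kappa_score

y_true = np.array([0] * 90 + [1] * 10)                  # 90 R0, 10 R1 (inferred counts)
y_pred = np.array([0] * 78 + [1] * 12 + [1] * 6 + [0] * 4)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # recall for R1
specificity = tn / (tn + fp)          # correct R0 confirmations
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sens {sensitivity:.3f} spec {specificity:.3f} PPV {ppv:.3f} NPV {npv:.3f} "
      f"F1 {f1_score(y_true, y_pred):.2f} kappa {cohen_kappa_score(y_true, y_pred):.2f}")
# -> sens 0.600, spec 0.867, PPV 0.333, NPV 0.951, F1 0.43, kappa 0.34
```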

13 pages, 1852 KiB  
Article
The Impact of Automatic Exposure Control Technology on the In Vivo Radiation Dose in Digital Mammography: A Comparison Between Different Systems and Target/Filter Combinations
by Ahmad A. Alhulail, Salman M. Albeshan, Mohammed S. Alshuhri, Essam M. Alkhybari, Mansour A. Almanaa, Haitham Alahmad, Khaled Alenazi, Abdulaziz S. Alshabibi, Mohammed Alsufayan, Saleh A. Alsulaiman, Maha M. Almuqbil, Mahmoud M. Elsharkawi and Sultan Alghamdi
Diagnostics 2025, 15(10), 1185; https://doi.org/10.3390/diagnostics15101185 - 8 May 2025
Viewed by 905
Abstract
Background/Objectives: Digital mammography is widely used for breast cancer screening; however, variations in system design and automatic exposure control (AEC) strategies can lead to significant differences in radiation dose, potentially affecting the diagnostic quality and patient safety. In this study, we aimed to determine the effect of various mammographic technologies on the in vivo mean glandular doses (MGDs) that are received in clinical settings. Methods: The MGDs and applied acquisition parameters from 194,608 mammograms, acquired employing AEC using different digital mammography systems (GE, Siemens, and two different models of Hologic), were retrospectively collected. The potential variation in MGD resulting from different technologies (system and target/filter combination) was assessed employing the Kruskal–Wallis test, followed by Dunn’s post hoc. The AEC optimization of acquisition parameters (kVp, mAs) within each system was investigated through a multi-regression analysis as a function of the compressed breast thickness (CBT). The trend line of these parameters in addition to the MGD and source-to-breast distance were also plotted and compared. Results: There were significant variations in delivered doses per CBT based on which technology was used (p < 0.001). The regression analyses revealed system-specific differences in AEC adjustments of mAs and kVp in response to CBT changes. As the CBT increases, the MGD increases with different degrees, rates, and patterns across systems due to differences in AEC strategies. Conclusions: The MGD is affected by the applied technology, which is different between systems. Clinicians need to be aware of these variations and how they affect the MGD. Additionally, manufacturers may need to consider standardizing the implemented technology effects on the MGDs. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
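A minimal sketch of the statistical workflow named above, a Kruskal–Wallis test across systems plus a simple regression of dose against compressed breast thickness, using synthetic data; the distributions and coefficients are invented for illustration only.

```python
# Minimal sketch: Kruskal-Wallis comparison of MGD across systems and a
# regression of MGD on compressed breast thickness (CBT). Synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mgd_by_system = {                         # toy MGD samples (mGy) per system
    "GE":      rng.gamma(4.0, 0.35, 300),
    "Siemens": rng.gamma(4.0, 0.40, 300),
    "Hologic": rng.gamma(4.0, 0.45, 300),
}
h, p = stats.kruskal(*mgd_by_system.values())
print(f"Kruskal-Wallis H = {h:.1f}, p = {p:.3g}")
# (Pairwise Dunn's post hoc tests could follow, e.g. via the scikit-posthocs package.)

cbt = rng.uniform(20, 90, 300)                           # mm
mgd = 0.5 + 0.02 * cbt + rng.normal(0, 0.1, 300)         # toy dose-thickness trend
slope, intercept, r, p_reg, stderr = stats.linregress(cbt, mgd)
print(f"MGD = {intercept:.2f} + {slope:.3f}*CBT (r^2 = {r**2:.2f})")
```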

24 pages, 7554 KiB  
Article
Comparative Evaluation of Machine Learning-Based Radiomics and Deep Learning for Breast Lesion Classification in Mammography
by Alessandro Stefano, Fabiano Bini, Eleonora Giovagnoli, Mariangela Dimarco, Nicolò Lauciello, Daniela Narbonese, Giovanni Pasini, Franco Marinozzi, Giorgio Russo and Ildebrando D’Angelo
Diagnostics 2025, 15(8), 953; https://doi.org/10.3390/diagnostics15080953 - 9 Apr 2025
Cited by 1 | Viewed by 1192
Abstract
Background: Breast cancer is the second leading cause of cancer-related mortality among women, accounting for 12% of cases. Early diagnosis, based on the identification of radiological features, such as masses and microcalcifications in mammograms, is crucial for reducing mortality rates. However, manual interpretation by radiologists is complex and subject to variability, emphasizing the need for automated diagnostic tools to enhance accuracy and efficiency. This study compares a radiomics workflow based on machine learning (ML) with a deep learning (DL) approach for classifying breast lesions as benign or malignant. Methods: matRadiomics was used to extract radiomics features from mammographic images of 1219 patients from the CBIS-DDSM public database, including 581 cases of microcalcifications and 638 of masses. Among the ML models, a linear discriminant analysis (LDA) demonstrated the best performance for both lesion types. External validation was conducted on a private dataset of 222 images to evaluate generalizability to an independent cohort. Additionally, a deep learning approach based on the EfficientNetB6 model was employed for comparison. Results: The LDA model achieved a mean validation AUC of 68.28% for microcalcifications and 61.53% for masses. In the external validation, AUC values of 66.9% and 61.5% were obtained, respectively. In contrast, the EfficientNetB6 model demonstrated superior performance, achieving an AUC of 81.52% for microcalcifications and 76.24% for masses, highlighting the potential of DL for improved diagnostic accuracy. Conclusions: This study underscores the limitations of ML-based radiomics in breast cancer diagnosis. Deep learning proves to be a more effective approach, offering enhanced accuracy and supporting clinicians in improving patient management. Full article
(This article belongs to the Special Issue Updates on Breast Cancer: Diagnosis and Management)
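To make the machine-learning arm concrete, the sketch below runs a linear discriminant analysis with cross-validated AUC on a synthetic radiomics feature matrix; it is a generic scikit-learn pipeline, not the matRadiomics workflow itself.

```python
# Minimal sketch: LDA classifier with cross-validated AUC on synthetic radiomics features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 100))        # 600 lesions x 100 radiomics features (toy)
y = rng.integers(0, 2, 600)            # 1 = malignant
X[y == 1, :5] += 0.4                   # weak signal injected into a few features

model = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.3f}")
```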

24 pages, 10760 KiB  
Article
Evolution of an Artificial Intelligence-Powered Application for Mammography
by Yuriy Vasilev, Denis Rumyantsev, Anton Vladzymyrskyy, Olga Omelyanskaya, Lev Pestrenin, Igor Shulkin, Evgeniy Nikitin, Artem Kapninskiy and Kirill Arzamasov
Diagnostics 2025, 15(7), 822; https://doi.org/10.3390/diagnostics15070822 - 24 Mar 2025
Viewed by 943
Abstract
Background: The implementation of radiological artificial intelligence (AI) solutions remains challenging due to limitations in existing testing methodologies. This study assesses the efficacy of a comprehensive methodology for performance testing and monitoring of commercial-grade mammographic AI models. Methods: We utilized a combination of retrospective and prospective multicenter approaches to evaluate a neural network based on the Faster R-CNN architecture with a ResNet-50 backbone, trained on a dataset of 3641 mammograms. The methodology encompassed functional and calibration testing, coupled with routine technical and clinical monitoring. Feedback from testers and radiologists was relayed to the developers, who made updates to the AI model. The test dataset comprised 112 medical organizations, representing 10 manufacturers of mammography equipment and encompassing 593,365 studies. The evaluation metrics included the area under the curve (AUC), accuracy, sensitivity, specificity, technical defects, and clinical assessment scores. Results: The results demonstrated significant enhancement in the AI model’s performance through collaborative efforts among developers, testers, and radiologists. Notable improvements included functionality, diagnostic accuracy, and technical stability. Specifically, the AUC rose by 24.7% (from 0.73 to 0.91), the accuracy improved by 15.6% (from 0.77 to 0.89), sensitivity grew by 37.1% (from 0.62 to 0.85), and specificity increased by 10.7% (from 0.84 to 0.93). The average proportion of technical defects declined from 9.0% to 1.0%, while the clinical assessment score improved from 63.4 to 72.0. Following 2 years and 9 months of testing, the AI solution was integrated into the compulsory health insurance system. Conclusions: The multi-stage, lifecycle-based testing methodology demonstrated substantial potential in software enhancement and integration into clinical practice. Key elements of this methodology include robust functional and diagnostic requirements, continuous testing and updates, systematic feedback collection from testers and radiologists, and prospective monitoring. Full article
(This article belongs to the Special Issue Advances in Breast Radiology)

12 pages, 926 KiB  
Article
Establishing Diagnostic Reference Levels for Mammography Digital Breast Tomosynthesis, Contrast Enhance, Implants, Spot Compression, Magnification and Stereotactic Biopsy in Dubai Health Sector
by Entesar Z. Dalah, Maryam K. Alkaabi, Nisha A. Antony and Hashim M. Al-Awadhi
J. Imaging 2025, 11(3), 79; https://doi.org/10.3390/jimaging11030079 - 7 Mar 2025
Viewed by 991
Abstract
The aim of this patient dose review is to establish a thorough diagnostic reference level (DRL) system. This entails calculating a DRL value for each possible image technique/view considered to perform a diagnostic mammogram in our practice. Diagnostic mammograms from a total of 1191 patients who underwent a diagnostic mammogram study in our designated diagnostic mammography center were collected and retrospectively analyzed. The DRL representing our health sector was set as the median of the mean glandular dose (MGD) for each possible image technique/view, including the 2D standard bilateral craniocaudal (LCC/RCC) and mediolateral oblique (LMLO/RMLO), the 2D bilateral spot compression CC and MLO (RSCC/LSCC and RSMLO/LSMLO), the 2D bilateral spot compression with magnification (RMSCC/LMSCC and RMSMLO/LMSMLO), the 3D digital breast tomosynthesis CC and MLO (RCC/LCC and RMLO/LMLO), the 2D bilateral implant CC and MLO (RIMCC/LIMCC and RIMMLO/LIMMLO), the 2D bilateral contrast enhanced CC and MLO (RCECC/LCECC and RCEMLO/LCEMLO) and the 2D bilateral stereotactic biopsy guided CC (SBRCC/SBLCC). This patient dose review revealed that the highest MGD was associated with the 2D bilateral spot compression with magnification (MSCC/MSMLO) image view. For the compressed breast thickness (CBT) group 60–69 mm, the median and 75th percentile of the MGD values were 3.35 and 3.96 mGy for MSCC and 4.14 and 5.25 mGy for MSMLO, respectively. Clear MGD variations were observed across the different possible views, even for the same CBT group. Our results are in line with the published DRLs when using the same statistical quantity and CBT group. Full article
(This article belongs to the Special Issue Tools and Techniques for Improving Radiological Imaging Applications)
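The DRL computation described above reduces to a grouped median (and 75th percentile) of MGD per technique/view. A minimal pandas sketch with hypothetical dose values:

```python
# Minimal sketch: per-view median and 75th percentile of MGD, as used for DRLs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "view": rng.choice(["LCC", "RCC", "LMLO", "RMLO", "MSCC", "MSMLO"], 2000),
    "mgd_mGy": rng.gamma(5.0, 0.4, 2000),       # toy mean glandular doses
})
drl = (df.groupby("view")["mgd_mGy"]
         .agg(median="median", p75=lambda s: s.quantile(0.75))
         .round(2))
print(drl)
```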

36 pages, 25720 KiB  
Article
Early Breast Cancer Detection Based on Deep Learning: An Ensemble Approach Applied to Mammograms
by Youness Khourdifi, Alae El Alami, Mounia Zaydi, Yassine Maleh and Omar Er-Remyly
BioMedInformatics 2024, 4(4), 2338-2373; https://doi.org/10.3390/biomedinformatics4040127 - 13 Dec 2024
Cited by 4 | Viewed by 3881
Abstract
Background: Breast cancer is one of the leading causes of death in women, making early detection through mammography crucial for improving survival rates. However, human interpretation of mammograms is often prone to diagnostic errors. This study addresses the challenge of improving the accuracy of breast cancer detection by leveraging advanced machine learning techniques. Methods: We propose an extended ensemble deep learning model that integrates three state-of-the-art convolutional neural network (CNN) architectures: VGG16, DenseNet121, and InceptionV3. The model utilizes multi-scale feature extraction to enhance the detection of both benign and malignant masses in mammograms. This ensemble approach is evaluated on two benchmark datasets: INbreast and CBIS-DDSM. Results: The proposed ensemble model achieved significant performance improvements. On the INbreast dataset, the ensemble model attained an accuracy of 90.1%, recall of 88.3%, and an F1-score of 89.1%. For the CBIS-DDSM dataset, the model reached 89.5% accuracy and 90.2% specificity. The ensemble method outperformed each individual CNN model, reducing both false positives and false negatives, thereby providing more reliable diagnostic results. Conclusions: The ensemble deep learning model demonstrated strong potential as a decision support tool for radiologists, offering more accurate and earlier detection of breast cancer. By leveraging the complementary strengths of multiple CNN architectures, this approach can improve clinical decision making and enhance the accessibility of high-quality breast cancer screening. Full article
(This article belongs to the Topic Computational Intelligence and Bioinformatics (CIB))
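The abstract does not state how the three CNN outputs are combined; the sketch below shows one common choice, soft voting over per-class probabilities, with hypothetical arrays standing in for the VGG16, DenseNet121, and InceptionV3 outputs.

```python
# Minimal sketch: soft-voting ensemble of three CNN probability outputs
# (benign vs malignant). The probabilities are invented stand-ins.
import numpy as np

p_vgg16     = np.array([[0.30, 0.70], [0.80, 0.20]])
p_densenet  = np.array([[0.20, 0.80], [0.60, 0.40]])
p_inception = np.array([[0.40, 0.60], [0.70, 0.30]])

ensemble = (p_vgg16 + p_densenet + p_inception) / 3.0   # average per-class probability
labels = ensemble.argmax(axis=1)                        # 0 = benign, 1 = malignant
print(ensemble, labels)
```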

30 pages, 11462 KiB  
Article
Revealing Occult Malignancies in Mammograms Through GAN-Driven Breast Density Transformation
by Dionysios Anyfantis, Athanasios Koutras, George Apostolopoulos and Ioanna Christoyianni
Electronics 2024, 13(23), 4826; https://doi.org/10.3390/electronics13234826 - 6 Dec 2024
Viewed by 1163
Abstract
Breast cancer remains one of the primary causes of cancer-related deaths among women globally. Early detection via mammography is essential for improving prognosis and survival rates. However, mammogram diagnostic accuracy is severely hindered by dense breast tissue, which can obscure potential malignancies, complicating early detection. To tackle this pressing issue, this study introduces an innovative approach that leverages Generative Adversarial Networks (GANs), specifically CycleGAN and GANHopper, to transform breast density in mammograms. The aim is to diminish the masking effect of dense tissue, thus enhancing the visibility of underlying malignancies. The method uses unsupervised image-to-image translation to gradually alter breast density (from high (ACR-D) to low (ACR-A)) in mammographic images, detecting obscured lesions while preserving original diagnostic features. We applied this approach to multiple mammographic datasets, demonstrating its effectiveness in diverse contexts. Experimental results exhibit substantial improvements in detecting potential malignancies concealed by dense breast tissue. The method significantly improved precision, recall, and F1-score metrics across all datasets, revealing previously obscured malignancies, and image quality assessments confirmed the diagnostic relevance of the transformed images. The study introduces a novel mammogram analysis method using advanced machine-learning techniques, enhancing diagnostic accuracy in dense breasts and potentially improving early breast cancer detection and patient outcomes. Full article

14 pages, 2039 KiB  
Article
Deep Learning Based Breast Cancer Detection Using Decision Fusion
by Doğu Manalı, Hasan Demirel and Alaa Eleyan
Computers 2024, 13(11), 294; https://doi.org/10.3390/computers13110294 - 14 Nov 2024
Cited by 4 | Viewed by 3067
Abstract
Breast cancer, which has the highest mortality and morbidity rates among diseases affecting women, poses a significant threat to their lives and health. Early diagnosis is crucial for effective treatment. Recent advancements in artificial intelligence have enabled innovative techniques for early breast cancer detection. Convolutional neural networks (CNNs) and support vector machines (SVMs) have been used in computer-aided diagnosis (CAD) systems to identify breast tumors from mammograms. However, existing methods often face challenges in accuracy and reliability across diverse diagnostic scenarios. This paper proposes a three parallel channel artificial intelligence-based system. First, SVM distinguishes between different tumor types using local binary pattern (LBP) features. Second, a pre-trained CNN extracts features, and SVM identifies potential tumors. Third, a newly developed CNN is trained and used to classify mammogram images. Finally, a decision fusion that combines results from the three channels to enhance system performance is implemented using different rules. The proposed decision fusion-based system outperforms state-of-the-art alternatives with an overall accuracy of 99.1% using the product rule. Full article
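A minimal sketch of the product-rule decision fusion mentioned above: per-class probabilities from the three channels are multiplied element-wise and renormalized. The channel outputs here are hypothetical.

```python
# Minimal sketch: product-rule fusion of per-class probabilities from three channels.
import numpy as np

def product_rule(*channel_probs: np.ndarray) -> np.ndarray:
    fused = np.ones_like(channel_probs[0])
    for p in channel_probs:
        fused *= p                       # multiply per-class probabilities
    return fused / fused.sum(axis=1, keepdims=True)   # renormalize per sample

p_svm_lbp = np.array([[0.30, 0.70]])     # channel 1: SVM on LBP features (toy)
p_cnn_svm = np.array([[0.25, 0.75]])     # channel 2: SVM on pre-trained CNN features (toy)
p_cnn     = np.array([[0.40, 0.60]])     # channel 3: end-to-end CNN (toy)
print(product_rule(p_svm_lbp, p_cnn_svm, p_cnn))   # fused [benign, malignant]
```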

14 pages, 2568 KiB  
Article
Efficacy of Mammographic Artificial Intelligence-Based Computer-Aided Detection in Predicting Pathologic Complete Response to Neoadjuvant Chemotherapy
by Ga Eun Park, Bong Joo Kang, Sung Hun Kim and Han Song Mun
Life 2024, 14(11), 1449; https://doi.org/10.3390/life14111449 - 8 Nov 2024
Viewed by 1503
Abstract
This study evaluates the potential of an AI-based computer-aided detection (AI-CAD) system in digital mammography for predicting pathologic complete response (pCR) in breast cancer patients after neoadjuvant chemotherapy (NAC). A retrospective analysis of 132 patients who underwent NAC and surgery between January 2020 and December 2022 was performed. Pre- and post-NAC mammograms were analyzed using conventional CAD and AI-CAD systems, with negative exams defined by the absence of marked abnormalities. Two radiologists reviewed mammography, ultrasound, MRI, and diffusion-weighted imaging (DWI). Concordance rates between CAD and AI-CAD were calculated, and the diagnostic performance, including the area under the receiver operating characteristics curve (AUC), was assessed. The pre-NAC concordance rates were 90.9% for CAD and 97% for AI-CAD, while post-NAC rates were 88.6% for CAD and 89.4% for AI-CAD. The MRI had the highest diagnostic performance for pCR prediction, with AI-CAD performing comparably to other modalities. Univariate analysis identified significant predictors of pCR, including AI-CAD, mammography, ultrasound, MRI, histologic grade, ER, PR, HER2, and Ki-67. In multivariable analysis, negative MRI, histologic grade 3, and HER2 positivity remained significant predictors. In conclusion, this study demonstrates that AI-CAD in digital mammography shows the potential to examine the pCR of breast cancer patients following NAC. Full article

20 pages, 3672 KiB  
Article
Grad-CAM Enabled Breast Cancer Classification with a 3D Inception-ResNet V2: Empowering Radiologists with Explainable Insights
by Fatma M. Talaat, Samah A. Gamel, Rana Mohamed El-Balka, Mohamed Shehata and Hanaa ZainEldin
Cancers 2024, 16(21), 3668; https://doi.org/10.3390/cancers16213668 - 30 Oct 2024
Cited by 8 | Viewed by 2782
Abstract
Breast cancer (BCa) poses a severe threat to women’s health worldwide as it is the most frequently diagnosed type of cancer and the primary cause of death for female patients. The biopsy procedure remains the gold standard for accurate and effective diagnosis of BCa. However, its adverse effects, such as invasiveness, bleeding, infection, and reporting time, make this procedure a last resort for diagnosis. A mammogram is considered the routine noninvasive imaging-based procedure for diagnosing BCa, mitigating the need for biopsies; however, it might be prone to subjectivity depending on the radiologist’s experience. Therefore, we propose a novel, mammogram image-based BCa explainable AI (BCaXAI) model with a deep learning-based framework for precise, noninvasive, objective, and timely diagnosis of BCa. The proposed BCaXAI leverages the Inception-ResNet V2 architecture, where the integration of explainable AI components, such as Grad-CAM, provides radiologists with valuable visual insights into the model’s decision-making process, fostering trust and confidence in the AI-based system. Using the DDSM and CBIS-DDSM mammogram datasets, BCaXAI achieved exceptional performance, surpassing traditional models such as ResNet50 and VGG16. The model demonstrated superior accuracy (98.53%), recall (98.53%), precision (98.40%), F1-score (98.43%), and AUROC (0.9933), highlighting its effectiveness in distinguishing between benign and malignant cases. These promising results could alleviate the diagnostic subjectivity that might arise from the variability in experience between radiologists, as well as minimize the need for repetitive biopsy procedures. Full article
(This article belongs to the Special Issue Artificial Intelligence-Assisted Radiomics in Cancer)
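The sketch below illustrates the Grad-CAM mechanism in general terms: gradients of the class score are pooled into channel weights that re-weight a convolutional layer's activations into a coarse heatmap. A randomly initialized ResNet-18 stands in for the paper's Inception-ResNet V2 backbone, so this is not the BCaXAI implementation.

```python
# Minimal Grad-CAM sketch with a stand-in backbone (ResNet-18, random weights).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
target_layer = model.layer4[-1]                        # last conv block
acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)                        # stand-in mammogram tensor
score = model(x)[0].max()                              # top-class logit
score.backward()

w = grads["v"].mean(dim=(2, 3), keepdim=True)          # gradient-pooled channel weights
cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True)) # weighted activation map
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
print(cam.shape)                                        # torch.Size([1, 1, 224, 224])
```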

20 pages, 2032 KiB  
Article
CSA-Net: Channel and Spatial Attention-Based Network for Mammogram and Ultrasound Image Classification
by Osama Bin Naeem and Yasir Saleem
J. Imaging 2024, 10(10), 256; https://doi.org/10.3390/jimaging10100256 - 16 Oct 2024
Cited by 1 | Viewed by 1906
Abstract
Breast cancer persists as a critical global health concern, underscoring the need for reliable diagnostic strategies to improve patient survival rates. To address this challenge, a computer-aided diagnostic methodology for breast cancer classification is proposed. An architecture that incorporates a pre-trained EfficientNet-B0 model along with channel and spatial attention mechanisms is employed. The efficiency of leveraging attention mechanisms for breast cancer classification is investigated here. The proposed model shows commendable performance in classification tasks, particularly significant improvements upon integrating attention mechanisms. Furthermore, the model is versatile across imaging modalities, as demonstrated by its robust performance in classifying breast lesions, not only in mammograms but also in ultrasound images during cross-modality evaluation. It achieved an accuracy of 99.9% for binary classification using the mammogram dataset and 92.3% accuracy on the cross-modality multi-class dataset. The experimental results emphasize the superiority of our proposed method over the current state-of-the-art approaches for breast cancer classification. Full article
(This article belongs to the Section Medical Imaging)
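The exact attention design is not given here; the sketch below shows a CBAM-style channel and spatial attention pair that could sit on EfficientNet-B0 feature maps, with assumed tensor shapes.

```python
# Minimal sketch (assumed CBAM-style, not necessarily the paper's design):
# channel and spatial attention applied to backbone feature maps.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))             # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))              # global max pooling branch
        return x * torch.sigmoid(avg + mx)[..., None, None]

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        m = torch.cat([x.mean(dim=1, keepdim=True),    # channel-wise mean map
                       x.amax(dim=1, keepdim=True)],   # channel-wise max map
                      dim=1)
        return x * torch.sigmoid(self.conv(m))

if __name__ == "__main__":
    feats = torch.randn(2, 1280, 7, 7)     # e.g. EfficientNet-B0 output maps (assumed shape)
    out = SpatialAttention()(ChannelAttention(1280)(feats))
    print(out.shape)                       # torch.Size([2, 1280, 7, 7])
```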

18 pages, 547 KiB  
Article
Evaluating Real World Health System Resource Utilization and Costs for a Risk-Based Breast Cancer Screening Approach in the Canadian PERSPECTIVE Integration and Implementation Project
by Soo-Jin Seung, Nicole Mittmann, Zharmaine Ante, Ning Liu, Kristina M. Blackmore, Emilie S. Richard, Anisia Wong, Meghan J. Walker, Craig C. Earle, Jacques Simard and Anna M. Chiarelli
Cancers 2024, 16(18), 3189; https://doi.org/10.3390/cancers16183189 - 18 Sep 2024
Cited by 3 | Viewed by 2041
Abstract
Background: A prospective cohort study was undertaken within the PERSPECTIVE I&I project to evaluate healthcare resource utilization and costs associated with breast cancer risk assessment and screening, as well as overall costs stratified by risk level, in Ontario, Canada. Methods: From July 2019 to December 2022, 1997 females aged 50 to 70 years consented to risk assessment and received their breast cancer risk level and personalized screening action plan in Ontario. The mean costs for risk-stratified screening-related activities included risk assessment, screening and diagnostic costs. The GETCOST macro from the Institute of Clinical Evaluative Sciences (ICES) assessed the mean overall healthcare system costs. Results: For the 1997 participants, 83.3%, 14.4% and 2.3% were estimated to be average, higher than average, and high risk, respectively (median age (IQR): 60 [56–64] years). Stratification into the three risk levels was determined using the validated multifactorial CanRisk prediction tool that includes family history information, a polygenic risk score (PRS), breast density and established lifestyle/hormonal risk factors. The mean number of genetic counseling visits, mammograms and MRIs per individual increased with risk level. High-risk participants incurred the highest overall mean risk-stratified screening-related costs in 2022 CAD (±SD) at CAD 905 (±269), followed by CAD 580 (±192) and CAD 521 (±163) for higher-than-average and average-risk participants, respectively. Among the breast screening-related costs, the greatest cost burden across all risk groups was the risk assessment cost, followed by total diagnostic and screening costs. The mean overall healthcare cost per participant (±SD) was the highest for the average-risk participants with CAD 6311 (±19,641), followed by higher than average risk with CAD 5391 (±8325) and high risk with CAD 5169 (±7676). Conclusion: Although high-risk participants incurred the highest risk-stratified screening-related costs, their overall healthcare utilization costs were similar to those of other risk levels. Our study underscored the importance of integrating risk stratification as part of the screening pathway to support breast cancer detection at an earlier and more treatable stage, thereby reducing costs and the overall burden on the healthcare system. Full article
(This article belongs to the Special Issue Disparities in Cancer Prevention, Screening, Diagnosis and Management)
