Review

A Comprehensive Review of Computer-Aided Models for Breast Cancer Diagnosis Using Histopathology Images

by Alberto Labrada 1 and Buket D. Barkana 2,*
1 Department of Electrical Engineering, The University of Bridgeport, Bridgeport, CT 06604, USA
2 Department of Biomedical Engineering, The University of Akron, Akron, OH 44325, USA
* Author to whom correspondence should be addressed.
Bioengineering 2023, 10(11), 1289; https://doi.org/10.3390/bioengineering10111289
Submission received: 2 September 2023 / Revised: 20 October 2023 / Accepted: 25 October 2023 / Published: 7 November 2023
(This article belongs to the Topic Machine Learning and Biomedical Sensors)

Abstract

Breast cancer is the second most common cancer in women, occurring mainly in middle-aged and older women. The American Cancer Society reported that the average lifetime risk of a woman developing breast cancer is about 13%, and this incidence rate has increased by 0.5% per year in recent years. A biopsy is performed when screening tests and imaging results show suspicious breast changes. Advancements in computer-aided system capabilities and performance have fueled research using histopathology images in cancer diagnosis. Advances in machine learning and deep neural networks have tremendously increased the number of studies developing computerized detection and classification models. The dataset-dependent nature and trial-and-error approach of deep networks have produced varying results in the literature. This work comprehensively reviews the studies published between 2010 and 2022 regarding commonly used public-domain datasets and the methodologies employed in preprocessing, segmentation, feature engineering, machine-learning approaches, classifiers, and performance metrics.

1. Introduction

Breast cancer is projected to account for 1 in 3 new female cancers yearly in the United States (US) [1]. Breast cancer survival is measured in 5-year intervals as relative survival rates, which do not consider the cause of death. The American Cancer Society (ACS) reported that the 5-year survival rate is 90%, the 10-year survival rate is 84%, and the 15-year survival rate is 80% [2]. The ACS estimated that 287,850 new cases of invasive breast cancer would be diagnosed in US women in 2022. In addition, 51,400 new cases of ductal carcinoma in situ (DCIS) were expected to be diagnosed, and approximately 43,250 breast cancer deaths would occur in US women [1]. The World Health Organization (WHO) reported that breast cancer accounted for 12% of all new annual cancer cases worldwide and had become the most common form of cancer diagnosed globally as of 2021 [3]. The latest statistics estimate that 684,996 women died of breast cancer worldwide in 2020 and that 2,261,419 new breast cancer cases were diagnosed worldwide that year [4].
With advances in technology and healthcare systems, breast cancer survival rates have increased. Many variables can affect the survival rate of someone diagnosed with breast cancer; most importantly, an early diagnosis can immensely increase the chances of survival. Recent technological advances have enabled computer-aided detection methods to assist in diagnosing this form of cancer. The systems and tools commonly incorporated into cancer diagnosis are mammograms, ultrasounds, magnetic resonance imaging (MRI), and histopathology images.
Especially in the last five years, most CAD systems have been designed using supervised machine learning models built on deep neural networks. While deep networks have many advantages, they also have limitations and drawbacks. Challenges related to database quality and size, high computational cost, overfitting, and the black-box nature of these models must be addressed and better understood. Breast cancer has become one of the most frequently studied application areas. Comprehensive review papers evaluating recent works are valuable in presenting, comparing, and discussing the impacts of those works and forecasting future trends.
The existing review papers in the literature on breast cancer and histopathology focused on image analysis methodologies [5]; epidemiology, risk factors, classification, detection, markers, and treatment strategies [6,7,8]; a combination of different image modalities [9]; cut-off levels of Ki-67 [10,11]; and only deep neural network models [12,13,14]. The work in [15] surveyed the trends for breast cancer CAD systems but did not cover all stages of CAD systems. Our review exclusively focuses on all stages of CAD systems modeled for breast cancer using histopathology images. We reviewed each stage by reporting and evaluating the techniques developed and/or employed between 2010 and 2022.

1.1. Scope of the Review

This review aims to compile data about the CAD systems using breast histopathology images, including datasets, preprocessing, segmentation, feature engineering, classification, and performance metrics. The primary purpose of the review is to seek answers to the following questions:
(a)
Which histopathological image datasets are widely used in breast CAD systems?
(b)
What are the preprocessing methods and their impact on the CAD systems?
(c)
What are the employed segmentation and feature extraction methods?
(d)
What are the most common performance metrics used?
(e)
What are the trending methodologies and associated challenges in the field?

1.2. Article Selection Criteria

We searched the most significant works in the literature between 2010 and 2022 by using the following keywords: {Histopathology}; {Breast cancer}; {Image analysis}; {Image processing}; {Histopathological image analysis}; {Computer-assisted Diagnosis}; {Digital pathology}; {Nuclei segmentation}; {Breast histopathology images}; {Automatic image classification}; {Breast biopsy}; {Histopathology image segmentation}; {Image classification}; {Carcinoma cancer}; {Breast cancer screening techniques}; {Medical image processing}; {Breast cancer detection}; {Computer-aided diagnosis (CAD)}; {Computer vision}; {Image recognition}; {Medical image classification}; {Pattern recognition and classification}; {Invasive ductal carcinoma prediction}; {Mitotic cell count}; {Bioinformatics}; {Computational biology}. Peer-reviewed journal and conference papers were collected from credible search engines, including PubMed, IEEE, Elsevier, Wiley, and Springer. Only the studies with the following keywords: {breast cancer + histopathology + detection}, {breast cancer + histopathology + classification}, {breast cancer + histopathology + diagnosis}, {breast cancer + histopathology + segmentation} were considered in the review. Furthermore, we excluded articles if they appeared in multiple sources, had a pure medical science focus, did not propose a CAD system, had no reported results, or were review papers. Figure 1a depicts the number of studies conducted between 2010 and 2022. There has been a significant increase in research work over the years; in 2022, the number of studies per year almost doubled. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [16] followed for the review are presented in Figure 1b.
We summarize the most used breast histopathology image datasets in Section 3. The preprocessing methods employed in the reviewed works are explained in Section 4. Segmentation and feature engineering algorithms are presented in Section 5 and Section 6, and classification methods and performance metrics are covered in Section 7 and Section 8. Figure 2 presents the organization of our review process.

2. Basics and Background

Mammography techniques have been a diagnostic tool since the 1960s, and the ACS has officially recommended them since 1976 [17]. A mammogram uses low-dose X-rays to examine the breast [18]. The X-rays are part of a screening process that typically involves several breast X-rays. Mammograms show tumors and microcalcifications that may indicate cancer [18]. Mammography has aided in decreasing the mortality rate in women with breast cancer by 25–30% over a 5- to 7-year period compared to a control group [19]. The doses of radiation required to produce mammograms are reported to be considerably low [18].
The use of ultrasound in breast imaging dates back to 1951, when Wild and Neal described the characteristics of two breast tumors, one benign and one malignant, in an intact human breast [20]. In breast ultrasounds, sound waves are used, and their echoes can construct computer representations of the inside of a breast. A transducer device moves over the skin and sends sound waves that bounce off breast tissue. The transducer then picks up the reflected sound waves and uses them to construct 2D images. These images can reveal changes in the breast, such as fluid-filled cysts [21,22]. Ultrasound screening alone in asymptomatic women is likely to produce false-positive and false-negative results. Therefore, a mammogram combined with automated whole-breast ultrasound (AWBU) is better in cases of dense-breasted women. According to a study by Kelly et al., 87% of the cancer detections aided by AWBU were found in the 68% of studied women with dense breasts [19,23]. Ultrasound can also be used in breast cancer detection, specifically when guiding a biopsy needle into a region of interest in the breast so that cells can be taken out and tested for cancer. Unlike mammograms, an ultrasound introduces minimal risk to a patient because it does not expose a person to radiation [22].
The breast MRI was first brought into use in the late 1980s. According to a study in 1986 by Heywang et al., preliminary results indicated that an MRI of breasts using gadolinium administration showed increased enhancement relative to normal breast tissue [24]. In an MRI, the hydrogen nucleus, abundant in water and fat, is used for imaging. The magnetic property of the nucleus is used in conjunction with radio waves and strong magnets, creating a detailed picture of the inside of a breast [19,25]. Breast MRI is typically used for women at high risk for breast cancer. It is usually paired with a mammogram because an MRI alone can miss specific cancers that can be found with a mammogram. Once cancer has been diagnosed, a breast MRI can be done to help localize the cancer, determine its exact size, and look for other tumors in the breast. Unlike mammograms, an MRI uses strong magnets instead of radiation to make detailed cross-sectional pictures of the body by taking pictures from different angles. Therefore, there is no exposure to radiation during this procedure [25].
The first breast biopsies were performed in the 1850s and 1860s by Skey, Sir James Paget, and John Eric Erichsen [26]. A biopsy involves a physician removing small pieces of breast tissue from areas of interest so that they can be further analyzed in a laboratory to determine if cancer cells are present [27]. A breast biopsy is usually ordered to check a palpable lump or mass, examine a problem seen on a mammogram, and determine whether a breast lump or mass is malignant or benign [28]. The diagnoses are carried out by pathologists looking at histopathology images and examining them for signs of benignity or malignancy. Biopsy extraction techniques are ultrasound-guided, mammographic-stereotactic-guided, magnetic resonance-guided, fine-needle aspiration, core needle, vacuum-assisted core, and surgical biopsy [28,29]. See Figure 3.
Examining many histopathological images is cumbersome and time-intensive for pathologists, and it can result in a certain margin of human error. For these reasons, computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems assist physicians and experts in increasing the success rate of the analysis/diagnosis. A CADe system focuses on the localization of a specific object or region of interest (ROI), where the particular area of interest is specific to the task. In breast cancer research, detection is geared specifically towards the nuclei present in a histopathology image, which are then segmented to make up the ROIs in the images. The CADx systems can extract and analyze features in segmented images and use classifiers to measure and distinguish between benignity and malignancy [30].

3. Histopathology Image Datasets

The breast cancer histopathological image classification (BreakHis), the Kaggle breast cancer histopathology images dataset, the ICIAR 2018 grand challenge on breast cancer histology images (BACH) dataset, the tumor proliferation assessment challenge 2016 (TUPAC16), the MITOS-ATYPIA-14 challenge, and the international conference on pattern recognition (ICPR 2012) dataset are the most widely used datasets in the literature. Table 1 lists the datasets and their URLs.

3.1. The BreakHis Dataset

The BreakHis dataset was built with the P&D Laboratory for Pathological Anatomy and Cytopathology in Parana, Brazil. Tissue samples that comprise this dataset were generated from breast biopsy slides collected by the surgical (open) biopsy method. The extracted samples were then stained with hematoxylin and eosin (H&E) [31]. The 7909 microscopic images, 5429 malignant and 2480 benign, were sampled from 82 patients at four magnifications (40×, 100×, 200×, and 400×) [32]. “The dataset currently contains four distinct histological types of benign breast tumors: adenosis (A), fibroadenoma (F), phyllodes tumor (PT), and tubular adenoma (TA); and four malignant tumors (breast cancer): ductal carcinoma (DC), lobular carcinoma (LC), mucinous carcinoma (MC), and papillary carcinoma (PC) [32]”.
Most of the CAD systems are modeled using the 400× dataset. Table 2 portrays the dataset distribution by magnification for the BreakHis dataset.

3.2. The Kaggle Breast Cancer

The Kaggle histopathology images dataset is a commonly sourced dataset for breast cancer research consisting of benign and malignant invasive ductal carcinoma (IDC) cases. The dataset comprises 162 whole-mount slide images of breast cancer samples at 40× magnification. A total of 277,524 patches, each 50 × 50 pixels in size, were sectioned out of the whole-mount slide images. Of these, 198,738 patches test negative for IDC and 78,786 test positive (Table 3). The images in this dataset are each associated with a patient ID and a label assigned by pathologists that indicates whether the patient was positive or negative for IDC [33].

3.3. The ICIAR 2018 Grand Challenge on Breast Cancer Histology Images (BACH) Dataset

The BACH dataset is widely used in breast cancer research and was organized to promote methods for automatically classifying breast cancer biopsies [34]. The database comprises 400 labeled H&E-stained breast histology microscopy images, ten pixel-wise-labeled whole-slide images, and 30 non-labeled whole-slide images. Expert pathologists from the Institute of Molecular Pathology and Immunology of the University of Porto and the Institute for Research and Innovation in Health annotated the microscopy images. Whole-slide images were annotated by a pathologist and revised by a second expert [34]. As shown in Table 4, the microscopy images are classified as follows: 100 normal, 100 benign, 100 in situ carcinomas, and 100 invasive carcinomas [35].

3.4. The TUPAC16 Dataset

The TUPAC16 set consists of 821 whole-slide images from the Cancer Genome Atlas (TCGA) network (Table 5). The images are randomly separated into 500 for training and 321 for testing. Two types of tumor proliferation data are available for the images, including a mitotic score involving a manual count of mitosis occurrences performed by a pathologist and a PAM50 proliferation score based on molecular data [36,37].

3.5. The MITOS-ATYPIA-14 Dataset

The MITOS-ATYPIA-14 set was constructed for mitosis detection (mitotic count) and the evaluation of nuclear atypia (nuclear pleomorphism), which are essential parameters for diagnosing breast cancer [38]. The set of biopsy slides for this dataset is stained with H&E and was annotated by Frédérique Capron, head of the Pathology Department at Pitié-Salpêtrière Hospital in Paris, France. Several regions at 20× magnification were selected within the slides and used for scoring nuclear atypia, where scores of 1, 2, and 3 denote low-, moderate-, and high-grade atypia. The 20× regions were then divided into four frames at 40× magnification and used to annotate the mitotic figures to arrive at a mitotic count for the image. The dataset consists of 284 frames at 20× magnification and 1136 frames at 40× magnification (Table 6) [38]. The frames were acquired with two scanners, the Aperio ScanScope XT and the Hamamatsu NanoZoomer 2.0-HT, and the frame dimensions for each scanner are provided in the dataset.

3.6. The ICPR 2012 Dataset

The ICPR 2012 dataset was provided by Professor Frédérique Capron’s team in the pathology department at Pitié-Salpêtrière Hospital in Paris, France. Five slides of breast cancer were stained with H&E and scanned using three different pieces of equipment: the Aperio ScanScope XT slide scanner (ASXT), the Hamamatsu NanoZoomer 2.0-HT slide scanner (HNZ), and a ten-band multispectral microscope (MSM) [39]. The mitotic figures in the images were annotated manually by a pathologist. Ten high-power fields (HPFs) were taken from each of the five slides scanned at 40× magnification, making up the 50 HPFs comprising the dataset. The 50 HPFs contain a total of 326 mitotic cells for the two slide scanners and 322 mitotic cells for the multispectral microscope (Table 7) [39].

4. Preprocessing Methods

The preprocessing stage is considered one of the essential stages of a CAD system after image acquisition. Raw images may not adequately portray the specific features of interest to the research. Therefore, one of the goals of the preprocessing stage is to make the region of interest more suitable for analysis. Normalization, data augmentation, digital filters, and histogram equalization are commonly used preprocessing techniques.

4.1. Normalization

Normalization techniques play a significant role in preprocessing as they adjust image attributes. The normalization techniques include stain color normalization, global contrast normalization, and illuminant normalization [40,41,42,43,44,45,46,47,48]. When dealing with H&E-stained images in particular, variability in the appearance of the images can affect the algorithms’ performance. These irregularities can stem from the tissue preparation and staining processes used by different labs, including but not limited to the antigen concentration, incubation time and temperature, and slide digitization conditions such as differences in the optics or light detectors used in the scanners [5,49]. Kashyap et al. utilized stain normalization to deal with the variable appearance of H&E images that exhibited the same malignancy level. The process improved the contrast and brightness of the images using a contrast-limited adaptive histogram equalization method without compromising any of the information in the image [40]. Figure 4 shows images with various stain colors and illuminations.
Noumah et al. adapted the Vahadane method as a preprocessing stage to solve the stain variability issue with the BreakHis dataset. The technique was advantageous because it allowed for the transformation of one image into another while preserving the color values of the original image.
Furthermore, it preserved biological structure information by modeling stain density maps based on non-negativity, sparsity, and soft classification [41]. Vo et al. used a logarithmic transformation to compute each image’s optical density, followed by singular value decomposition (SVD) on the optical density image to estimate the relevant degrees of freedom and construct a 2D projection matrix. This method transformed images into a common space and reduced inconsistencies [45]. Kausar et al. proposed a stain color normalization technique that condensed the stain variations using stain vectors and concentration maps. Stain normalization and color deconvolution were applied to the target, training, and testing images. Using the averages of the target stain vector and concentration map, they constructed a normalization function, allowing the color distribution of the training and testing images to be mapped onto the target image [46].
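To make the optical-density step concrete, the sketch below converts an 8-bit RGB patch to optical density and estimates a 2D stain basis with SVD. It is a minimal sketch of the general technique rather than the implementation of any reviewed work; the threshold value and all function and variable names are our assumptions.

```python
import numpy as np

def optical_density(rgb_image, background=255.0, eps=1.0):
    """Convert an 8-bit RGB image to optical density (Beer-Lambert law)."""
    rgb = rgb_image.astype(np.float64)
    return -np.log10((rgb + eps) / (background + eps))  # eps avoids log(0)

def stain_plane(od_pixels, od_threshold=0.15):
    """Estimate a 2D stain projection basis from the OD pixels via SVD."""
    # Keep only pixels with meaningful stain (discard near-background pixels).
    stained = od_pixels[np.all(od_pixels > od_threshold, axis=1)]
    # The two leading right-singular vectors span the plane that captures
    # most of the H&E stain variation.
    _, _, vt = np.linalg.svd(stained, full_matrices=False)
    return vt[:2]  # 2 x 3 projection matrix

# Usage: project every pixel of an image into the common 2D stain space.
image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)  # placeholder patch
od = optical_density(image).reshape(-1, 3)
basis = stain_plane(od)
projected = od @ basis.T  # (n_pixels, 2) coordinates in the stain plane
```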

4.2. Data Augmentation

Data augmentation is used to increase the size of image datasets as machine learning (ML) algorithms require large datasets for training [40,41,42,43,44,45,46,48,50,51,52]. Some of the more commonly used methods for data augmentation involve image transformations and color modifications. Image transformations can include rotations, reflections, scaling, and shearing. Color modifications include but are not limited to histogram equalization, enhancing contrast or brightness, white balancing, sharpening, and blurring [53].
A recent study used data augmentation to increase the number of images. To overcome overfitting, the authors integrated the BreakHis and BreCaHAD datasets and performed data augmentation using flipping, rotating, shifting, resizing, and gamma correction to obtain a more robust dataset. Scaling factors of 0.5×, 0.8×, and 1.2× were applied to each image. Horizontal and vertical transformations generated images with 40-, 80-, 120-, and 180-degree rotations. After applying 19 parameters to the 7909 sample images, the number of images was increased to 153,349 in total. The study reported the evaluation metrics for the original and augmented datasets. The accuracy achieved using the augmented datasets was reported to be about 5% and 3% higher than with the original BreakHis and BreCaHAD datasets, respectively [40].
Noumah et al. implemented data augmentation methods to expand the training set size by using random zoom augmentation with a factor of 2, random rotation augmentation with a value of 90°, and horizontal and vertical flip augmentation [41]. Boumaraf et al. classified histopathological breast cancer images via magnification-dependent and magnification-independent approaches in 2021. Using a three-fold data augmentation method, the training set was artificially tripled in size by employing three random transformations: a random horizontal flip, a random vertical flip, and a random rotation of 40 degrees [42].
Kate and Shukla introduced a novel method for the automatic classification of histopathological images of breast cancer using the deep learning model ImageNet. The scarcity of data was overcome by implementing different geometric transformations to train this deep learning network properly. The size of the training set was tripled by using random transformations such as random vertical flips, random horizontal flips, and random rotations [43].
Hameed et al. classified breast cancer histopathology images using an ensemble of deep-learning models. Batches of tensor image data were generated using the ImageDataGenerator class provided by the Keras deep learning library, which implements real-time data augmentation. Images fed to the generator were transformed by random translations and rotations. The random rotation is specified by a rotation range of [−40, 40] degrees. A width and height shift was also implemented, where the image was shifted up or down and left or right. If a transformation caused pixels from the original image to fall out of frame, a ‘reflect’ mode was used to fill the empty pixels [44].
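A minimal sketch of such a Keras augmentation pipeline is shown below, using the rotation range, width/height shifts, and ‘reflect’ fill mode described above; the shift fractions, directory path, target size, and batch size are illustrative assumptions.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Real-time augmentation generator mirroring the transformations described above.
datagen = ImageDataGenerator(
    rotation_range=40,        # random rotations in [-40, 40] degrees
    width_shift_range=0.2,    # random horizontal shifts (fraction of width)
    height_shift_range=0.2,   # random vertical shifts (fraction of height)
    fill_mode="reflect",      # fill pixels pushed out of frame by reflection
    rescale=1.0 / 255.0,      # normalize pixel intensities
)

# Stream augmented batches from a directory of labeled histopathology patches.
train_batches = datagen.flow_from_directory(
    "data/train",             # hypothetical path: one subfolder per class
    target_size=(224, 224),
    batch_size=32,
    class_mode="binary",      # benign vs. malignant
)
```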
Vo et al. increased the amount of training data by implementing data augmentation techniques. Geometric augmentations, including reflections, random cropping, rotations, and translations, were among the changes made to the existing images [45]. Kausar et al. implemented data augmentation techniques to increase 500 H&E-stained images to 16,575 images. Morphology and color invariances were achieved by rotation, scaling, elastic deformation, and channel color modification techniques [46]. Rakhlin et al. performed 50 random color augmentations on each image and downscaled the images by half, to 1024 × 768 pixels from the original size. The downscaled images were then cropped to 400 × 400 and 650 × 650 pixels [48]. Romano et al. augmented images by using a random rotation range of 0 to 20 degrees along with width and height shifts of up to a fraction of 0.20 of the total width or height of the image. The alterations also included random horizontal and vertical flips [50]. Chang et al. applied augmentation techniques, including rotating the images by 90, 180, and 270 degrees and mirroring and randomly distorting images. The original dataset of 1398 images was augmented to 11,184 images [51].
Yari et al. applied deep learning techniques to arrive at a diagnosis for breast cancer. By implementing data augmentation techniques, they were able to boost the CAD system’s performance. This was achieved by first resizing the images to 224 × 224 pixels, randomly flipping some horizontally, and randomly rotating and cropping some images. Color jitter was also used to change the tone of the original color based on hue, saturation, and value [52].

4.3. Digital Filters

Digital filters are designed to reduce or remove noise and artifacts in an image [54,55,56]. Hirra et al. classified histopathological images using patch-based deep-learning modeling. Using a Gaussian filter with a fixed kernel size, they could control the smoothness of the images and reduce the weight of blurred pixels [54]. Vaka et al. detected cancer by leveraging machine learning; as part of the preprocessing phase, a Gaussian filter was used for noise removal [55]. A study out of Jalpaiguri Government Engineering College in India used a deep residual neural network to detect breast cancer in histopathology images. The Gaussian blur algorithm was used to denoise low-resolution images and reduce the regions specifically affected by noise [56].
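As an illustration, a Gaussian filter of the kind used in these pipelines is a single OpenCV call; the kernel size and file path below are illustrative assumptions.

```python
import cv2

# Hypothetical denoising step: a Gaussian blur with a fixed kernel size,
# as used in several reviewed preprocessing pipelines.
image = cv2.imread("patch.png")                       # placeholder path
denoised = cv2.GaussianBlur(image, (5, 5), sigmaX=0)  # 5x5 kernel; sigma derived from kernel size
```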

4.4. Histogram Equalization

Histogram equalization and logarithmic transformation are also widely used preprocessing techniques [45,46,57,58]. Narayanan et al. used a convolutional neural network to classify histopathology images. Histogram equalization was used to adjust each image’s contrast based on its histogram and served as a means to study the algorithm’s performance. In addition to histogram equalization, a color constancy method was applied to the images input to the convolutional layers [57]. Vo et al. used a logarithmic transformation to compute the optical density of each histology image as part of their stain normalization method [45]. Kausar et al. used a wavelet transform to decompose the images into a set of frames that relayed important information about the spatial and frequency characteristics of the images [46]. Jiang et al. applied histogram equalization to the images after performing a color space transform. A logarithmic transformation was also used to convert the images’ colors to optical density [58].
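The sketch below illustrates both operations, applying histogram equalization on the luminance channel (so the H&E hues are preserved) and a logarithmic transformation to optical density; the file path and channel choices are our assumptions.

```python
import cv2
import numpy as np

bgr = cv2.imread("patch.png")  # placeholder path

# Contrast adjustment via histogram equalization on the luminance channel,
# so the hue information from the H&E stain is preserved.
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

# Logarithmic transformation from color intensities to optical density.
od = -np.log10((bgr.astype(np.float64) + 1.0) / 256.0)
```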

5. Segmentation Methods

Image segmentation is a technique for dividing a digital image into segments, which can simplify further processing or analysis of the image. It involves assigning labels to pixels to identify objects, people, or other elements. It is commonly used in object detection, where an algorithm finds objects of interest in an image. The object detector then operates on a bounding box defined by the segmentation algorithm, improving accuracy and reducing inference time. Image segmentation is a key building block of computer vision technologies and algorithms. It is used for many practical applications, including medical image analysis, autonomous vehicles, face recognition, video surveillance, and satellite image analysis [59]. In breast cancer research, segmentation plays an important role, especially when segmenting the nuclei, because the extracted features can indicate whether the cells in a histopathology image are undergoing mitosis. However, segmentation of histopathological images is a challenging task because of the varying characteristics of the images, including the magnification factor, resolution, and image quality.
Methods applied in this field for segmentation include but are not limited to adversarial learning, K-means clustering, deep convolutional networks, wavelet decomposition, and fuzzy C-means [34,55,60,61,62,63,64,65,66].
Lin et al. used adversarial learning with data selection for segmenting breast cancer in histopathological images. One segmentor and two discriminators comprise the adversarial learning framework. The segmentor generates segmentation outputs for the source and target domains, while the discriminators distinguish whether the outputs come from the source domain or the target domain. The Deeplab_V2 structure was used as the segmentation network, with ResNet101 as its basis. The atrous spatial pyramid pooling (ASPP) module was used to encode multiscale information in the feature maps, in conjunction with an up-sampling layer with softmax output responsible for up-sampling the output to the input dimensions. The segmentation network is optimized with the segmentor and discriminators trained simultaneously [60].
Li et al. focused their research on the classification of breast histopathology images with a ductal instance-oriented pipeline, which consisted of a duct-level instance segmentation model, a tissue-level semantic segmentation model, and three levels of features for diagnostic classification. The process for this segmentation begins by feeding the input ROI to the duct-level and tissue-level segmentation modules to produce instances of both the duct-level and tissue-level segmentation masks. For the instance segmentation network, after the ROI has produced the duct candidates, they are classified based on whether or not they are ducts, and a bounding box is also constructed along with a pixel-wise mask of the duct. An off-the-shelf segmentation network was applied for semantic segmentation that splits the input image into non-overlapping regions. It can predict a segmentation mask for the different regions using a resolution encoder-decoder structure [61].
Tan et al. proposed an automated framework that quantifies tumor regions using a spatial neighborhood intensity constraint (SNIC) clustering approach and fuzzy C-means (FCM). As part of the clustering stage, the FCM centroids are generated based on domain knowledge using knowledge-based initial centroid selection. This process reduces the search space and the limitations of conventional FCM, such as dead centers and center redundancy. The function of the SNIC is to eliminate the nucleus cells from the image while preserving the information and reducing the fuzziness of the image. The K-means clustering algorithm uses the cyan channel to segment the nucleus cells. The result is then used as a mask to remove the nucleus-cell pixels in the RGB images by setting the RGB intensity values of each pixel corresponding to a nucleus to 0 [62].
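The sketch below illustrates the general idea of K-means nucleus masking on a cyan channel, loosely following the step described above; the cluster count, the RGB-to-cyan conversion, and all names are our assumptions rather than the authors’ implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def remove_nuclei_kmeans(rgb_image, n_clusters=2):
    """Cluster cyan-channel intensities and zero out the nucleus pixels."""
    rgb = rgb_image.astype(np.float64) / 255.0
    cyan = 1.0 - rgb[:, :, 0]  # cyan channel of a simple RGB-to-CMY conversion

    # Cluster pixel intensities; the hematoxylin-rich (nucleus) cluster
    # has the highest mean cyan value.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        cyan.reshape(-1, 1)
    ).reshape(cyan.shape)
    nucleus_cluster = np.argmax(
        [cyan[labels == k].mean() for k in range(n_clusters)]
    )
    mask = labels == nucleus_cluster

    # Set the RGB intensities of nucleus pixels to 0, as in the framework above.
    out = rgb_image.copy()
    out[mask] = 0
    return out, mask
```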
Sebai et al. employed partially supervised semantic segmentation for mitosis detection by using two-stream fully convolutional networks consisting of a large, weakly annotated mitosis dataset and a small, fully labeled mitosis dataset. The score maps of the two FCNs were fused to obtain more accurate mitosis detection. The fusion was followed by integrating an easy-to-train weight transfer function that allowed for the transfer of semantic knowledge from the segmentation branch trained with weak labels to another semantic segmentation branch trained with strong labels [63].
Priego-Torres et al. used a deep convolutional network to segment H&E-stained histopathology images automatically. Their method involved processing the whole-slide images into various patches and applying a deep convolutional neural network with an encoder-decoder with a separable atrous convolution architecture to the image patches. A fully connected conditional random field is then used to combine the local segmentation tiles while avoiding discontinuities and inconsistencies [64]. Vaka et al. proposed leveraging machine learning to aid in breast cancer detection. Their methodology produced better-quality images using a new deep neural network with a support value method. After removing the noise and extracting features from the preprocessed images, the breast tumors are segmented using histo-sigmoid-based fuzzy clustering [55].
Belsare et al. implemented a spatial-color-texture-based graph partition method to segment histopathology images. The spatial-color-based superpixel image representation is generated using a distance-based similarity function; then, the histology image and breast duct are partitioned using a texture classifier. Finally, the final segmented image is obtained using a graph portioning method in computer vision [65]. Wang et al. built a system to segment and classify nuclei in breast cancer histopathology images automatically. The CADx system initially performed a bottom-hat transform on the grayscale image to enhance the contrast between the cell nuclei and the background. The image’s ROIs are obtained using wavelet decomposition and multiscale region growth. Applying adaptive mathematical morphology and curvature scale space as part of a double strategy splitting model allows overlapping cells to be split for better accuracy and robustness [66].
Histogram- and thresholding-based methods are other popular segmentation approaches for histopathology images. Kaushal and Singla [67] computed the energy curve to obtain tending thresholds and then evaluated the entropies for each tending threshold to find the best one. The segmented regions were processed through morphological operations as a post-processing step. They reported the advantages of their work as incorporating spatial information, requiring no prior setting of any initial parameter, magnification independence, and automatic determination of the inputs for morphological operations. A recent study developed a segmentation method to precisely extract the morphological characteristics of lymphocytes [68]. It differs from other studies by using a single network, called the dense dual-task network (DDTNet), for both detecting and segmenting lymphocytes. It reported performance comparable to state-of-the-art methodologies; for instance, DDTNet outperformed some well-known networks, including U-Net and HoVer-Net. The study reported a limitation in that the detection and segmentation methods were bound to make the same errors as the traditional models, and the robustness of the model has not yet been generalized since the work was evaluated on small datasets.
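A minimal sketch of threshold-plus-morphology segmentation follows; Otsu’s method stands in for the energy-curve/entropy threshold selection described above, and the kernel size and file path are illustrative assumptions.

```python
import cv2

# Threshold-based nucleus segmentation with morphological post-processing.
gray = cv2.cvtColor(cv2.imread("patch.png"), cv2.COLOR_BGR2GRAY)  # placeholder path
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Morphological opening removes small speckle; closing fills holes in nuclei.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
segmented = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```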
Wahab et al. [69] employed an off-the-shelf, pre-trained deep CNN for the segmentation of mitoses. They used skip connections and demonstrated their effectiveness in fully convolutional networks for mitosis segmentation. Transfer learning-based mitosis segmentation (TL-Mit-Seg) was applied to the preprocessed images; stain normalization, annotation, and cropping were applied to the raw images. To address the class imbalance, transfer learning produced a ratio of 1:12 on the validation set. The work did not use undersampling to solve the dataset imbalance problem since it might cause data loss. Skip connections were used in the residual learning, serving two purposes: reducing the effects of vanishing gradients and improving the spatial resolution of the segmented image.

6. Feature Engineering Methods

The feature engineering process is an integral part of CADx system design. In breast cancer histopathology research, features are engineered with careful consideration of what distinguishes a cancer cell from a normal cell.
Rehman et al. used three different feature vector sets to distinguish between classes. The first feature vector set has 87 features and was useful in discriminating malignant cases based on pattern differences. Information can be collected about the overall pattern by focusing on the texture of the whole patch. The second feature vector set carries a total of 28 features. This feature set focused on determining the development stage of the nucleus, specifically by examining the circular shape around the nucleus and noting whether it is regular or irregular. The third feature vector set had three features and focused on statistical features that could represent the pattern variation in each patch. Different cells in the image belonging to another class will exhibit variations in the pattern histograms. The mean value, peak, and variance can be extracted from the different histograms [70].
Kashyap et al. proposed a multiscale stochastic dilated convolution model capable of enhancing small and low-level features like edge, contour, and color. They could also remove redundant and similar features in the model that made the process more complex by using a series of linear operations on each intrinsic feature to generate ghost features [40].
The authors in [71] used parallel ‘same’ and ‘valid’ convolutional blocks (PSV-CB) to combine two forms of feature coding. One operational flow is made up of several ‘same’ convolutions followed by strided max-pooling, which is known as hard feature coding. The other operational flow uses step-by-step ‘valid’ convolutions that perform feature extraction and downsampling concurrently, known as soft feature coding. Using the feature maps obtained from these operational flows, they could highlight pertinent content in the images.
Karthiga et al. applied a deep convolutional neural network for feature extraction in the initial stages of their methodology. Balancing the training data and the training iterations contributed to the overall classification accuracy. By supplying the deep learning model with a large dataset, they circumvented the alternative of using conventional machine learning techniques in conjunction with handcrafted features, which results in lower classification accuracy [72].
Li et al. used a ductal instance-oriented pipeline to classify breast histopathology images using three levels of pixel-wise features. Their work used a combination of histogram features, co-occurrence features, and structural features to extract features from tissue-level segmentation masks. The histogram features express the distribution of tissues in the image, co-occurrence features can encode spatial relationships, and structure features can extract frequencies from layers inside and outside of the duct instance and capture changes in the structure’s shape [61].
Hirra et al. classified histopathological images using a patch-based deep learning model. Features extracted for this study were done through an unsupervised method using feature vectors made up of features from the histopathology image patches. The features are learned automatically by creating image patches of the same size. The supervised portion of their method involves a learning phase that interprets the extracted feature matrix using a backpropagation neural network [54].
Labrada and Barkana [73] developed a feature set to adequately represent the characteristics of the nuclei present in histopathology images by extracting geometrical, directional, and intensity-based features. Thirty-three features were extracted from each segmented region within the images. The geometrical set consisted of six features: area, perimeter, roundness, area-to-perimeter ratio, the ratio of the segmented region to the area of the fitting rectangle (AR_ratio), and the number of cells segmented in the image. The directional set used spatial distances in the segmented regions, measuring from the ROI’s center to the cell’s outstanding borders. A pixel count was performed to trace the eight cardinal directions (north, south, east, west, northeast, northwest, southeast, and southwest) to map the cell’s shape for analytical purposes. The center of the ROI was determined by enclosing the region in a bounding box and taking the intersection point of the midpoints of its length and width. Their algorithm then calculated a mean, standard deviation, and range for each given direction while considering all of the ROIs in the particular image, totaling 24 features. The AR_ratio feature from the geometrical set addresses a specific concern from the directional set: it accounts for certain areas of an ROI that may not be appropriately mapped by the directional mapping. The intensity-based set was composed of three features describing the brightness of the ROIs, obtained by calculating the mean, standard deviation, and range of the ROI pixel values.
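In the same spirit, the sketch below computes a few geometric and intensity-based nucleus features with scikit-image; the formulas are common definitions, not the authors’ exact implementation, and all names are ours.

```python
import numpy as np
from skimage.measure import label, regionprops

def geometric_intensity_features(mask, gray):
    """Compute per-region geometric and intensity features from a binary
    nucleus mask and the corresponding grayscale image."""
    regions = regionprops(label(mask), intensity_image=gray)
    feats = []
    for r in regions:
        area, perim = r.area, r.perimeter
        minr, minc, maxr, maxc = r.bbox
        feats.append({
            "area": area,
            "perimeter": perim,
            "roundness": 4.0 * np.pi * area / (perim ** 2 + 1e-9),
            "area_perimeter_ratio": area / (perim + 1e-9),
            "ar_ratio": area / ((maxr - minr) * (maxc - minc)),  # region vs. fitting rectangle
            "mean_intensity": r.mean_intensity,
            "intensity_range": float(r.max_intensity - r.min_intensity),
        })
    # Image-level feature: the number of segmented cells.
    return feats, len(regions)
```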
Wang et al. designed a classification system for breast cancer histopathology images based on deep feature fusion and enhanced routing. Their network consisted of two parallel channels that could extract convolution and capsule features simultaneously. The features were fused through a fusion method to combine into more discriminative features. Semantic features extracted by CNN and spatial features extracted by CapsNet are fused [74].
Kate and Shukla used the ResNet-18 neural network pre-trained on ImageNet to perform intrinsic feature learning [43]. Vaka et al. implemented phylogenetic diversity in their work, which is often used to identify the distribution of a group of species and the relationships between species. Using this, the five features they defined were the sum of the phylogenetic branch lengths of each species, the sum of phylogenetic distances, the mean nearest-neighbor distance, phylogenetic species variability, and phylogenetic richness [55].
Vo et al. classified breast cancer histopathology images using discriminative features trained by an ensemble of DCNNs. By implementing an ensemble, they increased the prediction accuracy rate. Multiscale input images were applied to the ensemble network and passed through at least one CNN. The ensemble network expands the receptive field of the original image, covers global features, and can extract multiscale local features [45].
Kausar et al. extracted in-depth features from Haar wavelet-decomposed images and used multiscale discriminative features to classify multiclass breast histopathology images. Using a feature concatenation strategy, they built a deep CNN model incorporating multiscale convolution features [46]. Rakhlin et al. used LightGBM, a highly efficient gradient-boosting decision tree, for supervised classification. Two-class and four-class classifications were performed for normal and benign non-cancerous cases versus in situ and invasive cancerous cases [48]. Wang et al. extracted shape features, including area, perimeter, eccentricity, roundness, and circularity. Statistical values, including the mean, standard deviation, relative smoothness, skewness of the histogram, uniformity, and entropy, were obtained as textural features to analyze the spatial distribution of gray values [66].

7. Classification/Detection/Diagnosis Algorithms

The classifier is a determining step in CADe and CADx systems’ algorithmic processes. After utilizing all the information acquired in the feature engineering process, classification algorithms can be trained for diagnosis and detection. Figure 5 illustrates the classifier approaches between 2010 and 2022.
The work in [70] implemented support vector machines, random forests, and Naïve Bayes classifiers as part of its classification architecture. Majority voting was applied to the three classification outputs after obtaining individual classifications from the classifiers to produce the most accurate output. Their system performed well and adequately identified the occurrence of mitosis throughout its four stages. It had an accuracy of 86.38% for detecting mitotic cells in the MITOS-ATYPIA-14 dataset.
Noumah et al. used color-stained images to develop an architecture that consisted of three pre-trained deep convolutional neural networks (DCNNs) working in parallel. The output of each branch was passed to a global average pooling layer, and the outputs of the three layers were then concatenated into one layer with 4640 neurons. Finally, dense layers were used to map the 4640 neurons to two classes, either benign or malignant. Overall, their suggested model achieved an accuracy of 98% in determining the nature of a tumor [41].
Lin et al. proposed a framework comprising three stages: adversarial learning for domain adaptation, target domain data selection based on entropy, and model refinement with selected data and pseudo-labels. The atrous spatial pyramid pooling (ASPP) module was used to encode multiscale information into feature maps; this is directly followed by an upsampling layer with softmax output, which then upsamples the output dimensions to the input dimensions [60].
Jiang et al. implemented their classification task using an input-collaborative PSV-ConvNet that performs end-to-end classification without image color normalization or domain knowledge [71]. Yari et al. focused on binary and multi-classification approaches that could discern malignant and benign cases and different breast cancer types in the images. Their proposed model worked with magnification-dependent and magnification-independent classification methods and used ResNet50 transfer learning to compensate for the low volume of the BreakHis dataset, which was not large enough for proper training. Using ResNet50 decreased the training error when implementing a standard optimization algorithm to train the network [75].
Karthiga et al. used the fine-tuned pre-trained models AlexNet and VGG-16 to achieve better classification performance. DCNN and transfer learning methods were also implemented for binary and multiclass classification. For the CNN, an architecture of 15 deep layers was used with learning parameters to implement the design [72]. Li et al. used various classifiers in their classification architecture, including a random forest model, a 3-degree polynomial SVM, an SVM with a radial basis function kernel, and a multilayer perceptron with four hidden layers. In binary classification, if the number of features was greater than the number of ROIs in a given task, principal component analysis (PCA) was performed to reduce the number of features to 20 dimensions. For multiclass scenarios, a U-Net extension with a separate branch for diagnostic classification was used [61].
Hirra et al. used fine-tuning as the second stage of deep belief network learning. During this portion of the learning, the model is assigned class labels. Then, they developed a model formed by the feature matrix of images from their design’s training portion to classify cancerous and non-cancerous regions. Logistic regression was used to classify the patches identified in the histopathology images [54].
Enhanced routing was used in [74] to assist in classification by optimizing routing coefficients indirectly and adaptively by modifying the loss function and embedding the routing process into the training process to learn routing coefficients.
Vaka et al. used SVM, random forest, multilayer perceptron (MLP), a type of deep artificial neural network, and eXtreme Gradient Boost (Xgboost), which is a library based on the gradient increase framework and can be used for regression and sorting [55].
Labrada and Barkana used four machine-learning algorithms to classify histopathology images from the BreakHis dataset, including decision trees, SVM, K-nearest neighbors, and narrow neural networks, in conjunction with PCA to reduce the dimensionality of the dataset. The different feature sets were tested with each classifier, and their performance was recorded. The feature sets were also tested with each classifier as an entire group to gauge the performance of all feature sets working together. The most favorable result was obtained using all 33 features of the combined feature sets and a narrow neural network (NNN), which achieved an accuracy of 96.9% [73].
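A hedged sketch of such an evaluation loop with scikit-learn follows; the feature matrix X and label vector y are assumed to exist, the narrow neural network is approximated by a single-hidden-layer MLP, and the PCA variance threshold and split ratio are illustrative choices.

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Each classifier is trained on the feature vectors (X) and benign/malignant
# labels (y), preceded by standardization and PCA.
classifiers = {
    "decision_tree": DecisionTreeClassifier(),
    "svm": SVC(),
    "knn": KNeighborsClassifier(),
    "narrow_nn": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000),
}

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), PCA(n_components=0.95), clf)
    model.fit(X_train, y_train)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.3f}")
```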
Yang et al. used a guided soft attention network to classify breast cancer histopathology images. A multi-task learning framework was implemented to design a CNN that could learn the ROI mask from the global image and guide the focus of the classification network [76].
Vo et al. extracted multiscale breast cell features and used them to train gradient-boosting tree classifiers. Combining the boosting tree classifiers with a DCNN achieved favorable classification results; a model combining majority voting and gradient-boosting trees achieved higher accuracy and sensitivity scores [45].

8. Performance Evaluation Metrics

Performance metrics are used to assess and validate the developed CAD systems. The histopathology datasets provide ground-truth labels for the benign or malignant tissues in the images. These labels make it possible to calculate true positive (TP), true negative (TN), false positive (FP), and false negative (FN) counts, from which the commonly used accuracy (Acc), sensitivity (Se), specificity (Sp), receiver operating characteristic (ROC) curve, area under the curve (AUC), and F1 score can be determined. Although these metrics are well known, we find it proper to present the calculation formula for each of them here.
  • TP represents the image correctly classified as malignant,
  • TN represents the image correctly classified as benign,
  • FP represents the image falsely classified as malignant, and
  • FN represents the image falsely classified as benign.
$$\mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN}$$

$$\mathrm{Se} = \mathrm{Recall} = \frac{TP}{TP + FN}$$

$$\mathrm{Sp} = \frac{TN}{TN + FP}$$

$$\mathrm{Precision} = \frac{TP}{TP + FP}$$

$$F1\ \mathrm{score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
Improving the accuracy of a system without negatively impacting its precision and sensitivity is a challenge. A higher sensitivity value means more true positives and fewer false negatives. Sensitivity, also called recall, measures the CAD system’s capability to detect positive instances, while specificity measures the proportion of true negatives correctly detected. Higher specificity means that the system correctly identifies more true negatives. Balancing sensitivity and specificity is important when choosing a classifier model, as both cannot be optimized simultaneously. Sensitivity is more affected by imbalanced datasets than specificity since it is based on the occurrence of the positive class, whereas specificity is based on the occurrence of the negative class.
The F1 score shows the harmonic mean of the precision and recall of a system. Similar to accuracy and other metrics, we must be careful while interpreting the F1 score because it may be high due to imbalanced precision and recall. Applications focusing on detecting all true positives at the expense of producing more false positives can use the F2 measure. In breast cancer detection and diagnosis applications, increasing false positives is not preferred since it will lead to detrimental medical procedures and treatments.
The ROC curve plots the recall versus the false-positive rate with a classification threshold value. A false-positive rate is calculated as
$$\mathrm{FPR} = 1 - \mathrm{Specificity} = \frac{FP}{TN + FP}$$
A ROC curve visualizes the performance of the CAD system. The AUC value measures the area under the ROC curve, which is between 0 and 1, with 1 representing a perfect classifier model.
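For completeness, the formulas above translate directly into code; the confusion-matrix counts in the example call are hypothetical.

```python
def cad_metrics(tp, tn, fp, fn):
    """Compute the review's evaluation metrics from confusion-matrix counts;
    a straightforward transcription of the formulas above."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn)            # sensitivity / recall
    sp = tn / (tn + fp)            # specificity
    precision = tp / (tp + fp)
    f1 = 2 * precision * se / (precision + se)
    fpr = fp / (tn + fp)           # 1 - specificity
    return {"Acc": acc, "Se": se, "Sp": sp,
            "Precision": precision, "F1": f1, "FPR": fpr}

# Example with hypothetical counts from a benign/malignant classifier.
print(cad_metrics(tp=90, tn=85, fp=15, fn=10))
```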

9. Discussions and Conclusions

Research in computer-aided detection and diagnosis systems using histopathology images has been trending over the last two decades. Figure 5 shows the percentage of detection/diagnosis methodologies used over the previous twelve years. The most commonly used methods are transfer learning, CNN/DCNN, and SVM. The trend of designing and implementing deep learning in all aspects of life created a shift away from knowledge-based systems. Deep learning methods are replacing knowledge-based approaches for two main reasons: advancements in computing technologies allow researchers to train networks in acceptable time frames, and the increase in public-domain databases makes it possible to employ supervised algorithms. Table 8 summarizes the reviewed works for histopathology images from 2010 to 2022 regarding preprocessing, segmentation, feature extraction, and classification methods.
This review summarized the CAD systems using breast histopathology images regarding datasets, preprocessing, segmentation, feature engineering, classification methods, and performance metrics between 2010 and 2022. The preprocessing stage mainly consisted of data augmentation to increase the size of the dataset and prevent overfitting during network training. Image transformations included rotations, reflections, scaling, and shearing. Color modifications were also made in the preprocessing stage due to variations in staining and acquisition methods. Segmentation is a significant stage for analyzing the region of interest (ROI), extracting distinct features, and characterizing and labeling the ROIs. Deep learning became popular in nucleus segmentation and detection. The popular segmentation methods were adversarial learning, K-means clustering, deep convolutional networks, wavelet decomposition, and fuzzy C-means algorithms. Feature engineering is an essential part of a CAD system, with features either hand-crafted by a knowledge-based system or automatically extracted by a deep network. Hand-crafted features were mainly based on morphology, color, and texture information. Only about 5% of the classifiers were unsupervised methods, including fuzzy logic. The remaining approaches were supervised methods, with transfer learning, CNNs, and SVMs being the popular choices. ResNet-18, ResNet-50, Inception V3, VGG-16, VGG-19, and AlexNet were used to improve classification performance. Binary classification was studied more than multiclass classification.
Collecting medical information is challenging because of health information privacy and ethical considerations, and it requires immense time and effort. Therefore, it is difficult to establish balanced datasets. Current breast cancer histopathology image datasets vary in size, resolution, and image quality. Consequently, most studies employ augmentation methods to balance the datasets. Random zooming, cropping, and horizontal and vertical flips were performed to increase the database size or to balance unbalanced datasets. Because the artificially generated images depend on the dataset’s existing images, augmentation may lead to overfitting in deep learning models. One way to prevent overfitting is to use the artificially generated images only in the training stage. Another is to forgo augmentation methods altogether and use transfer learning models instead. A recent study by Rana and Bhushan reported the results of a transfer learning model without using any augmentation methods [66]. They used seven transfer learning models, including LeNet, VGG16, DarkNet53, DarkNet19, ResNet50, Inception, and Xception, on the BreakHis dataset. The best performance was achieved by Xception at 83.07%. The same work proposed a parameter for unbalanced datasets and achieved an accuracy of 87.17% with the DarkNet53 model.
We observed a significant decrease in research developing hand-crafted feature extraction techniques requiring expert-domain knowledge and deformable segmentation methods. At the same time, the number of deep learning-based models increased significantly. The advances in artificial intelligence and machine learning techniques will continue to attract researchers to design deep learning-based CAD systems. Image transformers have become an attractive approach to computer vision in recent years and may be the next tool applied to histopathology image analysis. Deep learning models do not require content knowledge, expert input, or feedback other than the datasets labeled by experts. After the training stage, deep learning automatically extracts features to characterize the ROIs; however, it is a black-box approach, and it is unclear how those features are calculated and what they represent. Thus, it is important to pay extra attention while developing and using deep learning approaches, especially in healthcare applications. The fusion of expert knowledge and deep learning can be a solution to improve the confidence and performance of CAD systems.
As deep learning models are rapidly replacing knowledge-based CAD models, there is an urgent need for large breast cancer histopathology image datasets.

Author Contributions

Conceptualization, A.L. and B.D.B.; methodology, A.L. and B.D.B.; formal analysis, A.L. and B.D.B.; investigation, A.L. and B.D.B.; resources, A.L. and B.D.B.; data curation, A.L.; writing—original draft preparation, A.L. and B.D.B.; writing—review and editing, A.L. and B.D.B.; supervision, B.D.B.; project administration, B.D.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The text and references include links to publicly archived datasets.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cancer Facts & Figures 2022. American Cancer Society. Available online: https://www.cancer.org/research/cancer-facts-statistics/all-cancer-facts-figures/cancer-facts-figures-2022.html (accessed on 16 August 2022).
  2. Stump-Sutliff, K.A. Breast Cancer: What Are the Survival Rates? WebMD. Available online: https://www.webmd.com/breast-cancer/guide/breast-cancer-survival-rates (accessed on 16 August 2022).
  3. U.S. Breast Cancer Statistics. Breastcancer.org. 13 January 2022. Available online: https://www.breastcancer.org/symptoms/understand_bc/statistics (accessed on 16 August 2022).
  4. Breast Cancer—Metastatic: Statistics|cancer.net. Available online: https://www.cancer.net/cancer-types/breast-cancer-metastatic/statistics (accessed on 17 August 2022).
  5. Veta, M.; Pluim, J.P.; van Diest, P.J.; Viergever, M.A. Breast Cancer Histopathology Image Analysis: A Review. IEEE Trans. Biomed. Eng. 2014, 61, 1400–1411. [Google Scholar] [CrossRef] [PubMed]
  6. Łukasiewicz, S.; Czeczelewski, M.; Forma, A.; Baj, J.; Sitarz, R.; Stanisławek, A. Breast Cancer—Epidemiology, Risk Factors, Classification, Prognostic Markers, and Current Treatment Strategies—An Updated Review. Cancers 2021, 13, 4287. [Google Scholar] [CrossRef]
  7. Angahar, T.L. An overview of breast cancer epidemiology, risk factors, pathophysiology, and cancer risks reduction. MOJ Biol. Med. 2017, 1, 92–96. [Google Scholar] [CrossRef]
  8. Nassif, A.B.; Talib, M.A.; Nasir, Q.; Afadar, Y.; Elgendy, O. Breast cancer detection using artificial intelligence techniques: A systematic literature review. Artif. Intell. Med. 2022, 127, 102276. [Google Scholar] [CrossRef] [PubMed]
  9. Yassin, N.I.; Omran, S.; El Houby, E.M.; Allam, H. Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: A systematic review. Comput. Methods Programs Biomed. 2018, 156, 25–45. [Google Scholar] [CrossRef]
  10. Petrelli, F.; Viale, G.; Cabiddu, M.; Barni, S. Prognostic value of different cut-off levels of Ki-67 in breast cancer: A systematic review and meta-analysis of 64,196 patients. Breast Cancer Res. Treat. 2015, 153, 477–491. [Google Scholar] [CrossRef] [PubMed]
  11. Luporsi, E.; Andre, F.; Spyratos, F.; Martin, P.M.; Jacquemier, J.; Penault-Llorca, F.; Tubiana-Mathieu, N.; Sigal-Zafrani, B.; Arnould, L.; Gompel, A.; et al. Ki-67: Level of evidence and methodological considerations for its role in the clinical management of breast cancer: Analytical and critical review. Breast Cancer Res. Treat. 2012, 132, 895–915. [Google Scholar] [CrossRef]
  12. Saxena, S.; Gyanchandani, M. Machine learning methods for computer-aided breast cancer diagnosis using histopathology: A narrative review. J. Med. Imaging Radiat. Sci. 2020, 51, 182–193. [Google Scholar] [CrossRef]
  13. Zhou, X.; Li, C.; Rahaman, M.M.; Yao, Y.; Ai, S.; Sun, C.; Wang, Q.; Zhang, Y.; Li, M.; Li, X. A comprehensive review for breast histopathology image analysis using classical and deep neural networks. IEEE Access 2020, 8, 90931–90956. [Google Scholar] [CrossRef]
  14. Abhisheka, B.; Biswas, S.K.; Purkayastha, B. A comprehensive review on breast cancer detection, classification and segmentation using deep learning. Arch. Comput. Methods Eng. 2023, 30, 5023–5052. [Google Scholar] [CrossRef]
  15. Kaushal, C.; Bhat, S.; Koundal, D.; Singla, A. Recent trends in computer-assisted diagnosis (CAD) system for breast cancer diagnosis using histopathological images. IRBM 2019, 40, 211–227. [Google Scholar] [CrossRef]
  16. PRISMA Transparent Reporting of Systematic Reviews and Meta-Analyses. 2017. Available online: http://www.prismastatement.org/ (accessed on 18 October 2023).
  17. Accardi, T. Mammography Matters: Screening for Breast Cancer—Then and Now. Available online: https://www.radiologytoday.net/archive/rt0517p7.shtml#:~:text=Although%20the%20concept%20of%20mammography,Society%20to%20officially%20recommend%20it (accessed on 24 August 2022).
  18. Mammograms. National Cancer Institute. Available online: https://www.cancer.gov/types/breast/mammograms-fact-sheet (accessed on 24 August 2022).
  19. Sree, S.V. Breast Imaging: A survey. World J. Clin. Oncol. 2011, 2, 171. [Google Scholar] [CrossRef] [PubMed]
  20. Dempsey, P.J. The history of breast ultrasound. J. Ultrasound Med. 2004, 23, 887–894. [Google Scholar] [CrossRef] [PubMed]
  21. Breast Ultrasound. Johns Hopkins Medicine. 8 August 2021. Available online: https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/breast-ultrasound#:~:text=A%20breast%20ultrasound%20is%20most,some%20early%20signs%20of%20cancer (accessed on 24 August 2022).
  22. What Is a Breast Ultrasound?: Breast Cancer Screening. American Cancer Society. Available online: https://www.cancer.org/cancer/breast-cancer/screening-tests-and-early-detection/breast-ultrasound.html (accessed on 24 August 2022).
  23. Kelly, K.M.; Dean, J.; Comulada, W.S.; Lee, S.-J. Breast cancer detection using automated whole breast ultrasound and mammography in radiographically dense breasts. Eur. Radiol. 2009, 20, 734–742. [Google Scholar] [CrossRef] [PubMed]
  24. Heywang, S.H.; Hahn, D.; Schmidt, H.; Krischke, I.; Eiermann, W.; Bassermann, R.; Lissner, J. MR imaging of the breast using gadolinium-DTPA. J. Comput. Assist. Tomogr. 1986, 10, 199–204. [Google Scholar] [CrossRef] [PubMed]
  25. What Is a Breast MRI: Breast Cancer Screening. American Cancer Society. Available online: https://www.cancer.org/cancer/breast-cancer/screening-tests-and-early-detection/breast-mri-scans.html (accessed on 24 August 2022).
  26. History of Breast Biopsy. Siemens Healthineers. Available online: https://www.siemens-healthineers.com/mammography/news/history-of-breast-biopsy.html (accessed on 24 August 2022).
  27. Breast Biopsy: Biopsy Procedure for Breast Cancer. American Cancer Society. Available online: https://www.cancer.org/cancer/breast-cancer/screening-tests-and-early-detection/breast-biopsy.html (accessed on 24 August 2022).
  28. Breast Biopsy. Johns Hopkins Medicine. 8 August 2021. Available online: https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/breast-biopsy (accessed on 24 August 2022).
  29. Versaggi, S.L.; De Leucio, A. Breast Biopsy. Available online: https://www.ncbi.nlm.nih.gov/books/NBK559192/ (accessed on 24 August 2022).
  30. Barkana, B.D.; Saricicek, I. Classification of Breast Masses in Mammograms using 2D Homomorphic Transform Features and Supervised Classifiers. J. Med. Imaging Health Inform. 2017, 7, 1566–1571. [Google Scholar] [CrossRef]
  31. Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. A dataset for breast cancer histopathological image classification. IEEE Trans. Biomed. Eng. 2016, 63, 1455–1462. [Google Scholar] [CrossRef]
  32. Bukun. Breast Cancer Histopathological Database (BreakHis). Kaggle. 10 March 2020. Available online: https://www.kaggle.com/ambarish/breakhis (accessed on 5 February 2022).
  33. Mooney, P. Breast Histopathology Images. Kaggle. 19 December 2017. Available online: https://www.kaggle.com/paultimothymooney/breast-histopathology-images (accessed on 5 February 2022).
  34. Aresta, G.; Araújo, T.; Kwok, S.; Chennamsetty, S.S.; Safwan, M.; Alex, V.; Marami, B.; Prastawa, M.; Chan, M.; Donovan, M.; et al. BACH: Grand challenge on breast cancer histology images. Med. Image Anal. 2019, 56, 122–139. [Google Scholar] [CrossRef]
  35. ICIAR 2018—Grand Challenge. Grand Challenge. Available online: https://iciar2018-challenge.grand-challenge.org/Dataset/ (accessed on 5 February 2022).
  36. Wahab, N.; Khan, A. Multifaceted fused-CNN based scoring of breast cancer whole-slide histopathology images. Appl. Soft Comput. 2020, 97, 106808. [Google Scholar] [CrossRef]
  37. Veta, M.; Heng, Y.J.; Stathonikos, N.; Bejnordi, B.E.; Beca, F.; Wollmann, T.; Rohr, K.; Shah, M.A.; Wang, D.; Rousson, M.; et al. Predicting breast tumor proliferation from whole-slide images: The TUPAC16 challenge. Med. Image Anal. 2019, 54, 111–121. Erratum in Med. Image Anal. 2019, 56, 43. [Google Scholar] [CrossRef] [PubMed]
  38. Mitos-ATYPIA-14—Grand Challenge. Available online: https://mitos-atypia-14.grand-challenge.org/Dataset/ (accessed on 30 August 2022).
  39. Ludovic, R.; Daniel, R.; Nicolas, L.; Maria, K.; Humayun, I.; Jacques, K.; Frédérique, C.; Catherine, G. Mitosis detection in breast cancer histological images: An ICPR 2012 contest. J. Pathol. Inform. 2013, 4, 8. [Google Scholar] [CrossRef] [PubMed]
  40. Kashyap, R. Breast cancer histopathological image classification using stochastic dilated residual ghost model. Int. J. Inf. Retr. Res. 2022, 12, 1–24. [Google Scholar]
  41. Al Noumah, W.; Jafar, A.; Al Joumaa, K. Using parallel pre-trained types of DCNN model to predict breast cancer with color normalization. BMC Res. Notes 2022, 15, 14. [Google Scholar] [CrossRef] [PubMed]
  42. Boumaraf, S.; Liu, X.; Zheng, Z.; Ma, X.; Ferkous, C. A new transfer learning-based approach to magnification dependent and independent classification of breast cancer in histopathological images. Biomed. Signal Process. Control 2021, 63, 102192. [Google Scholar] [CrossRef]
  43. Kate, V.; Shukla, P. A new approach to breast cancer analysis through histopathological images using MI, MD binary, and eight class classifying techniques. J. Ambient Intell. Humaniz. Comput. 2021. [Google Scholar] [CrossRef]
  44. Hameed, Z.; Zahia, S.; Garcia-Zapirain, B.; Javier Aguirre, J.; María Vanegas, A. Breast Cancer Histopathology Image Classification Using an Ensemble of Deep Learning Models. Sensors 2020, 20, 4373. [Google Scholar] [CrossRef]
  45. Vo, D.M.; Nguyen, N.-Q.; Lee, S.-W. Classification of breast cancer histology images using incremental boosting convolution networks. Inf. Sci. 2019, 482, 123–138. [Google Scholar] [CrossRef]
  46. Kausar, T.; Wang, M.J.; Idrees, M.; Lu, Y. HWDCNN: Multiclass recognition in breast histopathology with Haar wavelet decomposed image based convolution neural network. Biocybern. Biomed. Eng. 2019, 39, 967–982. [Google Scholar] [CrossRef]
  47. Li, X.; Radulovic, M.; Kanjer, K.; Plataniotis, K.N. Discriminative pattern mining for breast cancer histopathology image classification via fully convolutional autoencoder. IEEE Access 2019, 7, 36433–36445. [Google Scholar] [CrossRef]
  48. Rakhlin, A.; Shvets, A.; Iglovikov, V.; Kalinin, A.A. Deep convolutional neural networks for breast cancer histology image analysis. In Proceedings of the Image Analysis and Recognition: 15th International Conference, ICIAR 2018, Póvoa de Varzim, Portugal, 27–29 June 2018; pp. 737–744. [Google Scholar]
  49. Anghel, A.; Stanisavljevic, M.; Andani, S.; Papandreou, N.; Rüschoff, J.H.; Wild, P.; Gabrani, M.; Pozidis, H. A high-performance system for robust stain normalization of whole-slide images in histopathology. Front. Med. 2019, 6, 193. [Google Scholar] [CrossRef] [PubMed]
  50. Romano, A.M.; Hernandez, A.A. Enhanced deep learning approach for predicting invasive ductal carcinoma from histopathology images. In Proceedings of the 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, 25–28 May 2019. [Google Scholar]
  51. Chang, J.; Yu, J.; Han, T.; Chang, H.-J.; Park, E. A method for classifying medical images using transfer learning: A pilot study on histopathology of breast cancer. In Proceedings of the 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom), Dalian, China, 12–15 October 2017. [Google Scholar]
  52. Yari, Y.; Nguyen, T.V.; Nguyen, H.T. Deep learning applied for histological diagnosis of breast cancer. IEEE Access 2020, 8, 162432–162448. [Google Scholar] [CrossRef]
  53. Mikolajczyk, A.; Grochowski, M. Data augmentation for improving deep learning in image classification problem. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Swinoujscie, Poland, 9–12 May 2018. [Google Scholar]
  54. Hirra, I.; Ahmad, M.; Hussain, A.; Ashraf, M.U.; Saeed, I.A.; Qadri, S.F.; Alghamdi, A.M.; Alfakeeh, A.S. Breast cancer classification from histopathological images using patch-based Deep Learning Modeling. IEEE Access 2021, 9, 24273–24287. [Google Scholar] [CrossRef]
  55. Vaka, A.R.; Soni, B.; Reddy, S. Breast cancer detection by leveraging machine learning. ICT Express 2020, 6, 320–324. [Google Scholar] [CrossRef]
  56. Chatterjee, C.C.; Krishna, G. A novel method for IDC prediction in breast cancer histopathology images using deep residual neural networks. In Proceedings of the 2019 2nd International Conference on Intelligent Communication and Computational Techniques (ICCT), Jaipur, India, 28–29 September 2019. [Google Scholar]
  57. Narayanan, B.N.; Krishnaraja, V.; Ali, R. Convolutional neural network for classification of histopathology images for breast cancer detection. In Proceedings of the 2019 IEEE National Aerospace and Electronics Conference (NAECON), Dayton, OH, USA, 15–19 July 2019. [Google Scholar]
  58. Jiang, Y.; Chen, L.; Zhang, H.; Xiao, X. Classification of H&E stained breast cancer histopathology images based on Convolutional Neural Network. J. Phys. Conf. Ser. 2019, 1302, 032018. [Google Scholar]
  59. Image Segmentation: The Basics and 5 Key Techniques. Datagen. 25 October 2022. Available online: https://datagen.tech/guides/image-annotation/image-segmentation/ (accessed on 13 January 2023).
  60. Lin, Z.; Li, J.; Yao, Q.; Shen, H.; Wan, L. Adversarial learning with data selection for cross-domain histopathological breast cancer segmentation. Multimed. Tools Appl. 2022, 81, 5989–6008. [Google Scholar] [CrossRef]
  61. Li, B.; Mercan, E.; Mehta, S.; Knezevich, S.; Arnold, C.W.; Weaver, D.L.; Elmore, J.G.; Shapiro, L.G. Classifying breast histopathology images with a ductal instance-oriented pipeline. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021. [Google Scholar]
  62. Tan, X.J.; Mustafa, N.; Mashor, M.Y.; Rahman, K.S. Spatial neighborhood intensity constraint (SNIC) clustering framework for tumor region in breast histopathology images. Multimed. Tools Appl. 2022, 81, 18203–18222. [Google Scholar] [CrossRef]
  63. Sebai, M.; Wang, T.; Al-Fadhli, S.A. Partmitosis: A partially supervised deep learning framework for mitosis detection in breast cancer histopathology images. IEEE Access 2020, 8, 45133–45147. [Google Scholar] [CrossRef]
  64. Priego-Torres, B.M.; Sanchez-Morillo, D.; Fernandez-Granero, M.A.; Garcia-Rojo, M. Automatic segmentation of whole-slide H&E stained breast histopathology images using a deep convolutional neural network architecture. Expert Syst. Appl. 2020, 151, 113387. [Google Scholar]
  65. Belsare, A.D.; Mushrif, M.M.; Pangarkar, M.A.; Meshram, N. Breast histopathology image segmentation using spatio-colour-texture based graph partition method. J. Microsc. 2015, 262, 260–273. [Google Scholar] [CrossRef] [PubMed]
  66. Wang, P.; Hu, X.; Li, Y.; Liu, Q.; Zhu, X. Automatic cell nuclei segmentation and classification of breast cancer histopathology images. Signal Process. 2016, 122, 1–13. [Google Scholar] [CrossRef]
  67. Kaushal, C.; Singla, A. Automated segmentation technique with self-driven post-processing for histopathological breast cancer images. CAAI Trans. Intell. Technol. 2020, 5, 294–300. [Google Scholar] [CrossRef]
  68. Zhang, X.; Zhu, X.; Tang, K.; Zhao, Y.; Lu, Z.; Feng, Q. DDTNet: A dense dual-task network for tumor-infiltrating lymphocyte detection and segmentation in histopathological images of breast cancer. Med. Image Anal. 2022, 78, 102415. [Google Scholar] [CrossRef]
  69. Wahab, N.; Khan, A.; Lee, Y.S. Transfer learning based deep CNN for segmentation and detection of mitoses in breast cancer histopathological images. Microscopy 2019, 68, 216–233. [Google Scholar] [CrossRef] [PubMed]
  70. Rehman, M.U.; Akhtar, S.; Zakwan, M.; Mahmood, M.H. Novel architecture with selected feature vector for effective classification of mitotic and non-mitotic cells in breast cancer histology images. Biomed. Signal Process. Control 2022, 71, 103212. [Google Scholar] [CrossRef]
  71. Jiang, H.; Li, S.; Li, H. Parallel ‘same’ and ‘valid’ convolutional block and input-collaboration strategy for Histopathological Image Classification. Appl. Soft Comput. 2022, 117, 108417. [Google Scholar] [CrossRef]
  72. Karthiga, R.; Narashimhan, K. Deep Convolutional Neural Network for computer-aided detection of breast cancer using histopathology images. J. Phys. Conf. Ser. 2021, 1767, 012042. [Google Scholar] [CrossRef]
  73. Labrada, A.; Barkana, B.D. Breast cancer diagnosis from histopathology images using supervised algorithms. In Proceedings of the 2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS), Shenzhen, China, 21–23 July 2022. [Google Scholar]
  74. Wang, P.; Wang, J.; Li, Y.; Li, P.; Li, L.; Jiang, M. Automatic classification of breast cancer histopathological images based on deep feature fusion and enhanced routing. Biomed. Signal Process. Control 2021, 65, 102341. [Google Scholar] [CrossRef]
  75. Yari, Y.; Nguyen, H.; Nguyen, T.V. Accuracy improvement in binary and multiclass classification of breast histopathology images. In Proceedings of the 2020 IEEE Eighth International Conference on Communications and Electronics (ICCE), Phu Quoc Island, Vietnam, 13–15 January 2021. [Google Scholar]
  76. Yang, H.; Kim, J.-Y.; Kim, H.; Adhikari, S.P. Guided Soft Attention Network for classification of breast cancer histopathology images. IEEE Trans. Med. Imaging 2020, 39, 1306–1315. [Google Scholar] [CrossRef]
  77. Kode, H.; Barkana, B.D. Deep Learning- and Expert Knowledge-Based Feature Extraction and Performance Evaluation in Breast Histopathology Images. Cancers 2023, 15, 3075. [Google Scholar] [CrossRef] [PubMed]
  78. Rana, M.; Bhushan, M. Classifying breast cancer using transfer learning models based on histopathological images. Neural Comput. Appl. 2023, 35, 14243–14257. [Google Scholar] [CrossRef]
  79. Boumaraf, S.; Liu, X.; Wan, Y.; Zheng, Z.; Ferkous, C.; Ma, X.; Li, Z.; Bardou, D. Conventional machine learning versus deep learning for magnification dependent histopathological breast cancer image classification: A comparative study with visual explanation. Diagnostics 2021, 11, 528. [Google Scholar] [CrossRef] [PubMed]
  80. Carvalho, E.D.; Filho, A.O.C.; Silva, R.R.V.; Araújo, F.H.D.; Diniz, J.O.B.; Silva, A.C.; Paiva, A.C.; Gattass, M. Breast cancer diagnosis from histopathological images using textural features and CBIR. Artif. Intell. Med. 2020, 105, 101845. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a) The number of studies in CADe and CADx systems using breast histopathology images. (b) PRISMA flow diagram for the review of CADe and CADx systems using breast histopathology images.
Figure 2. Organization of the review of analysis and diagnosis of breast cancer from histopathology images.
Figure 3. Histopathology images from the BreakHis ×400 dataset. The blue, green, yellow, and red arrows indicate adipose tissue, a cell nucleus, a mitotic figure, and large nuclei in the images. The image in (a) is labeled malignant, while the image in (b) is labeled benign.
Figure 4. Images in (a,b) are from the BreakHis ×400 dataset. The image in (c) is from the BACH dataset. Images show various stain colors and illuminations.
Figure 5. Distribution of the machine learning methods in CADe and CADx systems using breast histopathology images.
Table 1. Most used publicly available histopathology image datasets and their corresponding URLs.

Dataset Name | URL
The Breast Cancer Histopathological Image Classification (BreakHis) | https://www.kaggle.com/ambarish/breakhis (accessed on 28 April 2023)
The Kaggle Breast Cancer Histopathology Images | https://www.kaggle.com/paultimothymooney/breast-histopathology-images (accessed on 28 April 2023)
The ICIAR 2018 Grand Challenge on Breast Cancer Histology Images (BACH) | https://iciar2018-challenge.grand-challenge.org/Dataset/ (accessed on 28 April 2023)
Tumor Proliferation Assessment Challenge 2016 (TUPAC16) | https://github.com/DeepPathology/TUPAC16_AlternativeLabels (accessed on 28 April 2023)
MITOS-ATYPIA-14 challenge | https://mitos-atypia-14.grand-challenge.org/Dataset/ (accessed on 28 April 2023)
International Conference on Pattern Recognition (ICPR 2012) dataset | http://ludo17.free.fr/mitos_2012/download.html (accessed on 28 April 2023)
Table 2. Image distribution by magnification factor and class for the BreakHis dataset.

Magnification | Benign | Malignant | Total
×40 | 625 | 1370 | 1995
×100 | 644 | 1437 | 2081
×200 | 623 | 1390 | 2013
×400 | 588 | 1232 | 1820
Total images | 2480 | 5429 | 7909
Table 3. Image distribution by magnification factor and class for the Kaggle dataset.

Magnification | Benign | Malignant | Total
×40 | 198,738 | 78,786 | 277,524
Table 4. Image distribution by class for the ICIAR 2018 dataset.

Magnification | Normal | Benign | In Situ Carcinoma | Invasive Carcinoma | Total
×200 | 100 | 100 | 100 | 100 | 400
Table 5. Image distribution by class for the TUPAC16 dataset.

Set | Score 1 | Score 2 | Score 3 | PAM50 Score (Mean ± STD)
Training | 236 (47%) | 117 (23%) | 147 (30%) | −0.166 ± 0.446
Testing | 147 (46%) | 77 (24%) | 97 (30%) | −0.192 ± 0.400
Table 6. Image distribution by class for the MITOS-ATYPIA-14 dataset.

Magnification | Number of Frames | Information
×20 | 284 | Nuclear atypia score as a number 1, 2, or 3
×40 | 1136 | Atypia scoring regarding the size of nuclei, size of nucleoli, density of chromatin, thickness of the nuclear membrane, regularity of the nuclear contour, and anisonucleosis
Table 7. Mitotic cell count distribution over the different scanners used for the ICPR 2012 dataset.

Data Set | Both Scanners | Multispectral Microscope
Training: 35 HPF | 226 | 224
Evaluation: 15 HPF | 100 | 98
Total | 326 | 322
Table 8. Summary of the reviewed works between 2010 and 2023: preprocessing, segmentation, feature extraction, and classification methods. The table is arranged in chronological order. (Note: the entries for [77] and [78] are matched to the reference list; the seven-model transfer learning study is Rana and Bhushan [78], and the CNN/VGG16 feature extraction study is Kode and Barkana [77].)

Year [Ref] | Dataset | Preprocessing | Segmentation | Features | Classifier | Performance
2023 [78] | BreakHis | - | - | Seven transfer learning models: VGG16, DarkNet19, DarkNet53, LeNet, ResNet50, Inception, and Xception | - | 2-class: VGG16 67.51%; DarkNet19 80.57%; DarkNet53 70.59%; LeNet 75.99%; ResNet50 81.85%; Inception 80.5%; Xception 83.09%
2023 [77] | BreakHis | - | - | (1) A convolutional neural network; (2) a transfer learning architecture, VGG16 | Neural network (64 units), random forest, multilayer perceptron, decision tree, support vector machines, k-nearest neighbors, and narrow neural network (10 units) | Magnification ×400: CNN features achieved up to 85% with the neural network and random forest; VGG16 features achieved up to 86% with the neural network
2022 [68] | Two public datasets and a new dataset: Bca-lym, Post-NAT-BRCA, TCGA-lym | - | Dense dual-task network (DDTNet) | Spatial and context cues; multi-scale features with lymphocyte location information | All networks implemented in PyTorch 1.1.0 on an NVIDIA GeForce RTX 2080 Ti GPU | Segmentation performance (Dice): Bca-lym 85.6%; Post-NAT-BRCA 83.6%; TCGA-lym 77.8%
2022 [40] | BreakHis; BreCaHAD | Contrast-limited adaptive histogram equalization; data augmentation | - | Ghost features | Stochastic Dilated Residual Ghost (SDRG) model including ghost unit, stochastic down-sampling and up-sampling units, and other convolution layers | BreakHis (×40): original 93.13 ± 4.36, augmented 98.41 ± 1.00; BreCaHAD: original 95.23 ± 4.38, augmented 98.60 ± 0.99
2022 [41] | BreakHis | Stain color normalization by the Vahadane method; random zoom augmentation (factor 2), random rotation augmentation (90°), and horizontal/vertical flip augmentation | - | - | Three pre-trained deep convolutional neural networks working in parallel (Xception, NASNet, and Inception-ResNet-V2) | Threshold range: 50–97%; accuracy range depending on the threshold: 96–98%
2022 [60] | Private dataset | Color augmentation; HE-stained and IHC-stained | Segmentation networks: DeepLab-v2, LinkNet, PSPNet | - | Domain adaptation framework: adversarial learning, target-domain data selection, model refinement, atrous spatial pyramid pooling | Dice on HE: 87.9%; Dice on IHC: 84.6%
2022 [62] | Private dataset of 200 images at ×10 magnification | Histogram matching algorithm for color normalization | Spatial neighborhood intensity constraint (SNIC) and knowledge-based clustering framework | Spatial information | K-means clustering algorithm | 91.2%
2022 [70] | MITOS 2012; AMIDA 2013; MITOS 2014; TUPAC 2016 | - | - | Three feature vector sets: extended local pattern features, GLCM features from grayscale, and GLCM features from the V channel of the HSV image | SVM, random forest, naïve Bayes, majority voting | F-score (majority voting): MITOS 2012 95.64%; MITOS 2014 86.38%; AMIDA13 73.09%; TUPAC16 78.25%
2022 [71] | DS1, DS2, DS3 | - | - | Step-by-step valid convolutions | Input-collaborative PSV ConvNet | DS2: 90.4–93%
2022 [73] | BreakHis | Histogram equalization | Otsu's thresholding method using the red channel | Geometrical, directional, and intensity-based features | Decision tree (fine tree), linear SVM, fine KNN, narrow neural network (NNN) | 2-class: NNN 96.9%
2021 [72] | BreakHis | - | - | DCNN | AlexNet, VGG-16; transfer learning methods, DCNN | 2-class: ×40 94%; ×100 95.45%; ×200 98.36%; ×400 85.71%
2021 [42] | BreakHis | Global contrast normalization; three-fold data augmentation on training data | - | ResNet-18 | Transfer learning based on a block-wise fine-tuning strategy | MI classification: binary 98.42%, eight-class 92.03%; MD classification: binary 98.84%, eight-class 92.15%
2021 [54] | HUP (239 images), CINJ (40 images), TCGA (195 images), and CWRU (110 images) | Reduced image size; RGB-to-grayscale conversion; smoothing by Gaussian filter | - | Unsupervised pre-training and supervised fine-tuning phases | Patch-based deep learning method (Pa-DBN-BC): deep belief network (DBN) with logistic regression | Overall: 86%
2021 [74] | BreakHis | - | - | Convolution and capsule features; integrated semantic and spatial features | Deep feature fusion and enhanced routing (FE-BkCapsNet) | 2-class: ×40 92.71%; ×100 94.52%; ×200 94.03%; ×400 93.54%
2021 [79] | BreakHis | Color normalization technique | - | Feature-extraction-based CML approaches: Zernike moments, Haralick, and color histogram features | Conventional machine learning (CML) and deep learning (DL)-based methods | 2-class: DL 94.05–98.13%, CML 85.65–89.32%; 8-class: DL 76.77–88.95%, CML 63.55–69.69%
2020 [67] | Two small datasets: 50 images of 11 patients; 30 H&E-stained ×40 images | Median filter; bottom-hat + top-hat filtering | Thresholds identified from the energy curve; best threshold selected by entropy | Area, major axis length, minor axis length | - | Dataset 1: 93.1%; Dataset 2: 93.5%
2020 [76] | BACH dataset | Data augmentation by color normalization, vertical and horizontal mirroring, random rotations, addition of random noise, and random intensity changes | - | CNN-based feature extraction network | Region-guided soft attention | 90.25%
2020 [80] | BACH 2018 | - | - | Indexes based on phylogenetic diversity | SVM, random forest, MLP, XGBoost | 4-class: 95%
2020 [55] | Private dataset of 8009 histopathology images from over 683 patients at different magnification levels | Gaussian filtering for noise removal; data augmentation by rotation | Histo-sigmoid-based fuzzy clustering | - | Deep neural network with support value (DNNS) | 97.21%
2020 [44] | Private dataset | Data augmentation | - | Multi-level and multi-scale deep features | Ensemble of fine-tuned VGG16 and fine-tuned VGG19 | Up to 95.29%
2020 [52] | BreakHis | Data augmentation: random horizontal flip, color jitter, random rotation, and crop | - | Feature maps | Deep transfer learning-based models: DenseNet, ResNet, ResNet101, VGG19, AlexNet, and SqueezeNet | 2-class: ×40 100%, ×100 100%, ×200 98.08%, ×400 98.99%; multi-class: ×40 97.96%, ×100 97.14%, ×200 95.19%, ×400 94.95%
2020 [61] | Private dataset of 428 images from 240 breast biopsies | - | Ductal Instance-Oriented Pipeline (DIOP): a duct-level instance segmentation model, a tissue-level semantic segmentation model, and three levels of features | Histogram, co-occurrence, and structural features | Random forest; 3-degree polynomial SVM; SVM-RBF; multilayer perceptron with four hidden layers | 2-class: invasive vs. non-invasive 95%, atypia and DCIS vs. benign 79%, DCIS vs. atypia 90%; multi-class: 70%
2020 [63] | ICPR 2012 MITOSIS dataset; ICPR 2014 dataset; AMIDA13 dataset | - | Segmentation branch trained with weak and strong labels | Convolution features, pre-trained and fine-tuned | Partially supervised framework based on two parallel deep fully convolutional networks | F-score: ICPR 2012 MITOSIS 0.788; ICPR 2014 0.575; AMIDA13 0.698
2020 [64] | Dataset of 640 H&E-stained breast histopathology images | Data augmentation by random zooming, cropping, and horizontal/vertical flips | Tile-wise segmentation strategy: (a) direct tile-wise merging; (b) tile-wise merging based on a conditional random field (CRF) | - | DCNN-based architecture | Xception-65: 95.62%; MobileNet-v2: 92.9%; ResNet-v1: 91.16%
2019 [45] | Bioimaging 2015; BreakHis | Stain color normalization; logarithmic transformation; data augmentation | - | Ensemble of DCNNs | Gradient boosting trees classifier | Bioimaging 2015: 4-class 96.4%, 2-class 99.5%; BreakHis: ×40 95.1%, ×100 96.3%, ×200 96.9%, ×400 93.8%
2019 [46] | ICIAR 2018; BreakHis | Stain color normalization; image decomposition via Haar wavelet; data augmentation | - | Deep features from Haar-wavelet-decomposed images via a CNN model; incorporation of multiscale discriminant features | Three fully connected layers, two dropout layers, and a softmax layer | ICIAR 2018 (2- and 4-class): 98.2%; BreakHis (multi-class): 96.85%
2019 [50] | - | Data augmentation | - | Feature vectors | CNN with IDC patch-based classification | 85.41%
2019 [57] | BreakHis | Contrast enhancement by histogram equalization; color constancy | - | CNN features | Five convolutional layers; fully connected and softmax layers | Histogram equalization with the proposed method: AUC 87.6%; color constancy with the proposed method: AUC 93.5%
2019 [58] | Bioimaging Challenge 2015 | Singular value decomposition (SVD); logarithmic transformation | - | CNN based on the SE-ResNet module; GoogLeNet, Xception, Inception-ResNet; 3-norm pooling method | KNN; SVM; SVM-GoogLeNet | 2-class: 91.67%; 4-class: 83.33%
2019 [69] | TUPAC16; MITOS12 + MITOS14 | Stain normalization; annotation; cropping | Transfer-learning-based mitosis segmentation (TL-Mit-Seg) | - | Hybrid-CNN-based mitosis detection module (HCNN-Mit-Det); HCNN-Mit-Det ensemble; transfer learning HCNN-Mit-Det | TUPAC16: F-measure 66.7%; MITOS12 + MITOS14: F-measure 65.1%
2018 [48] | ICIAR 2018 | Data augmentation: 50 random color augmentations; different image scales | - | ResNet-50, Inception-V3, and VGG-16 networks from the Keras distribution | Gradient boosted trees classifier | 2-class: 93.8%; 4-class: 87.2%
2017 [51] | BreakHis | Data augmentation: randomly distorted, rotated, and mirrored images | - | Transfer learning: Google Inception-v3 | Deep convolutional neural network (CNN, ConvNet) model | 83% for the benign class; 89% for the malignant class
2015 [65] | Private dataset of 100 malignant and nonmalignant breast histology images | - | Spatial-color-texture-based graph partitioning method | Intensity-texture and color-texture features | - | -
2015 [66] | 68 BCH images containing more than 3600 cells | Top-bottom hat transform | Wavelet decomposition and multiscale region growing | 4 shape-based features and 138 textural features based on color spaces; wrapper feature selection based on a chain-like agent genetic algorithm (CAGA) | SVM | Normal vs. malignant: 96.19 ± 0.31%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

