Article

Improved Bald Eagle Search Optimization with Synergic Deep Learning-Based Classification on Breast Cancer Imaging

by Manar Ahmed Hamza 1,*, Hanan Abdullah Mengash 2, Mohamed K Nour 3, Naif Alasmari 4, Amira Sayed A. Aziz 5, Gouse Pasha Mohammed 1, Abu Sarwar Zamani 1 and Amgad Atta Abdelmageed 1

1 Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, AlKharj 16242, Saudi Arabia
2 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3 Department of Computer Sciences, College of Computing and Information System, Umm Al-Qura University, Makkah 24211, Saudi Arabia
4 Department of Information Systems, College of Science & Art at Mahayil, King Khalid University, Muhayil 63311, Saudi Arabia
5 Department of Digital Media, Faculty of Computers and Information Technology, Future University in Egypt, New Cairo 11835, Egypt
* Author to whom correspondence should be addressed.
Cancers 2022, 14(24), 6159; https://doi.org/10.3390/cancers14246159
Submission received: 1 November 2022 / Revised: 24 November 2022 / Accepted: 26 November 2022 / Published: 14 December 2022
(This article belongs to the Collection Artificial Intelligence and Machine Learning in Cancer Research)

Simple Summary

Manual microscopic inspection is a laborious task, and its results can be misleading due to human error. This article presents a model of an improved bald eagle search optimization with a synergic deep learning mechanism for breast cancer diagnosis using histopathological images (IBESSDL-BCHI). The performance of the IBESSDL-BCHI system was validated on a benchmark dataset, and the results demonstrate that the IBESSDL-BCHI model offers better general efficiency for BC classification.

Abstract

Medical imaging has attracted growing interest in the field of healthcare regarding breast cancer (BC). Globally, BC is a major cause of mortality amongst women. At present, the examination of histopathology images is the medical gold standard for cancer diagnosis. However, manual microscopic inspection is a laborious task, and its results can be misleading due to human error. Thus, a computer-aided diagnosis (CAD) system can be utilized to detect cancer accurately within essential time constraints, as early diagnosis is the key to curing cancer. The classification and diagnosis of BC utilizing deep learning algorithms has gained considerable attention. This article presents a model of an improved bald eagle search optimization with a synergic deep learning mechanism for breast cancer diagnosis using histopathological images (IBESSDL-BCHI). The proposed IBESSDL-BCHI model concentrates on the identification and classification of BC using HIs. To do so, the presented IBESSDL-BCHI model follows an image preprocessing method using a median filtering (MF) technique as a preprocessing step. In addition, feature extraction using a synergic deep learning (SDL) model is carried out, and the hyperparameters related to the SDL mechanism are tuned by the use of the IBES model. Lastly, long short-term memory (LSTM) is utilized to precisely categorize the HIs into two major classes, namely, benign and malignant. The performance validation of the IBESSDL-BCHI system was tested on a benchmark dataset, and the results demonstrate that the IBESSDL-BCHI model offers better general efficiency for BC classification.

1. Introduction

Worldwide, the number of cancer cases is increasing at a faster rate than ever before. Multimodal medical imaging is utilized for diagnosing distinct kinds of cancers with the help of whole-slide images (WSIs), MRIs, CT scans, and more [1]. The manual detection of cancer with the help of imaging is a time-consuming procedure, and it relies on the expertise of the consultant or doctor [2]. As a result, a high death rate is linked with late cancer detection, and since early identification is the key to curing cancer [3], a computer-aided diagnosis (CAD) technique that recognizes a tumor precisely within the time limitations has become necessary. Until now, the gold standard for determining the breast cancer (BC) prognosis has been pathological analysis, which generally acquires cancer samples via excision, puncture, and so on [4]. The samples are commonly stained with hematoxylin and eosin: hematoxylin binds to deoxyribonucleic acid (DNA) to highlight the nuclei, whereas eosin binds to proteins and emphasizes other structures. A precise prognosis of BC needs skilled histopathologists and consumes considerable effort and time. Additionally, the diagnostic outcomes of different histopathologists are not the same, as they rely mainly on each histopathologist's prior knowledge [5]. This leads to merely average diagnostic accuracy and a diagnostic consistency as low as 75%.
However, the study of histopathological images (HIs) is a challenging and time-consuming task that requires professional expertise, and the analysis outcome can be affected by the experience level of the diagnosticians involved [6]. Thus, the computer-aided study of HIs serves a crucial role in BC diagnosis. However, the development of tools to perform this study has been hindered by the following difficulties. Firstly, the HIs of BC are fine-grained, high-resolution images that represent complex textures and rich geometric structures; the variation within a class and the similarity among classes can make the categorization highly complex, particularly in situations with many classes [7,8]. Secondly, feature extraction (FE) techniques for the HIs of BC face their own constraints.
Conventional FE approaches such as the gray-level co-occurrence matrix (GLCM) and the scale-invariant feature transform (SIFT) depend upon supervised information. In addition, prior knowledge about the data is required for selecting valuable features, which causes the FE efficiency to be low and the computational load to be high [9]; as a result, the final model may generate poor classification outcomes. Deep learning (DL) methods are capable of extracting features automatically, restoring information from data mechanically, and learning more abstract data representations [10]. DL can resolve the issues of conventional FE, and it has been applied successfully in computer vision (CV) as well as in biomedical science and other domains.
This article presents a model of an improved bald eagle search optimization with a synergic deep learning mechanism for breast cancer diagnosis using histopathological images (IBESSDL-BCHI). The proposed IBESSDL-BCHI model performs image preprocessing using a median filtering (MF) technique. In addition, feature extraction using a synergic deep learning (SDL) model is carried out, and the hyperparameters related to the SDL mechanism are tuned by the use of an IBES model. Finally, the long short-term memory (LSTM) system is utilized to precisely categorize the HIs into two major classes: benign and malignant. The performance of the IBESSDL-BCHI method was validated using the benchmark dataset. The key contributions of the paper are highlighted as follows:
  • An intelligent IBESSDL-BCHI technique comprising MF-based pre-processing, SDL feature extraction, IBES-based parameter optimization, and an LSTM model for BC detection and classification using HIs is presented. To the best of our knowledge, the IBESSDL-BCHI model has never been presented in the literature.
  • A novel IBES algorithm is designed by the integration of oppositional-based learning with the traditional BES algorithm.
  • Hyperparameter optimization of the SDL model with the IBES algorithm using cross-validation helps to boost the classification outcome of the IBESSDL-BCHI model for unseen data.

2. Related Works

In an earlier study [11], a new patch-based DL technique named Pa-DBN-BC was suggested for the detection and classification of BC in histopathology images using a Deep Belief Network (DBN). In this technique, features are derived by conducting unsupervised pre-training and supervised fine-tuning stages, and the network automatically extracts features from the image patches. In the literature [12], the researchers compared two ML techniques for the automatic classification of BC histology images as either malignant or benign along with their respective sub-classes. The first technique was designed based on the extraction of a set of handcrafted features encoded with Bag of Words (BoW) and locality-constrained linear coding, and it was trained with an SVM classifier. The second method was designed based on a CNN model.
In the literature [13], the researchers suggested a method that uses DL techniques with convolutional layers to extract valuable visual features and classify BC. It was revealed that such DL techniques can derive superior features in comparison with handcrafted FE methods. The study further suggested a new boosting strategy in which the model is effectively improved through the progressive merging of DL methods, combining weak classifiers into a stronger classifier. Xie et al. [14] presented a new model for the analysis of HIs of BC through supervised and unsupervised deep CNNs. First, they adapted the Inception_ResNet_V2 and Inception_V3 architectures to the binary and multi-class BC-HI classification problems with the help of Transfer Learning (TL) approaches.
In an earlier study [15], the authors recommended a system for BC classification with an Inception Recurrent Residual Convolutional Neural Network (IRRCNN) method. The proposed IRRCNN is a powerful DCNN method since it combines the strengths of the Recurrent CNN (RCNN), ResNet, and Inception-v4 techniques. The IRRCNN method achieved better outcomes than the equivalent Inception networks and RCNNs on object recognition tasks. Yang et al. [16] suggested employing additional region-level supervision for the BC classification of HIs using a CNN technique, in which the RoIs are localized and utilized to guide the attention of the classification network concurrently. The presented supervised attention algorithm precisely activates the neurons in diagnosis-related areas while suppressing activations in irrelevant and noisy regions.
Ali et al. [17] presented an effective DL model to exploit small datasets and learn generalizable, domain-invariant representations in various medical imaging applications for diseases such as malaria, diabetic retinopathy, and tuberculosis. The model was named Incremental Modular Network Synthesis (IMNS), and the resulting CNNs were called Incremental Modular Networks (IMNets). The authors of an earlier study [18] developed a cloud-enabled Android app to detect breast cancer using the ResNet101 model. The proposed framework was cost-effective, and it demanded less human intervention since it was cloud-integrated, so a lower performance load was placed on the edge devices. Narayanan et al. [19] presented a novel deep convolutional neural network architecture for the Invasive Ductal Carcinoma (IDC) classification process.

3. The Proposed Model

In the current study, a new IBESSDL-BCHI method has been developed for the recognition and classification of BC using the HIs. The presented IBESSDL-BCHI method follows a series of processes, namely, MF-based noise removal, SDL feature extraction, IBES-based hyperparameter optimization, and LSTM classification. The design of the IBES algorithm helps in precisely categorizing the HIs into two major classes, namely, benign and malignant. Figure 1 depicts the workflow of the proposed IBESSDL-BCHI approach.

3.1. Image Preprocessing

Initially, the Median Filtering (MF) technique was utilized to preprocess the input HIs. MF is a nonlinear digital filtering method that is frequently utilized to remove noise from images and signals. Such noise reduction is a classical pre-processing phase performed to enhance the outcomes of the later processes. The MF approach smooths the HIs [20], and its steps are as follows:
Step 1: The 3 × 3 kernel needs zero padding of ⌊3/2⌋ = 1 column of 0s at the left and right edges, as well as ⌊3/2⌋ = 1 row of 0s at the top and bottom edges.
Step 2: To process the first element, the approach centers the 3 × 3 kernel on that element. The data covered by the kernel are sorted by value, and the median value is obtained.
Step 3: The process is repeated for every element until the final value is obtained.
The MF function calculates the median of all pixels in the kernel window, and the central pixel is replaced with this median value. It can be extremely effective in removing salt-and-pepper noise. Notably, with Gaussian and box filters, the value assigned to the central element may be one that does not occur in the original image. This is not the case with the MF approach, since the central element is always exchanged with an actual pixel value from the image; this reduces the noise in an efficient manner. The size of the kernel is a positive odd integer, and the median function is calculated as given in Equation (1).
$\mathrm{Med}(X) = \begin{cases} X_{(n+1)/2}, & \text{if } n \text{ is odd} \\ \left( X_{n/2} + X_{n/2+1} \right)/2, & \text{if } n \text{ is even} \end{cases}$ (1)
Here, X refers to the ordered list of values from the dataset and n signifies the number of values in the dataset.
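To make the procedure concrete, the following is a minimal NumPy sketch of the 3 × 3 median filtering step described above; the function name and the brute-force loop are our own choices, made for clarity rather than speed.

```python
import numpy as np

def median_filter_3x3(image: np.ndarray) -> np.ndarray:
    """3 x 3 median filter with zero padding, following Steps 1-3 above."""
    # Step 1: pad one row/column of zeros on every edge.
    padded = np.pad(image, pad_width=1, mode="constant", constant_values=0)
    out = np.empty_like(image)
    # Steps 2-3: slide the kernel over every element and take the median.
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 3, j:j + 3]  # 3 x 3 neighborhood
            out[i, j] = np.median(window)      # replace center with median
    return out
```

In practice, an optimized library routine such as cv2.medianBlur(img, 3) from OpenCV performs the same operation far more efficiently.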

3.2. SDL-Based Feature Extraction

After the image preprocessing, the SDL model was utilized to derive the feature vectors. During the feature extraction procedure, the pre-processed images were fed into the SDL module to obtain a beneficial set of feature vectors [21].
The SDL model extracts the feature subsets from the pre-processed images. An SDL^k model is represented through three main elements: k DCNN components, the input layer, and C(k,2) synergic networks (SNs). Every DCNN component of the network provides an independent learned representation of the input dataset. Each SN uses a fully connected (FC) architecture to verify whether a pair of inputs belongs to the same class, and it offers corrective feedback. The SDL system is described below in three parts. Figure 2 illustrates the architecture of the SDL network.
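Since each unordered pair of DCNN components is supervised by its own synergic network, the number of SNs is C(k,2) = k(k−1)/2. A small illustrative sketch follows; the value k = 3 and the component names are assumptions for illustration.

```python
from itertools import combinations

k = 3  # number of DCNN components (assumed for illustration)
components = [f"DCNN-{i}" for i in range(1, k + 1)]

# One synergic network per unordered pair of components: C(k, 2) in total.
synergic_pairs = list(combinations(components, 2))
print(len(synergic_pairs), synergic_pairs)
# 3 [('DCNN-1', 'DCNN-2'), ('DCNN-1', 'DCNN-3'), ('DCNN-2', 'DCNN-3')]
```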

3.2.1. Components of DCNN

Owing to the residual design of ResNet, ResNet-50 was exploited for initializing every DCNN component a = 1, 2, …, k; other networks such as VGGNet, AlexNet, and GoogLeNet could equally serve as components of the SDL method. Each component was trained using the data sequence X = {x_1, x_2, …, x_M} and the corresponding sequence of class labels Y = {y_1, y_2, …, y_M}. The aim is to learn a group of parameters θ that minimizes the cross-entropy (CE) loss, given as follows:
$l(\theta) = -\frac{1}{M} \sum_{a=1}^{M} \sum_{b=1}^{K} \mathbf{1}\{y_a = b\} \log \frac{e^{z_b^a}}{\sum_{l=1}^{K} e^{z_l^a}}$ (2)
In Equation (2), K represents the number of classes and Z^a = F(x^a, θ) denotes the forward computation. The group of parameters obtained for DCNN-a is denoted by θ^a; the parameters are not shared among the DCNN components.
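As a sanity check on the notation, here is a minimal NumPy sketch of the CE loss in Equation (2); the array shapes and the function name are assumptions made for illustration.

```python
import numpy as np

def dcnn_ce_loss(Z: np.ndarray, y: np.ndarray) -> float:
    """Cross-entropy loss of Equation (2).

    Z: (M, K) activations z^a = F(x^a, theta) for M samples and K classes.
    y: (M,) integer class labels.
    """
    # Numerically stabilized softmax over the K class activations.
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    # Negative log-likelihood of the true class, averaged over samples.
    return float(-np.mean(np.log(probs[np.arange(len(y)), y])))
```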

3.2.2. SDL Model

Each pair of DCNN components, together with the synergic label of the corresponding input pair, is exploited for FC learning. Assuming that Z_A and Z_B are a data pair given as input to two DCNN components (DCNN-a, DCNN-b), the deep features are obtained as follows:
$f_A = F(Z_A, \theta^a)$ (3)
$f_B = F(Z_B, \theta^b)$ (4)
Next, the deep features of the pair are embedded as f_{A∘B}, and the corresponding synergic label is given below.
$y_S(Z_A, Z_B) = \begin{cases} 1 & \text{if } y_A = y_B \\ 0 & \text{if } y_A \neq y_B \end{cases}$ (5)
To avoid an imbalance, the percentage of data pairs from the same class needs to be sufficiently high. The synergic signal is estimated using a sigmoid layer, and the binary CE loss is as follows.
$l_S(\theta^S) = -\left( y_S \log \hat{y}_S + (1 - y_S) \log (1 - \hat{y}_S) \right)$ (6)
In Equation (6), θ^S denotes the SN parameters and ŷ_S indicates the SN forward computation. The SN thus verifies whether the input data pair belongs to the same class, and it offers the option to correct the synergic error.
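The synergic supervision of Equations (5) and (6) can be sketched directly; this is a minimal illustration (written with the conventional minus sign on the binary CE loss), not the authors' implementation.

```python
import numpy as np

def synergic_label(y_a: int, y_b: int) -> int:
    """Equation (5): 1 if the pair shares a class label, else 0."""
    return int(y_a == y_b)

def synergic_loss(y_s: int, y_s_hat: float, eps: float = 1e-12) -> float:
    """Binary cross-entropy of Equation (6) for one input pair."""
    y_s_hat = float(np.clip(y_s_hat, eps, 1.0 - eps))  # avoid log(0)
    return -(y_s * np.log(y_s_hat) + (1 - y_s) * np.log(1.0 - y_s_hat))
```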

3.2.3. Training and Testing Processes

During training, the parameters of both the DCNN components and the SNs are updated as follows.
$\theta^a(z+1) = \theta^a(z) - \eta(z) \cdot \Delta^a, \qquad \theta^{S(a,b)}(z+1) = \theta^{S(a,b)}(z) - \eta(z) \cdot \Delta^{S(a,b)}$ (7)
In Equation (7), η(z) indicates the learning rate and S(a,b) denotes the SN between DCNN-a and DCNN-b; the gradient terms are given below.
$\Delta^a = \frac{\partial l^a(\theta^a)}{\partial \theta^a} + \lambda \sum_{b=1, b \neq a}^{k} \frac{\partial l_S(\theta^{S(a,b)})}{\partial \theta^{S(a,b)}}$ (8)
$\Delta^{S(a,b)} = \frac{\partial l_S(\theta^{S(a,b)})}{\partial \theta^{S(a,b)}}$ (9)
Here, λ denotes the trade-off between the classification errors of the sub-models and the synergic errors; through the synergic terms, the training processes of the SDL sub-models reinforce one another. In a trained SDL^k, a test sample x is classified by each DCNN unit, which provides the prediction vector P^a = (p_1^a, p_2^a, …, p_K^a) activated from its final FC layer. The class label of the test sample is evaluated as follows.
$y(Z) = \arg\max_v \left\{ \sum_{u=1}^{k} p_1^u, \ldots, \sum_{u=1}^{k} p_v^u, \ldots, \sum_{u=1}^{k} p_K^u \right\}$ (10)
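In words, Equation (10) sums the k component-wise prediction vectors class by class and picks the class with the largest aggregate activation. A minimal sketch follows; the function name is an assumption.

```python
import numpy as np

def sdl_predict(pred_vectors: list) -> int:
    """Equation (10): class-wise sum of the k DCNN prediction vectors,
    followed by an argmax over the K classes."""
    total = np.sum(pred_vectors, axis=0)  # shape (K,), summed over components
    return int(np.argmax(total))

# Example with k = 2 components and K = 3 classes:
# sdl_predict([np.array([0.1, 0.7, 0.2]), np.array([0.2, 0.6, 0.2])]) -> 1
```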

3.3. Hyperparameter Tuning Using IBES Algorithm

In this study, the hyperparameters related to the SDL mechanism are fine-tuned with the help of the IBES model. BES is a meta-heuristic optimization approach that imitates the hunting behavior of bald eagles [22]. The procedure has three phases, namely, selecting the space, searching in the space, and swooping. Initially, the bald eagle chooses the best space in terms of the amount of food. Next, the eagle searches for prey within the selected space. Finally, from the best location attained in the previous stage, the eagle swoops to determine the optimal hunting site.
a. Selection space: In this phase, a new position is produced based on the following formula.
$P_{new,i} = P_{best} + \alpha \cdot r \cdot (P_{mean} - P_i)$ (11)
In Equation (11), P_{new,i} denotes the i-th newly produced location, P_{best} refers to the best location attained so far, P_{mean} indicates the mean location, α represents a control gain in [1.5, 2], and r indicates a random number in the range [0, 1]. The fitness of every new location is estimated; if a new location P_{new} offers better fitness than P_{best}, then P_{best} is reassigned to the new location.
b. Searching in space: After the optimal search space P_{best} is allocated, the process updates the locations of the eagles within the search space. The update rule is given herein.
$P_{new,i} = P_i + y_i \cdot (P_i - P_{i+1}) + x_i \cdot (P_i - P_{mean})$ (12)
In Equation (12), P_{new,i} denotes the i-th newly produced position, P_{mean} indicates the mean location, and x_i and y_i denote the directional coordinates for the i-th location, as given below.
$x_i = \frac{xr_i}{\max|xr|}, \quad xr_i = r_i \cdot \sin\theta_i; \qquad y_i = \frac{yr_i}{\max|yr|}, \quad yr_i = r_i \cdot \cos\theta_i$
$\theta_i = a \cdot \pi \cdot rand; \qquad r_i = \theta_i \cdot R \cdot rand$ (13)
In Equation (13), a indicates a control variable that determines the corner angle between the search point and the central point and takes values in the range [5, 10]; R denotes a variable within [0.5, 2] that determines the number of search cycles. The fitness of each new position is estimated, and P_{best} is updated based on the attained outcomes.
c. Swooping: In this phase, the eagle moves toward the prey from the best location attained. The hunting model is given in the following expression.
$P_{new,i} = rand \cdot P_{best} + x1_i \cdot (P_i - c_1 \cdot P_{mean}) + y1_i \cdot (P_i - c_2 \cdot P_{best})$ (14)
In Equation (14), c_1 and c_2 denote two random numbers that lie in the range [1, 2]; x1 and y1 indicate the directional coordinates, determined as follows.
$x1_i = \frac{xr_i}{\max|xr|}, \quad xr_i = r_i \cdot \sinh\theta_i; \qquad y1_i = \frac{yr_i}{\max|yr|}, \quad yr_i = r_i \cdot \cosh\theta_i$
$\theta_i = a \cdot \pi \cdot rand; \qquad r_i = \theta_i$ (15)
Here, N_{pop} denotes the number of locations (the population size), and MaxIter indicates the maximum number of iterations.
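The following is an illustrative, simplified NumPy sketch of one BES iteration across the three phases of Equations (11)-(15); the function signature, parameter defaults, and the single end-of-iteration fitness check (rather than per-phase greedy checks) are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def bes_step(P, fitness, P_best, alpha=2.0, a=10.0, R=1.5):
    """One illustrative BES iteration. P: (N, D) positions; fitness: callable
    on a single D-vector, minimized; P_best: best D-vector found so far."""
    N, _ = P.shape
    P_mean = P.mean(axis=0)

    # Phase 1 -- select space, Eq. (11).
    P = P_best + alpha * rng.random((N, 1)) * (P_mean - P)

    # Phase 2 -- search in space, Eqs. (12)-(13): spiral coordinates.
    theta = a * np.pi * rng.random(N)
    r = theta * R * rng.random(N)
    x = (r * np.sin(theta)) / np.abs(r * np.sin(theta)).max()
    y = (r * np.cos(theta)) / np.abs(r * np.cos(theta)).max()
    P_next = np.roll(P, -1, axis=0)  # P_{i+1}, with wrap-around at the end
    P = P + y[:, None] * (P - P_next) + x[:, None] * (P - P_mean)

    # Phase 3 -- swoop, Eqs. (14)-(15): hyperbolic coordinates.
    theta = a * np.pi * rng.random(N)
    r = theta
    x1 = (r * np.sinh(theta)) / np.abs(r * np.sinh(theta)).max()
    y1 = (r * np.cosh(theta)) / np.abs(r * np.cosh(theta)).max()
    c1, c2 = 1.0 + rng.random(), 1.0 + rng.random()  # in [1, 2]
    P = (rng.random((N, 1)) * P_best
         + x1[:, None] * (P - c1 * P_mean)
         + y1[:, None] * (P - c2 * P_best))

    # Keep the best position found in this iteration.
    f = np.apply_along_axis(fitness, 1, P)
    if f.min() < fitness(P_best):
        P_best = P[f.argmin()].copy()
    return P, P_best
```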
The IBES model is derived by including the Oppositional-Based Learning (OBL) concept to improve the efficiency of BES. OBL, introduced by Tizhoosh, estimates the fitness of each individual together with that of its corresponding opposite number and carries the better of the two into the next iteration; the opposite is determined as follows.
Opposite number: Assume that x is a real number with x ∈ [lb, ub]; then, the opposite number x̄ is given by Equation (16).
$\bar{x} = ub + lb - x$ (16)
Here, lb and ub denote the lower and upper boundaries, respectively.
Opposite vector: When x = (x_1, x_2, …, x_D), where x_1, x_2, …, x_D are real numbers and x_i ∈ [lb_i, ub_i], then x̄_i is computed as given below.
$\bar{x}_i = lb_i + ub_i - x_i$ (17)
Finally, the current solution is replaced by x̄ if f(x̄) < f(x).
The IBES method minimizes a Fitness Function (FF) to obtain a superior classification performance. In this study, the classifier error rate is treated as the FF, as given below.
$fitness(x_i) = ClassifierErrorRate(x_i) = \frac{\text{number of misclassified samples}}{\text{total number of samples}} \times 100$ (18)
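A minimal sketch of the OBL step of Equations (16)-(17) and the fitness function of Equation (18); the helper names are our own.

```python
import numpy as np

def opposite(x: np.ndarray, lb: np.ndarray, ub: np.ndarray) -> np.ndarray:
    """Equations (16)-(17): element-wise opposite of a candidate vector."""
    return lb + ub - x

def classifier_error_rate(y_true, y_pred) -> float:
    """Equation (18): percentage of misclassified samples (the FF)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * float(np.mean(y_true != y_pred))

# OBL selection rule: keep whichever of x and its opposite scores better.
# if fitness(opposite(x, lb, ub)) < fitness(x): x = opposite(x, lb, ub)
```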

3.4. LSTM-Based Classification

During the image classification process, the LSTM model is used to precisely categorize the HIs into two major classes, namely, benign and malignant. As a variant of the RNN model, the LSTM model basically differs from the classical ANN [23]. Both the LSTM and RNN are sequence-based methods with internal self-looped repeating networks; they determine the temporal relationships among sequential data and preserve the previous information.
In the current study, the repeating module has a simple framework (a Tanh layer). f_t denotes the output of the forget gate, whose values lie in the range [0, 1].
For the above explanation, the mathematical expression is given below.
$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$ (19)
The next layer of the LSTM block is named the 'input gate' layer, as shown below.
$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$ (20)
$\tilde{C}_t = \phi(W_C \cdot [h_{t-1}, x_t] + b_C)$ (21)
Afterwards, the old cell state C_{t−1} is updated to the new cell state C_t. The output of the forget gate f_t is the decision to forget, and i_t determines how much of the new candidate state C̃_t is added. The update procedure of C_t is described below.
$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t$ (22)
Finally, the interacting layer is named the 'output gate' layer. The procedure for producing the output of the LSTM block is demonstrated herein.
$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o), \qquad h_t = o_t * \phi(C_t)$ (23)
In Equation (23), σ denotes the sigmoid activation function and ϕ refers to the Tanh function. θ = {W, b} characterizes the parameter vector of the network, where W = [W_f, W_i, W_C, W_o] and b = [b_f, b_i, b_C, b_o] indicate the weights and biases, respectively. The forward computation of Equations (19)–(23) is denoted by ŷ = NN(X; θ):
$L(\theta_{LSTM}) = \frac{1}{N} \sum_{i=1}^{N} \left| NN(x_i; \theta) - y_i \right|^2$ (24)
In Equation (24), N indicates the overall number of labeled samples. In the course of LSTM training, θ is tuned continuously by minimizing the loss function via an optimization technique, namely, SGD.
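For reference, a single LSTM step following Equations (19)-(23) can be written compactly as below; the dictionary-based parameter layout and shapes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM step. W and b hold the four gate parameter sets
    (keys "f", "i", "C", "o"); x_t is the input at time t."""
    z = np.concatenate([h_prev, x_t])         # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])        # forget gate, Eq. (19)
    i_t = sigmoid(W["i"] @ z + b["i"])        # input gate, Eq. (20)
    C_tilde = np.tanh(W["C"] @ z + b["C"])    # candidate state, Eq. (21)
    C_t = f_t * C_prev + i_t * C_tilde        # cell update, Eq. (22)
    o_t = sigmoid(W["o"] @ z + b["o"])        # output gate
    h_t = o_t * np.tanh(C_t)                  # hidden output, Eq. (23)
    return h_t, C_t
```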

4. Results and Discussion

The proposed IBESSDL-BCHI method was experimentally validated using the benchmark Breast Cancer Histopathological Database (BreakHis) [4], comprising 1820 HIs. The dataset holds a total of 588 images under the benign class and 1232 images under the malignant class; the details are given in Table 1. A few sample images are showcased in Figure 3.
Figure 4 illustrates a set of confusion matrices generated by the proposed IBESSDL-BCHI method on the test dataset. In run 1, the IBESSDL-BCHI model classified 92 images under class ‘A’, 233 images under class ‘F’, 110 images under class ‘PT’, 126 images under ‘TA’, 771 images under ‘DC’, 109 images under class ‘LC’, 165 images under ‘MC’, and 117 images under ‘PC’.
Table 2 and Figure 5 show the analytical outcomes of the IBESSDL-BCHI model during distinct test runs in terms of its accuracy (accu_y), precision (prec_n), recall (recal), specificity (spec_y), F-score (F_score), and G-mean (G_mean). The experimental values infer that the proposed IBESSDL-BCHI method attained enhanced classification results under every run. For example, in run 1, the IBESSDL-BCHI technique attained average accu_y, prec_n, recal, spec_y, F_score, and G_mean values of 98.67%, 92.79%, 92.19%, 99.18%, 92.27%, and 95.55%, respectively. Additionally, in run 2, it reached average values of 99.48%, 97.22%, 97.29%, 99.68%, 97.20%, and 98.46%, respectively. In run 4, it accomplished average values of 98.76%, 92.99%, 94.14%, 99.26%, 93.49%, and 96.66%, respectively. Along with that, in run 5, it achieved average values of 99.12%, 94.30%, 96.27%, 99.52%, 95.21%, and 97.88%, respectively.
Both the Training Accuracy (TA) and Validation Accuracy (VA) values obtained using the proposed IBESSDL-BCHI method using the test dataset are depicted in Figure 6. The outcomes demonstrate that the proposed IBESSDL-BCHI methodology achieved the highest TA and VA values, while the VA values were superior to the TA values.
Both the Training Loss (TL) and Validation Loss (VL) values attained by the proposed IBESSDL-BCHI methodology using the test data are depicted in Figure 7. The outcomes illustrate that the proposed IBESSDL-BCHI technique demonstrated minimal TL and VL values, while the VL values seemed to be smaller than the TL values.
A brief precision-recall inspection was conducted with the IBESSDL-BCHI method using the test data, and the results are depicted in Figure 8. It is to be noted that the proposed IBESSDL-BCHI approach obtained the maximal precision-recall performance under all of the classes.
A comprehensive ROC inspection was conducted on the proposed IBESSDL-BCHI system using the test dataset, and the results are portrayed in Figure 9. The outcomes show that the proposed IBESSDL-BCHI method depicted capability in categorizing the test dataset into dissimilar classes.
Table 3 provides the overall comparison outcomes achieved by the proposed IBESSDL-BCHI method and other existing models [14,24]. Figure 10 portrays the comparative examination of the techniques in terms of accu_y. The figure implies that the proposed IBESSDL-BCHI system achieved enhanced accu_y values: it obtained a maximum accu_y of 0.9963, whereas the GLCM-KNN, GLCM-NB, GLCM-Discrete transform, GLCM-SVM, GLCM-DL, DL-INV3, and DL-IRV2 models attained lower accu_y values of 0.7617, 0.7845, 0.8500, 0.8500, 0.9244, 0.9471, and 0.8812, respectively.
Figure 11 demonstrates the comparative investigation of the proposed IBESSDL-BCHI approach and the other techniques in terms of prec_n, recal, and F_score. The figure reveals that the proposed IBESSDL-BCHI methodology produced maximum values throughout. With respect to prec_n, the IBESSDL-BCHI method obtained a superior value of 0.9829, whereas the GLCM-KNN, GLCM-NB, GLCM-Discrete transform, GLCM-SVM, GLCM-DL, DL-INV3, and DL-IRV2 systems obtained lower prec_n values of 0.6240, 0.8216, 0.8356, 0.8732, 0.8689, 0.8757, and 0.8170, respectively. Additionally, in terms of recal, the proposed system obtained a maximum value of 0.9809, whereas the compared techniques attained lower recal values of 0.8360, 0.8345, 0.8166, 0.8761, 0.8024, 0.8707, and 0.8144, respectively.
Eventually, with regard to F_score, the proposed IBESSDL-BCHI methodology gained a superior value of 0.9818, whereas the compared models attained lower F_score values of 0.8222, 0.8697, 0.8469, 0.8162, 0.8792, 0.8186, and 0.8642, respectively. From this detailed discussion, it is evident that the proposed IBESSDL-BCHI technique yields an effective breast cancer classification performance.

5. Conclusions

In this study, a new IBESSDL-BCHI method has been developed for both the recognition and classification of BC using HIs. The presented IBESSDL-BCHI model follows a series of processes, namely, MF-based noise removal, SDL feature extraction, IBES-based hyperparameter optimization, and LSTM classification. The design of the IBES algorithm aids in the precise categorization of the HIs into two major classes, namely, benign and malignant. The performance of the proposed IBESSDL-BCHI mechanism was validated using a benchmark dataset, and the IBESSDL-BCHI model achieved a better general efficiency score for BC classification; therefore, the presented model can be preferred over other models for BC diagnosis. In the future, the performance of the presented IBESSDL-BCHI algorithm can be enhanced by using an ensemble of DL models. In addition, the proposed model can also be tested on large-scale real-time datasets to ensure its robustness and scalability. Moreover, the computational complexity of the proposed model can be investigated in future work.

Author Contributions

Conceptualization, H.A.M. and M.K.N.; methodology, M.A.H.; software, A.S.Z. and G.P.M.; validation, M.A.H., H.A.M. and N.A.; formal analysis, G.P.M.; investigation, M.K.N.; resources, A.S.A.A.; data curation, A.A.A.; writing—original draft preparation, H.A.M., M.K.N., M.A.H. and A.A.A.; writing—review and editing, N.A. and A.S.A.A.; visualization, A.A.A.; supervision, H.A.M.; project administration, M.A.H.; funding acquisition, H.A.M. and N.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Princess Nourah bint Abdulrahman University, grant number PNURSP2022R114 and Umm Al-Qura University, grant number 22UQU4310373DSR43.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated during the current study.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through General Research Project under grant number (40/43). Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R114), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4310373DSR43).

Conflicts of Interest

The authors declare that they have no conflict of interest. The manuscript was written using contributions of all authors. All authors have given approval to the final version of the manuscript.

References

  1. Carvalho, E.D.; Filho, A.O.; Silva, R.R.; Araújo, F.H.; Diniz, J.O.; Silva, A.C.; Paiva, A.C.; Gattass, M. Breast cancer diagnosis from histopathological images using textural features and CBIR. Artif. Intell. Med. 2020, 105, 101845.
  2. Han, Z.; Wei, B.; Zheng, Y.; Yin, Y.; Li, K.; Li, S. Breast Cancer Multi-classification from Histopathological Images with Structured Deep Learning Model. Sci. Rep. 2017, 7, 4172.
  3. Yang, J.; Ju, J.; Guo, L.; Ji, B.; Shi, S.; Yang, Z.; Gao, S.; Yuan, X.; Tian, G.; Liang, Y.; et al. Prediction of HER2-positive breast cancer recurrence and metastasis risk from histopathological images and clinical information via multimodal deep learning. Comput. Struct. Biotechnol. J. 2022, 20, 333–342.
  4. Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. Breast cancer histopathological image classification using Convolutional Neural Networks. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 2560–2567.
  5. Belsare, A.D.; Mushrif, M.M.; Pangarkar, M.A.; Meshram, N. Classification of breast cancer histopathology images using texture feature analysis. In Tencon 2015-2015 IEEE Region 10 Conference; IEEE: Manhattan, NY, USA, 2015; pp. 1–5.
  6. Krithiga, R.; Geetha, P. Breast Cancer Detection, Segmentation and Classification on Histopathology Images Analysis: A Systematic Review. Arch. Comput. Methods Eng. 2021, 28, 2607–2619.
  7. Al Rahhal, M.M. Breast cancer classification in histopathological images using convolutional neural network. Breast Cancer 2018, 9, 64–68.
  8. Petushi, S.; Garcia, F.U.; Haber, M.M.; Katsinis, C.; Tozeren, A. Large-scale computations on histology images reveal grade-differentiating parameters for breast cancer. BMC Med. Imaging 2006, 6, 14.
  9. Wang, P.; Wang, J.; Li, Y.; Li, P.; Li, L.; Jiang, M. Automatic classification of breast cancer histopathological images based on deep feature fusion and enhanced routing. Biomed. Signal Process. Control 2021, 65, 102341.
  10. Ahmad, N.; Asghar, S.; Gillani, S.A. Transfer learning-assisted multi-resolution breast cancer histopathological images classification. Vis. Comput. 2021, 38, 2751–2770.
  11. Hirra, I.; Ahmad, M.; Hussain, A.; Ashraf, M.U.; Saeed, I.A.; Qadri, S.F.; Alghamdi, A.M.; Alfakeeh, A.S. Breast Cancer Classification From Histopathological Images Using Patch-Based Deep Learning Modeling. IEEE Access 2021, 9, 24273–24287.
  12. Bardou, D.; Zhang, K.; Ahmad, S.M. Classification of Breast Cancer Based on Histology Images Using Convolutional Neural Networks. IEEE Access 2018, 6, 24680–24693.
  13. Vo, D.M.; Nguyen, N.-Q.; Lee, S.-W. Classification of breast cancer histology images using incremental boosting convolution networks. Inf. Sci. 2019, 482, 123–138.
  14. Xie, J.; Liu, R.; Luttrell, J., IV; Zhang, C. Deep learning based analysis of histopathological images of breast cancer. Front. Genet. 2019, 10, 80.
  15. Alom, Z.; Yakopcic, C.; Nasrin, M.S.; Taha, T.M.; Asari, V.K. Breast Cancer Classification from Histopathological Images with Inception Recurrent Residual Convolutional Neural Network. J. Digit. Imaging 2019, 32, 605–617.
  16. Yang, H.; Kim, J.Y.; Kim, H.; Adhikari, S.P. Guided soft attention network for classification of breast cancer histopathology images. IEEE Trans. Med. Imaging 2019, 39, 1306–1315.
  17. Ali, R.; Hardie, R.C.; Narayanan, B.N.; Kebede, T.M. IMNets: Deep Learning Using an Incremental Modular Network Synthesis Approach for Medical Imaging Applications. Appl. Sci. 2022, 12, 5500.
  18. Chowdhury, D.; Das, A.; Dey, A.; Sarkar, S.; Dwivedi, A.D.; Mukkamala, R.R.; Murmu, L. ABCanDroid: A Cloud Integrated Android App for Noninvasive Early Breast Cancer Detection Using Transfer Learning. Sensors 2022, 22, 832.
  19. Narayanan, B.N.; Krishnaraja, V.; Ali, R. Convolutional neural network for classification of histopathology images for breast cancer detection. In Proceedings of the 2019 IEEE National Aerospace and Electronics Conference (NAECON), Dayton, OH, USA, 15–19 July 2019; pp. 291–295.
  20. Patidar, P.; Gupta, M.; Srivastava, S.; Nagawat, A.K. Image De-noising by Various Filters for Different Noise. Int. J. Comput. Appl. 2010, 9, 45–50.
  21. Mansour, R.F.; Althobaiti, M.M.; Ashour, A.A. Internet of Things and Synergic Deep Learning Based Biomedical Tongue Color Image Analysis for Disease Diagnosis and Classification. IEEE Access 2021, 9, 94769–94779.
  22. Alsattar, H.A.; Zaidan, A.A.; Zaidan, B.B. Novel meta-heuristic bald eagle search optimisation algorithm. Artif. Intell. Rev. 2020, 53, 2237–2264.
  23. Luo, X.; Zhang, D.; Zhu, X. Deep learning based forecasting of photovoltaic power generation by incorporating domain knowledge. Energy 2021, 225, 120240.
  24. Reshma, V.K.; Arya, N.; Ahmad, S.S.; Wattar, I.; Mekala, S.; Joshi, S.; Krah, D. Detection of Breast Cancer Using Histopathological Image Classification Dataset with Deep Learning Techniques. BioMed Res. Int. 2022, 2022, 8363850.
Figure 1. Workflow of the proposed IBESSDL-BCHI methodology.
Figure 2. Architecture of the SDL network.
Figure 3. Sample images.
Figure 4. Confusion matrices of the proposed IBESSDL-BCHI approach: (a) Run 1, (b) Run 2, (c) Run 3, (d) Run 4, and (e) Run 5.
Figure 5. Analytical results of the IBESSDL-BCHI approach during distinct runs.
Figure 6. TA and VA analysis results of the IBESSDL-BCHI approach.
Figure 7. TL and VL analysis results of the IBESSDL-BCHI methodology.
Figure 8. Precision-recall outcomes of the IBESSDL-BCHI approach.
Figure 9. ROC curve analysis results of the IBESSDL-BCHI approach.
Figure 10. Accu_y analysis results of the IBESSDL-BCHI approach and other existing methodologies.
Figure 11. Comparative analysis outcomes of the proposed IBESSDL-BCHI approach and other existing methodologies.
Table 1. Dataset details (total number of images = 1820).

Class Names            Labels    No. of Images
Benign
  Adenosis             A         106
  Fibroadenoma         F         237
  Phyllodes Tumor      PT        115
  Tubular Adenoma      TA        130
  Total (benign)                 588
Malignant
  Ductal Carcinoma     DC        788
  Lobular Carcinoma    LC        137
  Mucinous Carcinoma   MC        169
  Papillary Carcinoma  PC        138
  Total (malignant)              1232
Table 2. Analytical results of the IBESSDL-BCHI approach with distinct measures and runs (all values in %).

Labels     Accu_y    Prec_n    Recal     Spec_y    F_score   G_mean
Run 1
A          98.02     80.70     86.79     98.72     83.64     92.56
F          98.96     93.95     98.31     99.05     96.08     98.68
PT         98.57     83.97     95.65     98.77     89.43     97.20
TA         99.56     96.92     96.92     99.76     96.92     98.33
DC         97.97     97.47     97.84     98.06     97.66     97.95
LC         98.19     95.61     79.56     99.70     86.85     89.06
MC         99.34     95.38     97.63     99.52     96.49     98.57
PC         98.74     98.32     84.78     99.88     91.05     92.02
Average    98.67     92.79     92.19     99.18     92.27     95.55
Run 2
A          99.56     99.00     93.40     99.94     96.12     96.61
F          98.63     98.62     90.72     99.81     94.51     95.16
PT         99.78     96.64     100.00    99.77     98.29     99.88
TA         100.00    100.00    100.00    100.00    100.00    100.00
DC         99.56     99.12     99.87     99.32     99.49     99.60
LC         99.18     91.22     98.54     99.23     94.74     98.88
MC         99.84     98.82     99.41     99.88     99.12     99.64
PC         99.29     94.33     96.38     99.52     95.34     97.94
Average    99.48     97.22     97.29     99.68     97.20     98.46
Run 3
A          99.67     98.08     96.23     99.88     97.14     98.04
F          99.62     99.15     97.89     99.87     98.51     98.88
PT         99.62     95.00     99.13     99.65     97.02     99.39
TA         99.73     97.71     98.46     99.82     98.08     99.14
DC         99.07     98.61     99.24     98.93     98.92     99.09
LC         99.89     99.27     99.27     99.94     99.27     99.60
MC         99.89     100.00    98.82     100.00    99.40     99.41
PC         99.56     98.51     95.65     99.88     97.06     97.74
Average    99.63     98.29     98.09     99.75     98.18     98.91
Run 4
A          98.57     85.09     91.51     99.01     88.18     95.18
F          98.46     95.63     92.41     99.37     93.99     95.82
PT         99.01     89.43     95.65     99.24     92.44     97.43
TA         99.40     93.43     98.46     99.47     95.88     98.96
DC         97.80     98.07     96.83     98.55     97.45     97.68
LC         98.08     84.46     91.24     98.63     87.72     94.87
MC         99.62     99.39     96.45     99.94     97.90     98.18
PC         99.18     98.43     90.58     99.88     94.34     95.12
Average    98.76     92.99     94.14     99.26     93.49     96.66
Run 5
A          99.18     93.33     92.45     99.59     92.89     95.96
F          99.51     98.31     97.89     99.75     98.10     98.81
PT         98.52     83.33     95.65     98.71     89.07     97.17
TA         99.23     92.03     97.69     99.35     94.78     98.52
DC         98.35     99.74     96.45     99.81     98.06     98.11
LC         99.34     93.10     98.54     99.41     95.74     98.97
MC         99.45     98.18     95.86     99.82     97.01     97.82
PC         99.40     96.35     95.65     99.70     96.00     97.66
Average    99.12     94.30     96.27     99.52     95.21     97.88
Table 3. Comparative analysis outcomes of the IBESSDL-BCHI approach and other existing approaches using different measures [14,24].

Methods                    Accu_y    Prec_n    Recal     F_score
GLCM-KNN Model             0.7617    0.6240    0.8360    0.8222
GLCM-NB Model              0.7845    0.8216    0.8345    0.8697
GLCM-Discrete transform    0.8500    0.8356    0.8166    0.8469
GLCM-SVM Model             0.8500    0.8732    0.8761    0.8162
GLCM-DL Model              0.9244    0.8689    0.8024    0.8792
Deep Learning-INV3         0.9471    0.8757    0.8707    0.8186
Deep Learning-IRV2         0.8812    0.8170    0.8144    0.8642
IBESSDL-BCHI               0.9963    0.9829    0.9809    0.9818
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
