Article

Mammogram Image Enhancement Techniques for Online Breast Cancer Detection and Diagnosis

by Daniel S. da Silva 1, Caio S. Nascimento 1, Senthil K. Jagatheesaperumal 2 and Victor Hugo C. de Albuquerque 1,*

1 Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
2 Department of Electronics and Communication Engineering, Mepco Schlenk Engineering College, Sivakasi 626005, TN, India
* Author to whom correspondence should be addressed.
Sensors 2022, 22(22), 8818; https://doi.org/10.3390/s22228818
Submission received: 24 October 2022 / Revised: 9 November 2022 / Accepted: 10 November 2022 / Published: 15 November 2022

Abstract

Breast cancer has the highest incidence and mortality among cancers affecting women. The adoption of modern technologies that accelerate, automate and reduce the subjectivity of medical diagnosis is therefore of paramount importance for efficient treatment. This work proposes a robust platform to compare and evaluate strategies for enhancing breast ultrasound images against state-of-the-art techniques, classifying the images as benign, malignant or normal. Investigations were performed on a dataset containing a total of 780 breast ultrasound images divided into benign, malignant and normal classes. A data augmentation technique was used to scale up the corpus of images available in the chosen dataset. Novel image enhancement techniques were applied, and the Multilayer Perceptron, k-Nearest Neighbor and Support Vector Machine algorithms were used for classification. From the outcomes of the conducted experiments, the bilateral algorithm together with the SVM classifier achieved the best result for the classification of breast cancer, with an overall accuracy of 96.69% and an accuracy for the detection of malignant nodules of 95.11%. The application of image enhancement methods can therefore help detect breast cancer at a much earlier stage and with better accuracy.

1. Introduction

It is reported that 24.2% of women with cancer in the world are affected by breast cancer each year, and 15% of female cancer deaths are attributed to this type of cancer [1]. According to the World Health Organization (WHO), 2.3 million women were diagnosed with breast cancer in 2020, and 685,000 deaths were reported [2]. In addition, as per the study in [3], it is anticipated that breast cancer cases will increase over the years, reaching around 27 million in 2030.
Some patients have breast cancer cells but are asymptomatic, so frequent screening plays a vital role in detecting breast cancer before it worsens and progresses to subsequent stages [4]. To detect breast cancer, imaging tests can be used as well as biopsy; however, biopsy is considered an invasive method, while imaging tests are more conservative [5].
Among the imaging tests, the most commonly used breast cancer detection techniques are mammography, ultrasound and MRI [6,7]. Although imaging tests are fundamental in the identification and diagnosis of breast cancer, on several occasions the tests are affected by noise, low contrast and other factors that impair the images and make an efficient diagnosis difficult [8,9].
Due to the challenges encountered in the early stage of accurate diagnosis of breast cancer, industry and academia have been involved in active research with the aim of proposing computational tools capable of performing a diagnosis automatically [10].
The concept of computer-aided diagnosis (CAD) is being increasingly used to assist in the medical analysis of various diseases, such as, for example, vertebrae segmentation [11], diagnosis of Parkinson’s disease [10,12], perception of blood flow dynamics from static CT angiography images [13], detection and classification of multiclass skin lesions via teledermatology [14], EEG-based BCI rehabilitation [15], recognition and detection of atrial fibrillation [16], detection of pulmonary nodules on CT scans [17], classification of oral cancer [18], as well as addressing security and efficient authentication for IoT applications in the medical field. In addition, research has been conducted on ways to improve the images [19,20,21], segment the parts of interest [22,23,24], and classify the nodules [25,26,27].
Thus, the general objective of this research is to develop a robust platform to compare and evaluate image enhancement methods for breast ultrasound, classifying the images as benign, malignant or normal. To this end, we apply novel image enhancement methods; compare the applied methods; evaluate their performance through quality metrics; classify the images into three groups (normal, benign and malignant); and, finally, evaluate and validate the models used.
The main contributions of this research can be observed below:
  • Development of a comparative analysis on the evaluation of breast cancer image enhancement methods to improve the accuracy in the detection of malignant tumors;
  • Comparing different image enhancement techniques and classification techniques focused on breast tumors;
  • Validating the results through statistical evaluations and estimating the best strategy for pre-screening of tumors;
  • Providing an online processing tool for breast cancer detection for early diagnosis and treatment.
Therefore, this work is structured in four sections. The first section deals with the introduction, addressing the contextualization, issues, objectives, and contributions. Then, the second section presents the methods incorporated for the development of the research. The third section addresses the results obtained in relation to image improvement and classification. Finally, the last section summarizes the work with conclusions and notes on future work.

2. Materials and Methods

In this section, the application steps of the proposed system are presented, covering the web platform, the image enhancement algorithms, the feature extraction method, the classifiers, the validation metrics, and the experimental settings used in this study. Figure 1 shows the sequence of stages involved in the automated pre-screening of breast tumors using the proposed clinical decision-making system.
To provide an interactive screening environment for tumor-affected patients, the proposed web platform was built using HTML 5 and CSS 3 for the user-facing front end, and the Flask micro-framework as a fast and lightweight development tool for the back end [28,29]. The back end is responsible for performing all the image enhancement, feature extraction [30], classification and metric computation, which are discussed in the sections below.
Regarding the functioning of the platform, acquisition is performed from user uploads of breast ultrasound images with png, jpg, jpeg or gif extensions, which are channeled for cloud processing. The enhanced image features, the generated images and their associated metrics are archived in a folder for further analysis and possible improvement of the training phases of the presented classification algorithms [31].
Following the enhancement, the generated images are processed by the feature extractor and classified as normal, benign or malignant, and the metrics related to these processes are recorded in back-end variables. Once all the results are calculated, the image storage paths and the metric values are stored in a JSON file, which the front end reads so that the user can choose the visualization of interest.
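A minimal sketch of this upload-and-record flow, assuming Flask; the endpoint name, form field and folder layout below are illustrative placeholders, not the authors' implementation:

```python
import json
import os

from flask import Flask, request, jsonify

app = Flask(__name__)
UPLOAD_DIR = "uploads"                       # hypothetical folder name
ALLOWED = {".png", ".jpg", ".jpeg", ".gif"}  # extensions accepted by the platform

@app.route("/upload", methods=["POST"])      # hypothetical endpoint name
def upload():
    f = request.files["image"]
    ext = os.path.splitext(f.filename)[1].lower()
    if ext not in ALLOWED:
        return jsonify(error="unsupported extension"), 400
    os.makedirs(UPLOAD_DIR, exist_ok=True)
    path = os.path.join(UPLOAD_DIR, f.filename)
    f.save(path)
    # Enhancement, feature extraction and classification would run here;
    # their output paths and metric values are archived in a JSON file
    # that the front end reads to build the visualizations.
    result = {"image_path": path, "metrics": {}}
    with open(os.path.join(UPLOAD_DIR, "result.json"), "w") as out:
        json.dump(result, out)
    return jsonify(result)
```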

2.1. Database Description

The breast ultrasound dataset used in this study was proposed in [32] and includes data from 600 female patients aged between 25 and 75 years. The dataset comprises a total of 780 grayscale images separated into three classes: normal, with 133 images; benign, with 437 images; and malignant, with 210 images. The images have an average size of 500 × 500 pixels, in PNG format. Figure 2 shows an example ultrasound image for each class: a normal image, an image with a benign nodule, and an image with a malignant nodule. Table 1 lists the image counts for each category considered in the analysis.

Data Augmentation

To increase the number of images in the base, the data augmentation technique was used. The transformations applied were shear, rotation, horizontal translation (x-axis) and vertical translation (y-axis), with corresponding transformation values of 15°, 15°, 10° and 10°, respectively. For each image, the type of transformation is chosen at random, and this is performed three times. The base dataset is thus augmented with three transformed copies per image, so that the final corpus contains both the original images and the augmented ones. A sketch of this step is shown below.
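The sketch below assumes Keras' ImageDataGenerator, since the paper does not specify the implementation; note that this generator draws all four transformations jointly for each copy, whereas the paper picks one transformation type at random:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    shear_range=15,         # shear angle, in degrees
    rotation_range=15,      # rotation, in degrees
    width_shift_range=10,   # horizontal translation, in pixels
    height_shift_range=10,  # vertical translation, in pixels
)

def augment(image, copies=3):
    """Return `copies` randomly transformed versions of a single image."""
    batch = image[np.newaxis, ...]          # shape (1, H, W, C)
    it = datagen.flow(batch, batch_size=1)
    return [next(it)[0] for _ in range(copies)]
```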

2.2. Image Enhancement Techniques

The methods used for image enhancement are categorized into traditional methods and Deep Learning-based methods, as described below.

2.2.1. Bilateral

The bilateral method proposed in [33] provides a traditional, iterative, local and simple strategy that smooths images while preserving edges, through a nonlinear combination of nearby image values. The bilateral filter combines gray levels or colors based on their geometric closeness and photometric similarity, preferring near values to distant values in both domain and range. An example follows.
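For illustration, OpenCV provides a bilateral filter; the parameter values below are illustrative, not those used in the paper:

```python
import cv2

img = cv2.imread("ultrasound.png", cv2.IMREAD_GRAYSCALE)
# d: neighbourhood diameter; sigmaColor: photometric similarity;
# sigmaSpace: geometric closeness.
smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
cv2.imwrite("ultrasound_bilateral.png", smoothed)
```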

2.2.2. Histogram Equalization

Histogram equalization (HE) is another traditional image enhancement method. HE generally increases the overall contrast of an image, especially when the image data are concentrated in a narrow range of close contrast values; it is a nonlinear stretching of the image that redistributes pixel values so that, within a given grayscale range, the number of pixels is approximately equal [34]. A minimal example follows.
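A minimal example of global histogram equalization with OpenCV, assuming an 8-bit grayscale input:

```python
import cv2

img = cv2.imread("ultrasound.png", cv2.IMREAD_GRAYSCALE)
equalized = cv2.equalizeHist(img)  # redistributes the gray-level histogram
cv2.imwrite("ultrasound_he.png", equalized)
```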

2.2.3. Total Variation

The total variation (TV) method proposed in [35] is another traditional image enhancement method. The author proposes an algorithm for total variation minimization and its applications. It is a very fast method that can be used to address noise reduction and zooming, and it comes with a proof of convergence. An example follows.
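Chambolle's total variation minimization is available in scikit-image; a short example with an illustrative weight:

```python
import numpy as np
from skimage import io
from skimage.restoration import denoise_tv_chambolle

img = io.imread("ultrasound.png", as_gray=True)   # float image in [0, 1]
denoised = denoise_tv_chambolle(img, weight=0.1)  # larger weight = smoother
io.imsave("ultrasound_tv.png", (np.clip(denoised, 0, 1) * 255).astype(np.uint8))
```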

2.2.4. Low-Light Image Enhancement via Illumination Map Estimation

The Low-Light Image Enhancement via Illumination Map Estimation (LIME) method is a simple low-light image enhancement method proposed in [36]. In this method, the illumination of each pixel is first estimated individually by finding the maximum value among its R, G and B components. The initial illumination map is then refined by imposing a structure prior on it, yielding the final illumination map; with a well-constructed illumination map, the enhancement can be achieved [36].

2.2.5. Exposure Fusion

In the work by Ying et al. [37], the authors developed a new image contrast enhancement algorithm using an exposure fusion framework. This method is intended for low-light images, where a weight matrix is designed for image fusion using illumination estimation techniques. In the proposed model, the camera response is used to synthesize multiple exposure images and then to estimate the best exposure ratio, such that the synthetic image is better exposed in the regions where the input image was underexposed. The input image and the synthetic image are then fused according to the weight matrix to achieve the expected enhancement result [37].

2.2.6. Gamma Correction

The method proposed in [38] addresses the contrast enhancement of brightness-distorted images by improved adaptive gamma correction. The technique uses a negative image strategy to enhance the contrast of bright images and a truncated cumulative distribution function-modulated gamma correction to enhance dimmed ones; in this way, structural distortion and the challenges of local enhancement can be effectively alleviated.
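For reference, plain (non-adaptive) gamma correction takes only a few lines; the adaptive, CDF-modulated variant of [38] is more involved and is not shown here:

```python
import cv2
import numpy as np

img = cv2.imread("ultrasound.png", cv2.IMREAD_GRAYSCALE)
gamma = 0.7  # < 1 brightens dark regions; > 1 darkens bright ones
table = np.array([(i / 255.0) ** gamma * 255 for i in range(256)], dtype="uint8")
corrected = cv2.LUT(img, table)  # apply the gamma curve as a lookup table
cv2.imwrite("ultrasound_gamma.png", corrected)
```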

2.2.7. Light-DehazeNet

In the work of [39], a new lightweight CNN architecture for single image dehazing is proposed. Light-DehazeNet (LD-Net) jointly estimates the transmission map and the atmospheric light using a transformed atmospheric scattering model. For breast ultrasound images with high-density impulse noise, LD-Net offers a fast and effective means of image denoising.

2.2.8. Zero-Reference Deep Curve Estimation

The work in [40] presents Zero-Reference Deep Curve Estimation (Zero-DCE), a new method that formulates light enhancement as an image-specific curve estimation task solved with a deep neural network. It performs pixel-wise, high-order curve estimation and assists in adjusting the dynamic range of the input image. Further, the non-reference loss functions used to implement Zero-DCE yield an intuitive and simple nonlinear curve mapping.

2.2.9. Low-Light Image Enhancement with Normalizing Flow

The research in [41] proposes a new flow-based low-light image enhancement with normalizing flow method, which accurately learns global image properties as well as local pixel correlations by modeling distributions of normally exposed images. It captures the one-to-many mapping from a low-light image to its possible normally exposed counterparts through a conditional distribution. This LLFLOW technique achieves enhanced images with better illumination, rich colors, and less noise and fewer artifacts.

2.3. Data Extraction

CNNs have been widely used for various purposes and applications [42,43,44,45]. To extract features from the images, we used the Resnet CNN proposed in [46], which possesses a residual learning structure that facilitates the training of substantially deeper networks. This type of network is easier to optimize and can gain accuracy even at considerably greater depth. Several methods have been developed based on Resnet, such as [47,48,49,50], which were observed to eliminate noise interference in the images and to help identify even complex features with better accuracy.
In the Resnet architecture, the vast majority of convolutional layers have 3 × 3 filters and follow two basic rules. First, layers producing output feature maps of the same size have the same number of filters. Second, when the feature map size is halved, the number of filters is doubled to preserve the per-layer time complexity. Downsampling is performed directly by convolutional layers. The network ends with a global average pooling layer and a 1000-way fully connected layer with softmax; the variant described in [46] has a total of 34 weighted layers.
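A minimal sketch of the extraction step used here, assuming the Keras implementation of ResNet50 with ImageNet weights (the paper does not specify the framework); removing the top and pooling globally yields 2048-dimensional feature vectors for the classifiers described next:

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

# include_top=False drops the 1000-way softmax head; global average pooling
# turns each image into a 2048-dimensional feature vector.
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3) -> (N, 2048) features."""
    return extractor.predict(preprocess_input(images))
```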

2.4. Classification Methods

For this study, the fully connected layer of the network was removed and only the Resnet50 backbone was used to perform the feature extraction. Subsequently, the features extracted by Resnet50 were classified with the MLP, SVM and kNN classifiers.

2.4.1. Multi-Layer Perceptron

The MLP is a supervised learning technique consisting of a feed-forward artificial neural network, inspired by biological neurons, with three types of layers: the input layer, which collects the input data; the output layer, which gives the decision over the input data; and the hidden layers, which lie between the input and output layers [51,52].
At least one hidden layer is present in an MLP, and there can be many. With the exception of the input layer, each node is a neuron with a nonlinear activation function [53].

2.4.2. Support Vector Machine

The support vector machine (SVM) is based on statistical learning theory [54]. SVM is a supervised machine learning technique that builds a set of hyperplanes in a high-dimensional space; a good separation is achieved by the hyperplane with the greatest distance to the closest training data point of any class [55].

2.4.3. k-Nearest Neighbor

The k-Nearest Neighbor (kNN) algorithm is a supervised machine learning technique for regression and classification [56]. The idea of kNN is that the k points closest to the sample under test are found, usually through the Euclidean distance [57]. If most of these k points belong to the same category, the sample is assigned to that class [58].
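For illustration, the three classifiers can be instantiated and evaluated on the extracted features as in the following scikit-learn sketch; hyperparameter values here are library defaults, not the tuned values of Section 2.6:

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

classifiers = {
    "MLP": MLPClassifier(max_iter=500),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}

def evaluate(features, labels, folds=20):
    """Cross-validated accuracy of each classifier on the ResNet50 features."""
    return {name: cross_val_score(clf, features, labels, cv=folds).mean()
            for name, clf in classifiers.items()}
```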

2.5. Statistical Metrics

The metrics used in the analysis evaluate both the quality of the enhanced images and the classification accuracy.

2.5.1. Quality Metrics

The quality of the enhancement process was assessed through the following metrics (a combined computation sketch is given after the list):
  • RMSE: The Root Mean Square Error (RMSE) measures the error between two sets of data; the closer it is to zero, the more accurate the result [59]. Here, the comparison is made between the original breast ultrasound image and the image obtained after applying each enhancement algorithm.
    $RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(P_i - O_i)^2}$
    where $P_i$ is the actual value, $O_i$ is the predicted value, and $n$ is the total number of values.
  • CNR: The contrast-to-noise ratio (CNR) measures the contrast of images. It enables analysis of the difference in contrast between the nodules and the other regions of the breast ultrasound images [60].
    $CNR = \frac{|\mu_i - \mu_o|}{\sqrt{\sigma_i^2 + \sigma_o^2}}$
    where
    $\sigma_i^2 = E\{(s_i - \mu_i)^2\}$ and $\sigma_o^2 = E\{(s_o - \mu_o)^2\}$
    are the variances of the signal intensity inside and outside the target area, respectively, and $\mu_i$ and $\mu_o$ are the corresponding mean intensities.
  • AMBE: The Absolute Mean Brightness Error (AMBE) evaluates the difference between the average intensity level of the enhanced ultrasound image and that of the original image [61].
    $AMBE = |I(y) - I(x)|$
    where $I(y)$ is the average intensity level of the enhanced image and $I(x)$ is the average intensity level of the original image.
  • AG: The Average Gradient (AG) represents the clarity of the breast ultrasound image, reflecting the image’s ability to express contrast details between the nodule and the other regions [62].
    $AG = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\sqrt{\frac{(\partial f/\partial x)^2 + (\partial f/\partial y)^2}{2}}$
    where $M$ and $N$ are the width and height of the image, and $\partial f/\partial x$ and $\partial f/\partial y$ are the horizontal and vertical gradients.
  • PSNR: The Peak Signal-to-Noise Ratio (PSNR) evaluates the ratio between the maximum possible value of the signal and the noise that affects the breast ultrasound images [59].
    $PSNR = 20\log_{10}\left(\frac{MAX_f}{RMSE}\right)$
    where $MAX_f$ is the maximum possible pixel value and $RMSE$ is the root mean square error defined above.
  • SSIM: The Structural Similarity Index (SSIM) is a quality assessment metric used to measure the visual changes and similarity between two images by comparing their structural characteristics [59]. It is used here to analyze the similarity between the original breast ultrasound image and the image obtained after applying each enhancement algorithm.
    $SSIM(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$
    where $\mu_x$ and $\mu_y$ are the average luminance values of images $x$ and $y$, $\sigma_x$ and $\sigma_y$ their contrast values, $\sigma_{xy}$ their covariance, and $c_1$ and $c_2$ two constants that stabilize the division when the denominator approaches zero.
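As referenced above, the following sketch computes all six quality metrics with NumPy and scikit-image, assuming 8-bit grayscale inputs and binary masks delimiting the inside and outside of the target region (the mask construction is not specified in the paper):

```python
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

def quality_metrics(original, enhanced, inside_mask, outside_mask):
    """original/enhanced: uint8 grayscale images; masks: boolean arrays."""
    rmse = np.sqrt(mean_squared_error(original, enhanced))
    ambe = abs(enhanced.mean() - original.mean())
    psnr = peak_signal_noise_ratio(original, enhanced, data_range=255)
    ssim = structural_similarity(original, enhanced, data_range=255)
    mu_i, mu_o = enhanced[inside_mask].mean(), enhanced[outside_mask].mean()
    cnr = abs(mu_i - mu_o) / np.sqrt(enhanced[inside_mask].var()
                                     + enhanced[outside_mask].var())
    gy, gx = np.gradient(enhanced.astype(float))    # vertical, horizontal
    ag = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))  # average gradient
    return {"RMSE": rmse, "CNR": cnr, "AMBE": ambe,
            "AG": ag, "PSNR": psnr, "SSIM": ssim}
```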

2.5.2. Rank Metrics

The classification of clinical data presents a few challenges, two of which are primary. The first is the unbalanced dataset, in which more cases have negative diagnoses than positive ones. The second is that the main interest lies in accurately classifying the positive cases of the disease, since false positives do not cause great damage, while false negatives can delay treatment and consequently hinder early diagnosis.
To measure the accuracy of the methods, the confusion matrix is used as a basis, built from the counts of True Positives, False Positives (FP), True Negatives and False Negatives (FN).
As this research has three classes (benign, malignant and normal), the confusion matrix can be described as shown in Figure 3.
In this way, we have:
  • True Positive Benign class (VB): VB occurs when, in the actual dataset, the Benign class is correctly predicted as Benign.
  • True Positive Malignant class (VM): VM occurs when, in the actual dataset, the Malignant class is correctly predicted as Malignant.
  • True Positive Normal class (VN): VN occurs when, in the actual dataset, the Normal class is correctly predicted as Normal.
  • False Negative (FN): FN occurs when the class we are trying to predict is predicted incorrectly, that is, when a cancer case is diagnosed as non-cancer.
  • False Positive (FP): FP occurs when the class we are trying to predict is predicted incorrectly, that is, when a non-cancer case is diagnosed as cancer.
Based on these counts, five metrics were used to evaluate the results: global accuracy (AccGlobal), F1-score, benign class hit rate (Benign), malignant class hit rate (Malignant), and normal class hit rate (Normal). A sketch computing them from the confusion matrix follows the list.
  • Accuracy: the overall probability of success, showing the global hit rate over the analyzed classes. It considers the hits of the three classes over all hits and misses.
    $AccGlobal = \frac{VB + VM + VN}{VB + VM + VN + FN + FP}$
  • F1-score: It is the harmonic average between precision and recall. It is a commonly used metric to assess unbalanced data.
    $F1_{score} = \frac{2 \times Precision \times Recall}{Precision + Recall}$
  • Hit rate of benign class (Benign): This is the probability that a patient who has a positive diagnosis for benign actually has a benign nodule.
    $Benign = \frac{VB}{VB + FP_B + FN_B}$
    where $FP_B$ and $FN_B$ are the false positives and false negatives for the Benign class (and analogously for the other classes below).
  • Hit rate of Malignant class: This is the probability that a patient who has a positive diagnosis for malignant actually has a malignant nodule.
    $Malignant = \frac{VM}{VM + FP_M + FN_M}$
  • Hit rate of the Normal class (Normal): This is the probability that a patient who has a negative diagnosis for nodules actually does not have nodules.
    $Normal = \frac{VN}{VN + FP_N + FN_N}$
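As referenced above, these rank metrics can be computed directly from the 3 × 3 confusion matrix; the sketch below assumes rows are true classes and columns are predictions, in the order benign, malignant, normal:

```python
import numpy as np

def rank_metrics(cm):
    """cm: 3x3 confusion matrix, rows = true classes, columns = predicted."""
    correct = np.diag(cm)              # VB, VM, VN
    fp = cm.sum(axis=0) - correct      # false positives per class
    fn = cm.sum(axis=1) - correct      # false negatives per class
    acc_global = correct.sum() / cm.sum()
    hit_rate = correct / (correct + fp + fn)
    return {"AccGlobal": acc_global, "Benign": hit_rate[0],
            "Malignant": hit_rate[1], "Normal": hit_rate[2]}
```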

2.6. Experimental Configuration

Initially, Resnet50 (a 50-layer deep CNN) was implemented as a feature extractor, learning intrinsic patterns from the images to identify breast cancer. For classification, the MLP, SVM and kNN algorithms were used. The data extracted by Resnet50 were split using k-fold cross-validation with 20 folds. Hyperparameters were optimized through grid search: for kNN, the number of neighbors ∈ [3, 10] and the leaf size ∈ [10, 50]; for SVM, $\gamma \in [2^{-15}, 2^{1}]$ and $C \in [2^{-5}, 2^{5}]$; for MLP, the number of hidden layers ∈ [1, 5], the number of neurons per hidden layer ∈ [50, 500], $\alpha \in [0.00001, 1]$, and the learning rate ∈ [0.00001, 0.9999].
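As an illustration, a grid search over the SVM ranges above could be set up as follows with scikit-learn's GridSearchCV; the grids shown are coarse power-of-two samples of the stated intervals, not the authors' exact grids, and `features`/`labels` stand for the ResNet50 outputs and class labels:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "gamma": 2.0 ** np.arange(-15, 2),  # gamma in [2^-15, 2^1]
    "C": 2.0 ** np.arange(-5, 6),       # C in [2^-5, 2^5]
}
search = GridSearchCV(SVC(), param_grid, cv=20, n_jobs=-1)
# search.fit(features, labels); search.best_params_ then holds the tuned values.
```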
The experiments were performed on a computing terminal with the Windows 11 operating system (Microsoft Corporation), an 8-core 11th-generation Intel i7-11800H processor (24 MB cache, up to 4.6 GHz), 16 GB of DDR4 RAM at 3200 MHz, and an NVIDIA GeForce RTX 3060 video card (NVIDIA Corporation) with 6 GB of GDDR6 memory.

3. Results and Discussion

3.1. Image Enhancement

With the application of the enhancement algorithms to the ultrasound images, the processing duration of each approach was recorded, as summarized in Table 2. It is evident from the statistics that the Bilateral and LIME algorithms consume much longer processing times than the other algorithms, particularly the Bilateral with 1344 min. The Gamma correction method had the shortest processing time, at 2.8 min.
Beyond processing time, the metrics presented in Table 3 are used to assess the quality of the obtained images. Regarding the RMSE and AMBE metrics, the algorithm with the best results was TV, at 0.0227 and 0.0017, respectively. The Bilateral algorithm achieved results close to TV, with an RMSE of 0.0329 and an AMBE of 0.0063. The HE and LIME algorithms obtained inferior results in both the RMSE and AMBE metrics compared with the other algorithms.
Regarding the CNR metric, the algorithms with the highest contrast-to-noise ratio were the HE and LIME methods, while the TV and Bilateral methods had the lowest.
Analyzing the AG metric, which evaluates the change in the intensity of pixel values, the smallest change was observed for the Bilateral and LIME methods, both with 0.0, while the Ying method caused the greatest change in intensity. From the perspective of the PSNR metric, the TV method achieved the best result with 81.0792, followed by the Bilateral method with 77.8054; the Gamma algorithm achieved the worst result, at 36.7292.
In terms of the SSIM metric, the Ying algorithm achieves the result closest to the original image with 0.9394, while LDNet is the most distant method with 0.3705, followed by LIME with 0.3778. The Gamma, Bilateral, TV and LLFLOW methods achieved SSIM values above 0.80.
Considering all the metrics together, the TV method achieved the best performance in three of the analyzed metrics (RMSE, AMBE and PSNR). The Bilateral method achieved the best result in the AG metric and also achieved good results in the RMSE, AMBE and PSNR metrics.
With the application of the image enhancement algorithms, Figure 4, Figure 5 and Figure 6 show the elimination of some of the noise that shares the characteristics of the region of interest. The TV and Bilateral algorithms, which presented the best quality-metric results, produce images with less noise than the other algorithms, which can facilitate better classification accuracy.

3.2. Classification

From the features extracted with Resnet50, results were obtained using the MLP, kNN and SVM algorithms for the metrics of global accuracy (ACC Global), benign hit rate (Benign), malignant hit rate (Malignant), normal hit rate (Normal) and F1-score. Furthermore, Table 4 presents the training and testing times of each image enhancement algorithm combined with the MLP, kNN and SVM classifiers.
It is worth noting that the kNN classifier provides the shortest training times among the three algorithms; the combination of the HE enhancement approach with kNN achieved the shortest training time overall. The MLP classifier, despite having the longest training times, achieves the shortest test times, and the TV method in association with the MLP achieved the shortest test time overall.
Table 5 presents the metrics for each image enhancement algorithm, as well as for the original base. Analyzing the enhancement algorithms for each classifier individually, for the MLP classifier the algorithm with the best accuracy was TV, at 95.64%, which is statistically similar to the Bilateral method with its accuracy rate of 95.54%.
For the kNN classifier, the algorithm with the best results, both in global accuracy and in the hit rate for images with malignant nodules, was LDNET, with 83.23% and 76.90%, respectively. For the SVM classifier, the best global accuracy was obtained by the Bilateral method, at 96.69%. The TV algorithm delivers an accuracy rate of 96.66%, which, by the Wilcoxon test, is statistically equivalent to the Bilateral method.
In general, considering the best combination of image enhancement algorithm and classifier, the one that delivers the best results with balanced trade-offs is the Bilateral algorithm in association with the SVM classifier.
Although global accuracy is a valid metric, the hit rate in the classification of images with malignant nodules is fundamental for this analysis, as it is more clinically relevant. For this malignant metric, the Bilateral algorithm with the SVM classifier achieved the best result among all combinations, with 95.11%.
Taking into account the training and testing times together with the hit metrics, it is clear that, despite kNN having the shortest training times, it delivered inferior results for all image enhancement methods. The SVM, whose training and test times were neither the shortest nor the longest, provides the best classification accuracy.
Considering only the best combination that was targeted for achieving the best result of global and malignant accuracy, Figure 7 shows the confusion matrix for the combination of the bilateral method with the SVM classifier.
It is evident from the analysis that 1701 benign images were correctly classified, while the malignant and normal classes had 799 and 517, respectively. Most of the benign errors were assigned to the malignant class, since both classes contain nodules in their images, and, similarly, most of the malignant errors were assigned to the benign class. The normal class was confused more often with the benign class, with an error count close to that of the malignant class, which contains nodules.
One possible explanation is that, although normal images do not contain nodules, there are still regions in the ultrasound image that may resemble nodules. Benign nodules are characteristically well-defined and regular, whereas malignant nodules are larger and asymmetrical in shape. Thus, some regions in the normal images resemble the nodules in the benign images.
In Figure 8a, the region marked in red shows a similarity with a benign nodule, even though it has a lighter texture. Figure 8b,c show the shapes of benign and malignant nodules and their visual differences.
Analyzing the complete confusion matrix, there were only 48 false negatives and 15 false positives. These are the errors that can cause the most inconvenience and delay in medical treatment and diagnosis, so minimizing them is of great importance. Comparing Table 3, which presents the image quality metrics, with Table 5, which presents the classification metrics, it is clear that the algorithms with the best quality-metric results were also the ones that achieved the best classification results: the TV and Bilateral methods stood out in the quality metrics and reached high classification rates.
The proposed image enhancement techniques showed stable and consistent results, with better precision in earlier-stage detection of breast cancer. In particular, pre-processing the breast ultrasound images with the different enhancement algorithms produced stable outcomes on the chosen dataset. It can therefore be verified that good pre-processing, improving the images, can considerably benefit the classification results.

3.3. Online System/Web Interfaces

The online platform for automatic pre-screening of breast tumors consists of four modules: the main interface, the enhanced-images interface, the classification interface when nodules are present, and the classification interface when no nodules are found, as shown in Figure 9.
Figure 9a shows the main interface module, where users can upload the breast images to be analyzed. After uploading, the web portal redirects users to the second interface, a menu-driven module from which the images pre-processed by the enhancement methods, as well as the classification result, can be inspected. When clicking the “Enhanced images” button in the menu, users are redirected to the interface containing the uploaded image after the application of each enhancement method, as shown in Figure 9b.
By clicking “Classification results” in the menu, users are presented with a message: if a nodule is found, the message advises consulting a mastologist (as shown in Figure 9c); if not, the user is informed that no nodule was found in the image (as shown in Figure 9d).

4. Conclusions and Future Work

This research proposed the development of an online platform to compare and evaluate methods of enhancing breast ultrasound images, classifying them as benign, malignant or normal. For this purpose, a database containing 437 benign images, 210 malignant images and 133 normal images was used. Due to the small number of images, data augmentation techniques were applied. Resnet50 was used to extract the image features, and the SVM, kNN and MLP algorithms performed the classification on the extracted data.
The experimental results show that the application of appropriate image enhancement methods can improve classification performance. In addition, the algorithms with the best image quality metrics also provided the best classification results.
The Bilateral enhancement method together with the SVM classification algorithm obtained the highest hit rates, both for global accuracy and for accuracy in the detection of malignant tumors. Therefore, the application of image enhancement methods has significant relevance to the diagnosis of breast cancer.

Future Work

In this context, the following future works are intended to open up better opportunities for the researchers:
  • Developing novel light image enhancement strategies specific to breast cancer, building on the generic enhancement algorithms applied here;
  • Embedding such new enhancement approaches in specific hardware modules with the possibility of interacting with the cloud;
  • Building 3D reconstruction models to perform the volumetric quantification of the nodules;
  • Building a web dashboard to analyze the experiments as well as the 3D reconstruction with better visualizations.

Author Contributions

Conceptualization, S.K.J. and V.H.C.d.A.; methodology, S.K.J., D.S.d.S., C.S.N. and V.H.C.d.A.; software, D.S.d.S. and C.S.N.; validation, S.K.J. and V.H.C.d.A.; formal analysis, S.K.J. and V.H.C.d.A.; investigation, D.S.d.S. and C.S.N.; writing original draft preparation, D.S.d.S. and C.S.N.; writing review and editing, S.K.J. and V.H.C.d.A.; visualization, D.S.d.S. and V.H.C.d.A.; supervision, S.K.J. and V.H.C.d.A.; project administration, S.K.J. and V.H.C.d.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, M. Research on the Detection Method of Breast Cancer Deep Convolutional Neural Network Based on Computer Aid. In Proceedings of the 2021 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), Dalian, China, 14–16 April 2021; pp. 536–540.
  2. Matic, Z.; Kadry, S. Tumor Segmentation in Breast MRI Using Deep Learning. In Proceedings of the 2022 Fifth International Conference of Women in Data Science at Prince Sultan University (WiDS PSU), Riyadh, Saudi Arabia, 28–29 March 2022; pp. 49–51.
  3. Ahmed, M.; Islam, M.R. Breast Cancer Classification from Histopathological Images using Convolutional Neural Network. In Proceedings of the 2021 International Conference on Computer, Communication, Chemical, Materials and Electronic Engineering (IC4ME2), Rajshahi, Bangladesh, 26–27 December 2021; pp. 1–4.
  4. Khumdee, M.; Assawaroongsakul, P.; Phasukkit, P.; Houngkamhang, N. Breast Cancer Detection using IR-UWB with Deep Learning. In Proceedings of the 2021 16th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP), Ayutthaya, Thailand, 21–23 December 2021; pp. 1–4.
  5. Afaq, S.; Jain, A. MAMMO-Net: An Approach for Classification of Breast Cancer using CNN with Gabor Filter in Mammographic Images. In Proceedings of the 2022 International Conference on Computational Intelligence and Sustainable Engineering Solutions (CISES), Greater Noida, India, 20–21 May 2022; pp. 177–182.
  6. Wang, Y.; Wang, N.; Xu, M.; Yu, J.; Qin, C.; Luo, X.; Yang, X.; Wang, T.; Li, A.; Ni, D. Deeply-Supervised Networks With Threshold Loss for Cancer Detection in Automated Breast Ultrasound. IEEE Trans. Med. Imaging 2020, 39, 866–876.
  7. Wu, H.; Huo, Y.; Pan, Y.; Xu, Z.; Huang, R.; Xie, Y.; Han, C.; Liu, Z.; Wang, Y. Learning Pre- and Post-contrast Representation for Breast Cancer Segmentation in DCE-MRI. In Proceedings of the 2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS), Shenzhen, China, 21–23 July 2022; pp. 355–359.
  8. Huang, C.; Song, P.; Gong, P.; Trzasko, J.D.; Manduca, A.; Chen, S. Debiasing-Based Noise Suppression for Ultrafast Ultrasound Microvessel Imaging. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2019, 66, 1281–1291.
  9. Moshrefi, A.; Nabki, F. An Efficient Method to Enhance the Quality of Ultrasound Medical Images. In Proceedings of the 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), Lansing, MI, USA, 9–11 August 2021; pp. 267–270.
  10. de Souza, R.W.; Silva, D.S.; Passos, L.A.; Roder, M.; Santana, M.C.; Pinheiro, P.R.; de Albuquerque, V.H.C. Computer-assisted Parkinson’s disease diagnosis using fuzzy optimum-path forest and Restricted Boltzmann Machines. Comput. Biol. Med. 2021, 131, 104260.
  11. Qadri, S.F.; Shen, L.; Ahmad, M.; Qadri, S.; Zareen, S.S.; Akbar, M.A. SVseg: Stacked sparse autoencoder-based patch classification modeling for vertebrae segmentation. Mathematics 2022, 10, 796.
  12. Afonso, L.C.; Rosa, G.H.; Pereira, C.R.; Weber, S.A.; Hook, C.; Albuquerque, V.H.C.; Papa, J.P. A recurrence plot-based approach for Parkinson’s disease identification. Future Gener. Comput. Syst. 2019, 94, 282–292.
  13. Gao, Z.; Wang, X.; Sun, S.; Wu, D.; Bai, J.; Yin, Y.; Liu, X.; Zhang, H.; de Albuquerque, V.H.C. Learning physical properties in complex visual scenes: An intelligent machine for perceiving blood flow dynamics from static CT angiography imaging. Neural Netw. 2020, 123, 82–93.
  14. Khan, M.A.; Muhammad, K.; Sharif, M.; Akram, T.; Albuquerque, V.H.C.d. Multi-Class Skin Lesion Detection and Classification via Teledermatology. IEEE J. Biomed. Health Inform. 2021, 25, 4267–4275.
  15. Cao, L.; Wang, W.; Huang, C.; Xu, Z.; Wang, H.; Jia, J.; Chen, S.; Dong, Y.; Fan, C.; de Albuquerque, V.H.C. An Effective Fusing Approach by Combining Connectivity Network Pattern and Temporal-Spatial Analysis for EEG-Based BCI Rehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 2264–2274.
  16. Chen, J.; Zheng, Y.; Liang, Y.; Zhan, Z.; Jiang, M.; Zhang, X.; Daniel, S.d.S.; Wu, W.; Albuquerque, V.H.C.d. Edge2Analysis: A Novel AIoT Platform for Atrial Fibrillation Recognition and Detection. IEEE J. Biomed. Health Inform. 2022.
  17. de Mesquita, V.A.; Cortez, P.C.; Ribeiro, A.B.; de Albuquerque, V.H.C. A novel method for lung nodule detection in computed tomography scans based on Boolean equations and vector of filters techniques. Comput. Electr. Eng. 2022, 100, 107911.
  18. Huang, C.; Zhang, G.; Chen, S.; de Albuquerque, V.H.C. An Intelligent Multisampling Tensor Model for Oral Cancer Classification. IEEE Trans. Ind. Inform. 2022, 18, 7853–7861.
  19. Mohamed, A.; Wahba, A.A.; Sayed, A.M.; Haggag, M.A.; El-Adawy, M.I. Enhancement of Ultrasound Images Quality Using a New Matching Material. In Proceedings of the 2019 International Conference on Innovative Trends in Computer Engineering (ITCE), Aswan, Egypt, 2–4 February 2019; pp. 47–51.
  20. Singh, P.; Mukundan, R.; Ryke, R.d. Feature Enhancement in Medical Ultrasound Videos Using Multifractal and Contrast Adaptive Histogram Equalization Techniques. In Proceedings of the 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), San Jose, CA, USA, 28–30 March 2019; pp. 240–245.
  21. Latif, G.; Butt, M.; Al Anezi, F.Y.; Alghazo, J. Ultrasound Image Despeckling and Detection of Breast Cancer Using Deep CNN. In Proceedings of the 2020 RIVF International Conference on Computing and Communication Technologies (RIVF), Ho Chi Minh City, Vietnam, 14–15 October 2020.
  22. Zhao, H.; Niu, J.; Meng, H.; Wang, Y.; Li, Q.; Yu, Z. Focal U-Net: A Focal Self-attention based U-Net for Breast Lesion Segmentation in Ultrasound Images. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, UK, 11–15 July 2022; pp. 1506–1511.
  23. Jahwar, A.F.; Mohsin Abdulazeez, A. Segmentation and Classification for Breast Cancer Ultrasound Images Using Deep Learning Techniques: A Review. In Proceedings of the 2022 IEEE 18th International Colloquium on Signal Processing & Applications (CSPA), Selangor, Malaysia, 12 May 2022; pp. 225–230.
  24. Dabass, J.; Arora, S.; Vig, R.; Hanmandlu, M. Segmentation Techniques for Breast Cancer Imaging Modalities-A Review. In Proceedings of the 2019 9th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 10–11 January 2019; pp. 658–663.
  25. Chen, C.; Wang, Y.; Niu, J.; Liu, X.; Li, Q.; Gong, X. Domain Knowledge Powered Deep Learning for Breast Cancer Diagnosis Based on Contrast-Enhanced Ultrasound Videos. IEEE Trans. Med. Imaging 2021, 40, 2439–2451.
  26. Badawy, S.M.; Mohamed, A.E.N.A.; Hefnawy, A.A.; Zidan, H.E.; GadAllah, M.T.; El-Banby, G.M. Classification of Breast Ultrasound Images Based on Convolutional Neural Networks—A Comparative Study. In Proceedings of the 2021 International Telecommunications Conference (ITC-Egypt), Alexandria, Egypt, 13–15 July 2021; pp. 1–8.
  27. Wang, Y.; Lei, B.; Elazab, A.; Tan, E.L.; Wang, W.; Huang, F.; Gong, X.; Wang, T. Breast Cancer Image Classification via Multi-Network Features and Dual-Network Orthogonal Low-Rank Learning. IEEE Access 2020, 8, 27779–27792.
  28. Yaganteeswarudu, A. Multi Disease Prediction Model by using Machine Learning and Flask API. In Proceedings of the 2020 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 10–12 June 2020; pp. 1242–1246.
  29. Mufid, M.R.; Basofi, A.; Al Rasyid, M.U.H.; Rochimansyah, I.F.; Rokhim, A. Design an MVC Model using Python for Flask Framework Development. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27–28 September 2019; pp. 214–219.
  30. Ahmed, I.; Ahmad, A.; Jeon, G. An IoT-Based Deep Learning Framework for Early Assessment of Covid-19. IEEE Internet Things J. 2021, 8, 15855–15862.
  31. Munadi, K.; Muchtar, K.; Maulina, N.; Pradhan, B. Image Enhancement for Tuberculosis Detection Using Deep Learning. IEEE Access 2020, 8, 217897–217907.
  32. Al-Dhabyani, W.; Gomaa, M.; Khaled, H.; Fahmy, A. Dataset of breast ultrasound images. Data Brief 2020, 28, 104863.
  33. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), Bombay, India, 7 January 1998; pp. 839–846.
  34. Zhihong, W.; Xiaohong, X. Study on Histogram Equalization. In Proceedings of the 2011 2nd International Symposium on Intelligence Information Processing and Trusted Computing, Wuhan, China, 22–23 October 2011; pp. 177–179.
  35. Chambolle, A. An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 2004, 20, 89–97.
  36. Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993.
  37. Ying, Z.; Li, G.; Ren, Y.; Wang, R.; Wang, W. A New Image Contrast Enhancement Algorithm Using Exposure Fusion Framework. In Proceedings of the Computer Analysis of Images and Patterns, Ystad, Sweden, 22–24 August 2017; Felsberg, M., Heyden, A., Krüger, N., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 36–46.
  38. Cao, G.; Huang, L.; Tian, H.; Huang, X.; Wang, Y.; Zhi, R. Contrast enhancement of brightness-distorted images by improved adaptive gamma correction. Comput. Electr. Eng. 2018, 66, 569–582.
  39. Ullah, H.; Muhammad, K.; Irfan, M.; Anwar, S.; Sajjad, M.; Imran, A.S.; de Albuquerque, V.H.C. Light-DehazeNet: A Novel Lightweight CNN Architecture for Single Image Dehazing. IEEE Trans. Image Process. 2021, 30, 8968–8982.
  40. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1777–1786.
  41. Wang, Y.; Wan, R.; Yang, W.; Li, H.; Chau, L.P.; Kot, A.C. Low-Light Image Enhancement with Normalizing Flow. arXiv 2021.
  42. Muhammad, K.; Hussain, T.; Tanveer, M.; Sannino, G.; de Albuquerque, V.H.C. Cost-Effective Video Summarization Using Deep CNN With Hierarchical Weighted Fusion for IoT Surveillance Networks. IEEE Internet Things J. 2020, 7, 4455–4463.
  43. Khan, S.; Muhammad, K.; Mumtaz, S.; Baik, S.W.; de Albuquerque, V.H.C. Energy-Efficient Deep CNN for Smoke Detection in Foggy IoT Environment. IEEE Internet Things J. 2019, 6, 9237–9245.
  44. Hussain, T.; Muhammad, K.; Ullah, A.; Cao, Z.; Baik, S.W.; de Albuquerque, V.H.C. Cloud-Assisted Multiview Video Summarization Using CNN and Bidirectional LSTM. IEEE Trans. Ind. Inform. 2020, 16, 77–86.
  45. Muhammad, K.; Mustaqeem; Ullah, A.; Imran, A.S.; Sajjad, M.; Kiran, M.S.; Sannino, G.; de Albuquerque, V.H.C. Human action recognition using attention based LSTM network with dilated CNN features. Future Gener. Comput. Syst. 2021, 125, 820–830.
  46. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  47. Liu, X.; Wu, Z.; Tang, C. Modulation Recognition Algorithm Based on ResNet50 Multi-feature Fusion. In Proceedings of the 2021 International Conference on Intelligent Transportation, Big Data & Smart City (ICITBS), Xi’an, China, 27–28 March 2021; pp. 677–680.
  48. Yang, X.; Yang, D.; Huang, C. An interactive prediction system of breast cancer based on ResNet50, chatbot and PyQt. In Proceedings of the 2021 2nd International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT), Shanghai, China, 15–17 October 2021; pp. 309–316.
  49. Wan, Z.; Gu, T. A Bootstrapped Transfer Learning Model Based on ResNet50 and Xception to Classify Buildings Post Hurricane. In Proceedings of the 2021 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE), Zhuhai, China, 24–26 September 2021; pp. 265–270.
  50. Wang, Y.; Zhao, Z.; He, J.; Zhu, Y.; Wei, X. A method of vehicle flow training and detection based on ResNet50 with CenterNet method. In Proceedings of the 2021 International Conference on Communications, Information System and Computer Engineering (CISCE), Beijing, China, 14–16 May 2021; pp. 335–339.
  51. Dutta, J.; Chanda, D. Music Emotion Recognition in Assamese Songs using MFCC Features and MLP Classifier. In Proceedings of the 2021 International Conference on Intelligent Technologies (CONIT), Hubli, India, 25–27 June 2021; pp. 1–5.
  52. Ahil, M.N.; Vanitha, V.; Rajathi, N. Apple and Grape Leaf Disease Classification using MLP and CNN. In Proceedings of the 2021 International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation (ICAECA), Coimbatore, India, 8–9 October 2021; pp. 1–4.
  53. Wang, H.; Wang, J. Short Term Wind Speed Forecasting Based on Feature Extraction by CNN and MLP. In Proceedings of the 2021 2nd International Symposium on Computer Engineering and Intelligent Communications (ISCEIC), Nanjing, China, 6–8 August 2021; pp. 191–197.
  54. Liu, H.; Xiao, X.; Li, Y.; Mi, Q.; Yang, Z. Effective Data Classification via Combining Neural Networks and SVM. In Proceedings of the 2019 Chinese Control And Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; pp. 4006–4009.
  55. Kr, K.; Kv, A.R.; Pillai, A. An Improved Feature Selection and Classification of Gene Expression Profile using SVM. In Proceedings of the 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), Kannur, India, 5–6 June 2019; Volume 1, pp. 1033–1037.
  56. Turesson, H.K.; Ribeiro, S.; Pereira, D.R.; Papa, J.P.; de Albuquerque, V.H.C. Machine learning algorithms for automatic classification of marmoset vocalizations. PLoS ONE 2016, 11, e0163041.
  57. Salim, A.P.; Laksitowening, K.A.; Asror, I. Time Series Prediction on College Graduation Using KNN Algorithm. In Proceedings of the 2020 8th International Conference on Information and Communication Technology (ICoICT), Yogyakarta, Indonesia, 24–26 June 2020; pp. 1–4.
  58. Ma, J.; Li, J.; Wang, W. Application of Wavelet Entropy and KNN in Motor Fault Diagnosis. In Proceedings of the 2022 4th International Conference on Communications, Information System and Computer Engineering (CISCE), Shenzhen, China, 27–29 May 2022; pp. 221–226.
  59. Sabilla, I.A.; Meirisdiana, M.; Sunaryono, D.; Husni, M. Best Ratio Size of Image in Steganography using Portable Document Format with Evaluation RMSE, PSNR, and SSIM. In Proceedings of the 2021 4th International Conference of Computer and Informatics Engineering (IC2IE), Depok, Indonesia, 14–15 September 2021; pp. 289–294.
  60. Rodriguez-Molares, A.; Hoel Rindal, O.M.; D’hooge, J.; Måsøy, S.E.; Austeng, A.; Torp, H. The Generalized Contrast-to-Noise Ratio. In Proceedings of the 2018 IEEE International Ultrasonics Symposium (IUS), Kobe, Japan, 22–25 October 2018; pp. 1–4.
  61. Harun, N.H.; Bakar, J.A.; Wahab, Z.A.; Osman, M.K.; Harun, H. Color Image Enhancement of Acute Leukemia Cells in Blood Microscopic Image for Leukemia Detection Sample. In Proceedings of the 2020 IEEE 10th Symposium on Computer Applications & Industrial Electronics (ISCAIE), Penang, Malaysia, 18–19 April 2020; pp. 24–29.
  62. AbdAlRahman, A.; Ismail, S.M.; Said, L.A.; Radwan, A.G. Double Fractional-order Masks Image Enhancement. In Proceedings of the 2021 3rd Novel Intelligent and Leading Emerging Sciences Conference (NILES), Giza, Egypt, 23–25 October 2021; pp. 261–264.
Figure 1. Sequence of stages involved in the proposed clinical decision-making system.
Figure 2. Examples of images of each class. (a) Normal; (b) Benign; (c) Malignant.
Figure 3. The confusion matrix highlighting the performance accuracy of the classification of mammogram images, with the diagonal elements in bold representing the true prediction labels.
Figure 4. Normal highlighted mammogram images: (a) Original; (b) Bilateral; (c) Gamma correction; (d) HE; (e) LDNET; (f) LIME; (g) LLFLOW; (h) TV; (i) Ying; (j) Zero-DCE.
Figure 5. Benign highlighted mammogram images: (a) Original; (b) Bilateral; (c) Gamma correction; (d) HE; (e) LDNET; (f) LIME; (g) LLFLOW; (h) TV; (i) Ying; (j) Zero-DCE.
Figure 6. Malignant highlighted mammogram images: (a) Original; (b) Bilateral; (c) Gamma correction; (d) HE; (e) LDNET; (f) LIME; (g) LLFLOW; (h) TV; (i) Ying; (j) Zero-DCE.
Figure 7. Confusion matrix for the combination of the Bilateral algorithm with the SVM classifier.
Figure 8. Database images. (a) Normal image with region marked in red that resembles a benign nodule. (b) Image with a benign nodule. (c) Image with a malignant nodule.
Figure 9. Online web platform interface modules. (a) Main interface module; (b) Image enhancement module; (c) Classification module with nodules; (d) Nodule-free classification module.
Table 1. Base data of the images considered for analysis.

| Information | Amount | Percent |
|---|---|---|
| Normal Images | 133 | 17.05 |
| Benign Images | 437 | 56.03 |
| Malignant Images | 210 | 26.92 |
Table 2. Comparison of processing time taken by image enhancement algorithms (in minutes).

| Algorithm | Processing Time (minutes) |
|---|---|
| Bilateral | 1344.5736 |
| Gamma correction | 2.8820 |
| HE | 7.1598 |
| LDNet | 6.7305 |
| LIME | 651.8196 |
| LLFLOW | 9.2683 |
| TV | 17.3219 |
| Ying | 8.8536 |
| ZDCE | 9.5254 |
Table 3. Quality metrics for image enhancement algorithms.

| Algorithm | RMSE | CNR | AMBE | AG | PSNR | SSIM |
|---|---|---|---|---|---|---|
| Bilateral | 0.0329 | 0.0317 | 0.0063 | 0.0 | 77.8054 | 0.8726 |
| Gamma correction | 0.0936 | 0.4060 | 0.0763 | 0.0363 | 36.7292 | 0.8924 |
| HE | 0.2391 | 1.1135 | 0.2161 | 0.0364 | 61.2157 | 0.6347 |
| LDNet | 0.2360 | 0.9508 | 0.1941 | 0.0361 | 60.7778 | 0.3705 |
| LIME | 0.2801 | 1.0946 | 0.2272 | 0.0 | 59.4688 | 0.3778 |
| LLFLOW | 0.1342 | 0.5578 | 0.1069 | 0.0184 | 66.3159 | 0.8116 |
| TV | 0.0227 | 0.0088 | 0.0017 | 3.86e-05 | 81.0792 | 0.8569 |
| Ying | 0.0691 | 0.3125 | 0.0621 | 0.1676 | 71.6003 | 0.9394 |
| ZDCE | 0.1334 | 0.6059 | 0.1189 | 0.0362 | 65.6991 | 0.7858 |
Table 4. Training and testing times (in seconds) of image enhancement algorithms and the classifiers.

| Algorithm | Phase | MLP | kNN | SVM |
|---|---|---|---|---|
| Original | Training | 161.605 | 0.105 | 109.754 |
| Original | Test | 0.036 | 1.119 | 5.582 |
| Bilateral | Training | 0.1076 | 0.107 | 114.538 |
| Bilateral | Test | 0.034 | 1.105 | 5.741 |
| Gamma correction | Training | 142.177 | 0.106 | 113.221 |
| Gamma correction | Test | 0.039 | 1.231 | 5.801 |
| HE | Training | 164.593 | 0.104 | 117.841 |
| HE | Test | 0.033 | 1.176 | 5.906 |
| LDNET | Training | 197.522 | 0.106 | 119.007 |
| LDNET | Test | 0.040 | 1.143 | 6.101 |
| LIME | Training | 158.675 | 0.105 | 110.196 |
| LIME | Test | 0.032 | 1.193 | 5.622 |
| LLFLOW | Training | 120.524 | 0.106 | 114.953 |
| LLFLOW | Test | 0.032 | 1.205 | 5.803 |
| TV | Training | 141.409 | 0.106 | 115.173 |
| TV | Test | 0.031 | 1.214 | 5.900 |
| Ying | Training | 156.628 | 0.105 | 115.086 |
| Ying | Test | 0.033 | 1.194 | 5.874 |
| Z-DCE | Training | 169.422 | 0.105 | 123.348 |
| Z-DCE | Test | 0.037 | 1.122 | 6.170 |
Table 5. Metrics for image enhancement algorithms.

| Algorithm | Metric | MLP | kNN | SVM |
|---|---|---|---|---|
| Original | ACC Global | 94.99 ± 1.65 | 77.85 ± 3.30 | 96.50 ± 1.82 |
| Original | Benign | 97.08 ± 1.50 | 76.44 ± 4.98 | 97.42 ± 1.44 |
| Original | Malignant | 92.50 ± 3.21 | 71.90 ± 5.90 | 94.40 ± 2.94 |
| Original | Normal | 92.15 ± 7.50 | 91.91 ± 5.87 | 96.80 ± 3.99 |
| Original | F1-score | 94.96 ± 1.69 | 78.57 ± 3.20 | 96.49 ± 1.56 |
| Bilateral | ACC Global | 95.54 ± 1.36 | 79.07 ± 2.45 | 96.69 ± 1.56 |
| Bilateral | Benign | 96.68 ± 1.91 | 78.78 ± 3.06 | 97.31 ± 1.94 |
| Bilateral | Malignant | 92.61 ± 3.44 | 73.21 ± 5.92 | 95.11 ± 2.86 |
| Bilateral | Normal | 96.41 ± 4.41 | 89.31 ± 5.65 | 97.18 ± 3.54 |
| Bilateral | F1-score | 95.53 ± 1.37 | 79.70 ± 2.41 | 96.69 ± 1.56 |
| Gamma correction | ACC Global | 94.93 ± 1.59 | 77.91 ± 2.66 | 96.47 ± 1.33 |
| Gamma correction | Benign | 96.74 ± 1.57 | 77.17 ± 4.08 | 97.54 ± 1.31 |
| Gamma correction | Malignant | 91.90 ± 3.86 | 70.83 ± 5.52 | 93.92 ± 3.32 |
| Gamma correction | Normal | 93.83 ± 5.90 | 91.54 ± 6.45 | 97.00 ± 3.66 |
| Gamma correction | F1-score | 94.91 ± 1.61 | 78.72 ± 2.65 | 96.46 ± 1.35 |
| HE | ACC Global | 95.28 ± 1.71 | 78.07 ± 2.62 | 96.31 ± 1.38 |
| HE | Benign | 96.45 ± 2.01 | 78.20 ± 4.21 | 97.25 ± 1.50 |
| HE | Malignant | 92.97 ± 3.87 | 75.23 ± 5.80 | 94.40 ± 2.84 |
| HE | Normal | 95.15 ± 4.55 | 82.15 ± 7.88 | 96.26 ± 3.51 |
| HE | F1-score | 95.27 ± 1.71 | 78.45 ± 2.54 | 96.30 ± 1.39 |
| LDNET | ACC Global | 94.96 ± 1.96 | 83.23 ± 2.54 | 96.31 ± 1.52 |
| LDNET | Benign | 96.90 ± 1.81 | 84.61 ± 3.07 | 97.19 ± 1.59 |
| LDNET | Malignant | 93.09 ± 4.63 | 76.90 ± 5.48 | 94.40 ± 3.55 |
| LDNET | Normal | 91.50 ± 7.48 | 88.73 ± 6.95 | 96.45 ± 4.60 |
| LDNET | F1-score | 94.93 ± 1.99 | 83.44 ± 2.46 | 96.30 ± 1.53 |
| LIME | ACC Global | 95.51 ± 1.46 | 81.57 ± 2.57 | 96.47 ± 1.24 |
| LIME | Benign | 96.91 ± 1.72 | 82.21 ± 3.45 | 97.25 ± 1.36 |
| LIME | Malignant | 92.38 ± 3.49 | 75.00 ± 5.90 | 94.16 ± 3.14 |
| LIME | Normal | 95.88 ± 3.71 | 89.87 ± 6.47 | 97.56 ± 2.45 |
| LIME | F1-score | 95.50 ± 1.46 | 82.00 ± 2.55 | 96.46 ± 1.24 |
| LLFLOW | ACC Global | 95.06 ± 1.66 | 75.96 ± 3.40 | 96.34 ± 1.07 |
| LLFLOW | Benign | 96.23 ± 2.25 | 75.74 ± 3.73 | 97.25 ± 1.32 |
| LLFLOW | Malignant | 92.49 ± 3.47 | 68.09 ± 6.79 | 93.69 ± 2.94 |
| LLFLOW | Normal | 95.34 ± 6.82 | 89.10 ± 6.33 | 97.57 ± 3.77 |
| LLFLOW | F1-score | 95.04 ± 1.66 | 76.75 ± 3.25 | 96.33 ± 1.07 |
| TV | ACC Global | 95.64 ± 1.41 | 77.33 ± 3.21 | 96.66 ± 1.66 |
| TV | Benign | 96.97 ± 1.57 | 73.97 ± 5.25 | 97.42 ± 1.57 |
| TV | Malignant | 93.09 ± 3.18 | 76.30 ± 5.02 | 94.88 ± 2.84 |
| TV | Normal | 95.31 ± 4.38 | 90.05 ± 5.44 | 97.00 ± 3.86 |
| TV | F1-score | 95.63 ± 1.42 | 78.08 ± 3.06 | 96.66 ± 1.67 |
| Ying | ACC Global | 95.22 ± 1.56 | 78.14 ± 3.35 | 96.57 ± 1.39 |
| Ying | Benign | 96.22 ± 1.76 | 76.90 ± 5.01 | 97.37 ± 1.44 |
| Ying | Malignant | 93.80 ± 3.94 | 73.21 ± 7.25 | 94.64 ± 3.18 |
| Ying | Normal | 94.18 ± 6.42 | 90.02 ± 5.97 | 96.99 ± 3.69 |
| Ying | F1-score | 95.21 ± 1.58 | 78.93 ± 3.16 | 96.56 ± 1.39 |
| Z-DCE | ACC Global | 94.23 ± 2.18 | 76.89 ± 2.96 | 95.80 ± 1.35 |
| Z-DCE | Benign | 96.68 ± 1.56 | 78.56 ± 5.18 | 97.14 ± 1.46 |
| Z-DCE | Malignant | 92.14 ± 4.06 | 69.40 ± 5.54 | 93.09 ± 2.90 |
| Z-DCE | Normal | 89.52 ± 9.41 | 83.31 ± 7.36 | 95.70 ± 3.74 |
| Z-DCE | F1-score | 94.17 ± 2.25 | 77.25 ± 2.94 | 95.79 ± 1.35 |
