
Dermoscopy Images Enhancement via Multi-Scale Morphological Operations

1 Computer Engineering Department, Universidad Americana, Asunción 1206, Paraguay
2 Facultad Politécnica, Universidad Nacional de Asunción, San Lorenzo 111421, Paraguay
3 Data Science and Big Data Lab, Universidad Pablo de Olavide, 41013 Seville, Spain
4 Department of Computer and Electronics, Universidade Federal do Espírito Santo, São Mateus 29932-540, ES, Brazil
5 Signal Theory and Communications Department, Universitat Politècnica de Catalunya, 08034 Barcelona, Spain
6 Hospital de Clínicas, Facultad de Ciencias Médicas, Universidad Nacional de Asunción, San Lorenzo 111421, Paraguay
7 Facultad de Ciencias Exactas y Tecnológicas, Universidad Nacional de Concepción, Concepción 010123, Paraguay
* Author to whom correspondence should be addressed.
Academic Editor: Cecilia Di Ruberto
Appl. Sci. 2021, 11(19), 9302; https://doi.org/10.3390/app11199302
Received: 9 September 2021 / Revised: 29 September 2021 / Accepted: 30 September 2021 / Published: 7 October 2021
(This article belongs to the Topic Medical Image Analysis)

Abstract

Skin dermoscopy images frequently lack contrast owing to varying lighting conditions. Low contrast is especially common in dermoscopy images of melanoma, causing the lesion to blend in with the surrounding skin and obscuring fine details. It is therefore necessary to design an approach that enhances both the contrast and the details of dermoscopic images. In this work, we propose a multi-scale morphological approach that reduces the impact of poor contrast and enhances image quality. Top-hat by reconstruction extracts the local bright and dark features of the image; the bright features are then added to the image and the dark features subtracted from it, yielding images with higher contrast and richer detail. The proposed approach was applied to a database of 236 color images of benign and malignant melanocytic lesions. The results show that the multi-scale morphological approach by reconstruction is a competitive algorithm, achieving a very satisfactory level of contrast and detail enhancement in dermoscopy images.
Keywords: skin dermoscopy images; multi-scale morphological approach; top-hat reconstruction; contrast enhancement

1. Introduction

Medical images are visual representations of the interior of the body. They have facilitated health care tasks such as diagnosing diseases. Despite technological advances in recent years, image acquisition, storage, and transmission still suffer from various types of degradation [1]. These factors can lead to inefficient or inaccurate diagnoses, thus compromising patient care.
Many techniques have been proposed to improve the contrast of medical images [2]. The traditional histogram equalization (HE) [3], one of the most popular techniques, was the first attempt to automatically improve contrast. HE distributes the gray levels within the image (each gray level has an equal chance to occur) to enhance contrast and brightness. Studies have shown that HE introduces saturation and over-enhancement in the images [4,5]. Several improved techniques have been proposed to maintain average image brightness, reducing saturation effects, thus avoiding unnatural image enhancement. Some of these techniques are: brightness preserving bi-histogram equalization (BBHE) [6], dual sub-image histogram equalization (DSIHE) [7], minimum mean brightness error bi-histogram equalization (MMBEBHE) [8], and quadri-histogram equalization with limited contrast (QHELC) [9].
With the emergence of mathematical morphology (MM) based approaches in recent decades, new techniques have been developed for contrast enhancement [10,11,12,13]. Due to its ability to extract dark and light features from images using structuring elements of different shapes and sizes [14], the top-hat transformation has received a lot of attention. In [15], the top-hat transformation was used to correct the illumination of images with melanocytic lesions as a preprocessing for a subsequent feature extraction study. In [16], a method for segmentation of retinal blood vessels is presented. Vessel enhancement is performed using the contrast enhancement technique based on the top-hat transform.
Various authors have proposed a multi-scale approach, called multi-scale top-hat transformation (MTH). An advantage of MTH is that it allows the image content to be processed from the most global to the most detailed level. Several works improve different types of medical images by integrating MTH into a morphology-based image enhancement approach [10,17,18,19,20,21,22,23,24]. Currently, in the field of computer vision, MTH-based algorithms are used as a preliminary step for other applications based on artificial intelligence. For example, in [25], a deep learning approach using convolutional neural networks was proposed to detect vessel regions in angiography images; the multi-scale top-hat transform for contrast enhancement (MSTH) algorithm [10] was used to preprocess the images by enhancing their contrast. In [26], a method for edge detection in images based on top-hat operators with multidirectional and multiscale structuring elements was proposed.
A multimodal medical image fusion scheme based on multiscale top-hat transform combined with morphological contrast operators is presented in [27].
In [28], an automatic coronary artery segmentation approach was proposed; in its preprocessing stage, the input image was processed with the MSTH algorithm for better segmentation of the coronary arteries. In [29], MSTH was used as the first step of an algorithm for detecting bright lesions in retinal images. In [30], Sine-Net was proposed, an automated tool based on a fully convolutional deep learning architecture for the segmentation of blood vessels in retinal images; the architecture obtained better segmentation results on three databases by combining the MSTH and contrast-limited adaptive histogram equalization (CLAHE) [31] algorithms in the preprocessing.
In this work, an improvement of MTH is proposed. The underlying idea is to replace the opening and closing operations with morphological filters by reconstruction, since operators by reconstruction avoid damaging the contours, edges, and other important structures of the medical image.
For this purpose, MTH is combined with the concept of geodesic reconstruction [32]. By joining the advantages of morphological reconstruction with the ability of MTH to extract dark and bright features, the resulting strategy is a multi-scale morphological approach capable of enhancing medical images. Experiments show that the resulting skin dermoscopy images have less distortion, greater detail accuracy, and better contrast than those produced by other image enhancement approaches.
We can summarize the contributions of this work as follows:
(a)
A new MTH-based strategy that incorporates the concept of geodesic reconstruction in combination with a mathematical morphology approach;
(b)
A novel contrast enhancement algorithm based on the proposed MTH approach.
The rest of the article is organized as follows. In Section 2, the basic concepts are introduced. Then, in Section 3, the proposed approach is described. The experiments and discussions of the results achieved are presented in Section 4. Finally, Section 5 presents the conclusions of the work.

2. Mathematical Morphology

In mathematical morphology (MM), the aim is to analyze and extract unknown structures contained in an image. For this purpose, MM uses a structuring element of known shape and size together with the erosion and dilation operators [14]. Through the wide range of filters obtained by combining these two basic operators, MM provides a relatively simple yet powerful set of tools for image analysis.

2.1. Dilation and Erosion

Given an image I, the morphological dilation δ_H(I) and erosion ε_H(I) of I at the pixel x, with respect to the structuring element H of domain D_H, are defined as follows [14]:
δ_H(I)(x) = max{ I(x − y), y ∈ D_H },   ε_H(I)(x) = min{ I(x + y), y ∈ D_H }.
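As an illustration (not part of the paper's experiments, which used ImageJ/MorphoLibJ), the flat dilation and erosion of Equation (1) can be computed with SciPy's grayscale morphology routines:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

# Test image: a single bright pixel on a dark background.
I = np.zeros((5, 5), dtype=np.uint8)
I[2, 2] = 200

# Flat 3x3 structuring element: dilation takes the local maximum over the
# neighborhood D_H, erosion the local minimum, as in Equation (1).
H = np.ones((3, 3), dtype=bool)
dil = grey_dilation(I, footprint=H)
ero = grey_erosion(I, footprint=H)
# The bright pixel spreads to its 3x3 neighborhood under dilation and
# disappears under erosion.
```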

2.2. Opening and Closing

The opening operator γ(I, mH) is the sequential combination of erosion ε_{mH}(I) with mH (structuring element H of size m) followed by dilation δ_{mH̃}(I). The closing operator ϕ(I, mH) is, conversely, the sequential combination of dilation δ_{mH}(I) with mH followed by erosion ε_{mH̃}(I). Both operators are defined as [14]:
γ(I, mH) = δ_{mH̃}(ε_{mH}(I)),   ϕ(I, mH) = ε_{mH̃}(δ_{mH}(I)),
where m is the size of the structuring element and mH̃ is the reflection of mH. For a symmetric structuring element, mH = mH̃.
Viewing an image as a two-dimensional surface in a three-dimensional space, applying opening (closing) has the consequence of removing peaks (or filling valleys) smaller than the structuring element.
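This peak-removing and valley-filling behaviour is easy to verify numerically. The following sketch (SciPy assumed, an implementation choice not taken from the paper) applies Equation (2) with a flat 1×3 structuring element:

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

# A flat profile with a 1-pixel-wide peak and a 1-pixel-wide valley, both
# narrower than the structuring element.
I = np.full((1, 9), 100, dtype=np.uint8)
I[0, 2] = 180   # narrow peak
I[0, 6] = 20    # narrow valley

H = np.ones((1, 3), dtype=bool)  # flat 1x3 structuring element
opened = grey_opening(I, footprint=H)   # removes the peak
closed = grey_closing(I, footprint=H)   # fills the valley
```

Opening is anti-extensive (never above the original) and closing is extensive (never below it), which is why the two can only remove peaks and fill valleys, respectively.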

2.3. Classical Top-Hat Transform

By taking the difference between the original image and its opening, peaks of different sizes can be extracted. In a dual way, valleys can be extracted by taking the difference between the closed image and the original one. The top-hat transform is the mathematical formalism of this idea. The top-hat transform by opening (WTH) [14] is the difference between the original image I and its opening γ(I, mH); the top-hat transform by closing (BTH) [14] is the difference between the morphological closing ϕ(I, mH) and the original image I, defined as follows:
WTH(I, mH) = I − γ(I, mH),   BTH(I, mH) = ϕ(I, mH) − I,
where m is the size of the structuring element.
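Equation (3) then isolates exactly the peaks removed by the opening and the valleys filled by the closing. A minimal sketch (SciPy assumed), continuing the 1×3 example:

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

I = np.full((1, 9), 100, dtype=np.int32)  # signed type so subtraction is safe
I[0, 2] = 180   # narrow peak
I[0, 6] = 20    # narrow valley

H = np.ones((1, 3), dtype=bool)
wth = I - grey_opening(I, footprint=H)   # bright details removed by opening
bth = grey_closing(I, footprint=H) - I   # dark details filled by closing
# wth is zero except at the peak; bth is zero except at the valley.
```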

2.4. Geodesic Transformation and Reconstruction

In geodesic transformations, two equally sized input images are used, denoted as marker and mask. The first image (marker) is modified by a morphological transformation and restricted below (geodesic dilation) or above (geodesic erosion) the second image (mask) [14].
Let J and I be the marker and mask images, respectively, with the same domain (D_J = D_I). The elementary geodesic dilation δ_I^(1)(J) and erosion ε_I^(1)(J) can be defined as [14]:
δ_I^(1)(J) = δ(J) ∧ I   with J(x) ≤ I(x),   ε_I^(1)(J) = ε(J) ∨ I   with J(x) ≥ I(x),
where ∧ is the pointwise minimum between the pixels of J(x) and I(x), and ∨ is the pointwise maximum between the pixels of J(x) and I(x). If the geodesic dilation or erosion of J with respect to I is performed k times, we have δ_I^(k)(J) = δ_I^(1)[δ_I^(k−1)(J)] and ε_I^(k)(J) = ε_I^(1)[ε_I^(k−1)(J)].
In practice, the geodesic reconstruction ρ_I(J) and the dual geodesic reconstruction ρ*_I(J) can be defined as follows:
ρ_I(J) = δ_I^(i)(J),   with i such that δ_I^(i)(J) = δ_I^(i+1)(J),
ρ*_I(J) = ε_I^(i)(J),   with i such that ε_I^(i)(J) = ε_I^(i+1)(J).
Analogously to the standard opening and closing, the opening γ_ρ^(m) and closing ϕ_ρ^(m) by reconstruction of an image I can be defined as follows [14]:
γ_ρ^(m)(I) = ρ_I(ε_{mH}(I)),   ϕ_ρ^(m)(I) = ρ*_I(δ_{mH}(I)),
where I is the mask image, ε_{mH}(I) and δ_{mH}(I) are the marker images, and m is the size of the structuring element.
In all experiments, the structuring element has the shape of a disk, and m indicates its radius.
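The definitions in Equations (4)–(6) translate directly into code. The sketch below (SciPy assumed; a square structuring element stands in for the disk, purely for brevity) iterates the elementary geodesic dilation until stability:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def reconstruct(marker, mask):
    """Geodesic reconstruction by dilation (Equation (5)): iterate the
    elementary geodesic dilation of Equation (4) -- a 3x3 dilation followed
    by a pointwise minimum with the mask -- until stability."""
    prev, cur = None, marker.copy()
    while prev is None or not np.array_equal(cur, prev):
        prev = cur
        cur = np.minimum(grey_dilation(prev, size=(3, 3)), mask)
    return cur

def opening_by_reconstruction(I, m):
    # Equation (6): the marker is the erosion of I; here a (2m+1)x(2m+1)
    # square structuring element stands in for the disk of radius m.
    return reconstruct(grey_erosion(I, size=(2 * m + 1, 2 * m + 1)), I)

# A wide plateau survives reconstruction intact; a narrow peak on top of it
# is flattened, but the plateau's contour is not rounded off.
I = np.zeros((7, 7), dtype=np.int32)
I[1:6, 1:6] = 100   # 5x5 plateau
I[3, 3] = 150       # narrow peak
out = opening_by_reconstruction(I, 1)
```

The loop always terminates, since each geodesic dilation can only increase the marker and the mask bounds it from above.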

2.5. Top-Hat by Reconstruction

The use of morphological operators by reconstruction has shown that, contrary to the standard ones, they remove details without modifying the structure of the remaining objects. Another significant advantage is that geodesic reconstructions use an elementary isotropic structuring element, so it is not necessary to specify sizes as in standard morphological operators. Analogously to the standard top-hat transform, structures can be preserved or removed through geodesic reconstruction (dual geodesic reconstruction), which plays the role of the opening (closing).
Structures removed from the image I by the opening by reconstruction can be recovered with the white top-hat transform by reconstruction (RWTH); similarly, structures removed by the closing can be recovered with the dark top-hat transform by reconstruction (RBTH), as follows [14]:
RWTH^(m)(I) = I − γ_ρ^(m)(I),   RBTH^(m)(I) = ϕ_ρ^(m)(I) − I.
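A compact sketch of Equation (7), built on the same iterated geodesic step (SciPy assumed; square structuring elements approximate the disk):

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def _reconstruct(marker, mask, method):
    # Iterated elementary geodesic step (Equations (4)-(5)); "dilation"
    # reconstructs under the mask, "erosion" is the dual reconstruction.
    op, clip = ((grey_dilation, np.minimum) if method == "dilation"
                else (grey_erosion, np.maximum))
    prev, cur = None, marker.copy()
    while prev is None or not np.array_equal(cur, prev):
        prev = cur
        cur = clip(op(prev, size=(3, 3)), mask)
    return cur

def rwth(I, m):
    # White top-hat by reconstruction: I minus the opening by reconstruction.
    s = 2 * m + 1  # square SE standing in for a disk of radius m
    return I - _reconstruct(grey_erosion(I, size=(s, s)), I, "dilation")

def rbth(I, m):
    # Dark top-hat by reconstruction: closing by reconstruction minus I.
    s = 2 * m + 1
    return _reconstruct(grey_dilation(I, size=(s, s)), I, "erosion") - I

# A bright peak and a dark pit inside a plateau are recovered separately.
I = np.zeros((7, 7), dtype=np.int32)
I[1:6, 1:6] = 100
I[2, 2] = 150   # bright detail
I[4, 4] = 30    # dark detail
W, B = rwth(I, 1), rbth(I, 1)
```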

3. Proposed Algorithm

The proposed algorithm, called Multi-scale Geodesic Reconstruction based Top-Hat transform (MGRTH), is presented in this section. Additionally, all operations are presented in detail, step by step.
Let I be the image, H be the structuring element and n be the number of iterations. The proposed algorithm is divided into five stages.
First stage: 
the bright structures at level m are extracted by RWTH_m as follows:
RWTH_m = ρ_{WTH(I, mH)}( RWTH^(m)(I) ),
where WTH(I, mH) is the mask obtained by Equation (3) and RWTH^(m)(I) is the marker obtained by Equation (7). Each RWTH_m represents the m-th level of bright structures of the original image I, controlled by m = {1, 2, 3, …, n}, the size of the structuring element. The dark structures at level m are extracted by RBTH_m as follows:
RBTH_m = ρ_{BTH(I, mH)}( RBTH^(m)(I) ),
where BTH(I, mH) is the mask obtained by Equation (3) and RBTH^(m)(I) is the marker obtained by Equation (7). Each RBTH_m represents the m-th level of dark structures of the original image I, controlled by m = {1, 2, 3, …, n}, the size of the structuring element.
Second stage: 
the light residues SW_m are extracted from the bright structures at levels m and m−1, and the dark residues SB_m are extracted from the dark structures at levels m and m−1, as follows:
SW_{m−1} = RWTH_m − RWTH_{m−1}  for m = 2,   SW_{m−1} = RWTH_m − SW_{m−2}  for m > 2,
SB_{m−1} = RBTH_m − RBTH_{m−1}  for m = 2,   SB_{m−1} = RBTH_m − SB_{m−2}  for m > 2.
Third stage: 
The maximum bright scaled details are computed from the bright structures extracted at the first stage by R W T H m , and the maximum dark scaled details are computed from the dark structures extracted at the first stage by R B T H m as follows:
SRWTH = max_{1≤m≤n} { RWTH_m },
SRBTH = max_{1≤m≤n} { RBTH_m }.
Fourth stage: 
The maximum light residues are computed from the light residues extracted at the second stage by S W m , and the maximum dark residues are computed from the dark residues extracted at the second stage by S B m as follows [13]:
SSW = max_{2≤m≤n} { SW_{m−1} },
SSB = max_{2≤m≤n} { SB_{m−1} }.
Final stage: 
the enhanced image I_E is computed per pixel as follows:
I_E = I + ω × max(SRWTH, SSW) − ω × max(SRBTH, SSB),
where ω is a weight used to adjust the amount of brightness or darkness added to the image.
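To make the pipeline concrete, the five stages can be sketched as follows. This is a simplified reading of the algorithm, not the authors' ImageJ implementation: SciPy is assumed, square structuring elements stand in for disks, Stage 1 uses the reconstructive top-hats directly rather than reconstructing them under the classical top-hats as masks, and Stage 2 is reduced to consecutive-scale residues.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def _reconstruct(marker, mask, method):
    # Iterated elementary geodesic step (Equations (4)-(5)).
    op, clip = ((grey_dilation, np.minimum) if method == "dilation"
                else (grey_erosion, np.maximum))
    prev, cur = None, marker.copy()
    while prev is None or not np.array_equal(cur, prev):
        prev = cur
        cur = clip(op(prev, size=(3, 3)), mask)
    return cur

def mgrth(I, n=4, w=0.25):
    I = I.astype(np.int64)
    RW, RB = [], []
    for m in range(1, n + 1):
        s = 2 * m + 1  # square SE of "radius" m standing in for the disk
        opening = _reconstruct(grey_erosion(I, size=(s, s)), I, "dilation")
        closing = _reconstruct(grey_dilation(I, size=(s, s)), I, "erosion")
        RW.append(I - opening)   # bright structures at scale m
        RB.append(closing - I)   # dark structures at scale m
    # Stage 2 (simplified to consecutive-scale residues).
    SW = [RW[m] - RW[m - 1] for m in range(1, n)]
    SB = [RB[m] - RB[m - 1] for m in range(1, n)]
    # Stages 3-4: pixelwise maxima of scaled details and of residues.
    srwth, srbth = np.maximum.reduce(RW), np.maximum.reduce(RB)
    ssw = np.maximum.reduce(SW) if SW else np.zeros_like(I)
    ssb = np.maximum.reduce(SB) if SB else np.zeros_like(I)
    # Final stage, Equation (16).
    IE = I + w * np.maximum(srwth, ssw) - w * np.maximum(srbth, ssb)
    return np.clip(IE, 0, 255).astype(np.uint8)
```

On a constant image every top-hat is zero, so the output equals the input; on a bright object, the bright maxima are added and the dark maxima subtracted, stretching the contrast.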
Figure 1 shows the original melanoma images on the left (a,c) and the MGRTH-enhanced images on the right (b,d).

4. Results and Discussions

This section describes the experiments conducted to quantify the relative performance of the proposed algorithm. The database used in these experiments contains 236 color images of benign and malignant melanocytic lesions, as used in [33]. For the tests, the RGB images were first converted to the HSV color space; the algorithms and evaluations were then applied to the V channel; finally, the images were converted back to RGB.
All algorithms were implemented using the ImageJ [34] library, for algorithms based on MM an extra library called MorphoLibJ [35] was used.
The results were evaluated with the metrics:
  • Entropy (E) [13,21,36]: E is used to measure the amount of detail in the image. E is defined as
    E(I) = − Σ_{k=0}^{L−1} P(k) log2(P(k)),
    where I is the melanoma image, k is the intensity of a pixel in the image, and P(k) is the probability of occurrence of the value k in the image. If b is the number of bits of the image, then L is equal to 2^b (b = 8 for grayscale images). An image is considered to have good detail when its entropy value is high;
  • Peak Signal-to-Noise Ratio (PSNR) [10,21,37]: PSNR measures how much distortion is added to the image by the contrast enhancement process. PSNR is defined as
    PSNR(I, I_E) = 10 × log10( (L − 1)² / MSE(I, I_E) ),
    where the Mean Squared Error (MSE) is defined as
    MSE(I, I_E) = (1 / (M × N)) Σ_{x=1}^{M} Σ_{y=1}^{N} (I(x, y) − I_E(x, y))².
    After the enhancement process, an image is considered to have low distortion if it has a high PSNR value;
  • Relative Enhancement in Contrast (REC) [36,38]: REC measures the contrast of the enhanced melanoma image. REC is defined as
    REC = C(I_E) / C(I),
    where I is the melanoma image, I_E is the contrast-enhanced melanoma image, and C is the image contrast, calculated as follows:
    C(I) = 20 × log √( (1/(MN)) Σ_{x=1}^{M} Σ_{y=1}^{N} (I(x, y))² − μ² ),
    where M × N are the dimensions of the melanoma image, (x, y) are the spatial coordinates, and μ is calculated as
    μ = (1/(MN)) Σ_{x=1}^{M} Σ_{y=1}^{N} I(x, y).
    After image processing, if the REC value is higher than 1, the enhanced image is considered to have enhanced contrast.
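The three metrics can be sketched in a few lines (NumPy assumed; the logarithm in the contrast term C is taken as base 10, a dB convention the text does not make explicit):

```python
import numpy as np

def entropy(I, L=256):
    # Shannon entropy of the gray-level histogram, Equation (17).
    p = np.bincount(I.ravel(), minlength=L) / I.size
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def psnr(I, IE, L=256):
    # Peak signal-to-noise ratio, Equations (18)-(19).
    mse = np.mean((I.astype(np.float64) - IE.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10((L - 1) ** 2 / mse)

def contrast(I):
    # RMS-style contrast term in dB used by REC, Equations (21)-(22).
    I = I.astype(np.float64)
    return 20 * np.log10(np.sqrt(np.mean(I ** 2) - I.mean() ** 2))

def rec(I, IE):
    # Relative enhancement in contrast, Equation (20).
    return contrast(IE) / contrast(I)
```

For example, a two-valued image with equally likely levels has entropy exactly 1 bit, and PSNR of an image against itself is infinite (zero MSE).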
To test the performance of the proposed method, we have considered two different experiments:
  • In the first part (Section 4.1), we performed parameter tuning to find good values for the parameters ω and n of the proposed algorithm. For this purpose, the results were compared with respect to the number of iterations and the contrast adjustment weight. The objective of this experiment was to observe the performance of the proposal with respect to the E, PSNR, and REC metrics;
  • Then, in the second part (Section 4.2) the proposed algorithm was compared with algorithms based on the multiscale top-hat transform and algorithms based on histogram equalization.
In both experiments, apart from the numerical results, a visual assessment of the dermatologist is presented.

4.1. Parameters Tuning

In this subsection, the parameter settings used in the MGRTH algorithm, in relation to the number of iterations n and the contrast adjustment weight ω, are described. For this purpose, the Shannon entropy, Equation (17), and the PSNR, Equation (18), are used. In addition, Equations (18) and (20) are used to visualize the relation between the PSNR and REC metrics, since as contrast increases more noise tends to be added. Table 1 shows the parameters of the MGRTH algorithm to be adjusted.

4.1.1. Numerical Results

Figure 2 shows that, with ω = 0.25, MGRTH exhibits increasing entropy over more iterations than with larger weights, and from n = 7 it equals or exceeds the average values obtained with larger weights.
Figure 3 shows that ω = 0.25 also yields a higher PSNR, and Figure 4 shows a higher ratio between REC and PSNR across all evaluated iterations.

4.1.2. Visual Assessment by the Dermatologist

Figure 5 shows that as the value of ω increases, the brightness of the image also increases, causing bright or dark artifacts to appear. For this reason, and given the results obtained in the previous subsection, the value ω = 0.25 is chosen for the next experiment.
In Figure 6, we can see that as the iterations increase, distortions are introduced into the image. Compared to the original image (Figure 6a), the images in Figure 6c,d present sharper lesions without excessive brightness. An image with too much brightness may suggest dermoscopic features or signs of malignancy that do not correspond to the lesion; this can be considered an artifact of the modified image. The sharpness seen in the images in Figure 6c,d is also apparent in the healthy peripheral skin.
The image in Figure 6b is visually similar to the original image. In the images of Figure 6e–g, the brightness observed in the lesion is pronounced and could induce assessment errors by the dermatologist.

4.2. Comparison of the Proposed Algorithm with State-of-the-Art Algorithms

MGRTH was compared with the histogram-based algorithms HE, BBHE, MMBEBHE, and QHELC, which are good at improving the contrast and average brightness of medical images. It was also compared with competitive algorithms based on multi-scale MM: geodesic reconstruction multi-scale morphological contrast enhancement (GRMMCE) [23] and the multi-scale morphological approach to local contrast enhancement by reconstruction (MMALCER) [39], which are good at improving the local contrast of images.
Table 2 shows the parameters of the algorithms based on multi-scale MM. The values of the parameter ω for the algorithms presented in [23,39] are those used in the reference articles.

4.2.1. Numerical Results

Figure 7 shows that, as n grows and starting from n = 4, MGRTH obtains higher image entropy than the compared algorithms, an indication that the proposed algorithm is good at improving image detail. The entropy value of the original image I is also shown in Figure 7.
Table 3 shows the average results obtained by the compared algorithms. For the algorithms based on the multiscale top-hat transform the value of n = 4 was considered. The best average results are highlighted in bold.
The average results in Table 3 show that:
  • MGRTH has better average performance according to the E metric, indicating that the approach enhances the details of melanoma images;
  • Among the algorithms based on the multiscale top-hat transform, MGRTH is the second best performer for PSNR. This means that it introduces low distortion to the images;
  • According to the REC metric, all compared algorithms enhance images on average.
In the Wilcoxon signed-rank test, the differences of the q pairs of observations are calculated and, based on these differences in absolute value, ranks are assigned. Table 4 presents the number of positive ranks observed, i.e., the number of times the proposal obtained higher values of the metric than the compared algorithm, and likewise the number of negative ranks, i.e., the number of times the proposal obtained lower values of the metric than the compared algorithm. The Wilcoxon statistic is the sum of (positive or negative) ranks and, for a number of pairs q ≥ 20, can be considered approximately normally distributed [40]. Table 4 also presents the Z statistic of the standard normal distribution and the significance associated with the observed Wilcoxon statistic.
After analyzing the results (level of statistical significance α = 0.01), the following can be observed:
  • The proposed algorithm has obtained higher values in the E metric than the other evaluated algorithms;
  • For the REC metric, the proposed algorithm has obtained lower values than the HE, BBHE and MMBEBHE algorithms, and higher values than those obtained by QHELC and MMALCER;
  • For the PSNR metric, the proposed algorithm has obtained lower values than the GRMMCE, MMALCER, and QHELC algorithms, and higher values than those obtained by HE, BBHE, and MMBEBHE.

4.2.2. Visual Assessment by the Dermatologist

In Figure 8 and Figure 9, the images enhanced with different state-of-the-art algorithms can be visualized. The images obtained by the multiscale MM based algorithms use an iteration number n = 4 .
It can be seen that MGRTH and MMALCER are the algorithms that preserve the most features and provide the best sharpness. They also avoid adding unnecessary brightness and improve the visualization of the circumscribed skin. These algorithms are possibly the most applicable to dermoscopic image assessment and classification.

5. Conclusions

In this work, a contrast and detail enhancement algorithm was presented. The proposed algorithm is based on multi-scale morphological operations. The extraction of features from the medical images is performed by combining the operations of the classic top-hat with the top-hat by reconstruction. This combination of operations is used at multiple scales, which are finally added to the image in a strategic way to enhance the useful features of the image, such as details and edges.
The numerical and visual results show that MGRTH improves the contrast of melanoma images according to the REC metric and is superior to the compared algorithms in improving image detail according to the E metric. Among the compared multiscale top-hat transform-based algorithms, MGRTH is the second best at introducing low distortion during detail and contrast enhancement.
In future work, this algorithm may be useful for preprocessing images before applying deep learning methods for segmentation, detection, or classification purposes.

Author Contributions

Conceptualization, J.C.M.-R.; methodology, J.C.M.-R.; software, J.C.M.-R.; validation, J.L.V.N., H.L.-A., J.F., L.A.S.T.; formal analysis, M.G.-T., D.P.P.-R. and J.D.M.-R.; investigation, J.C.M.-R.; resources, J.L.V.N.; data curation, J.C.M.-R., L.R.B.P. and D.N.L.C.; writing—original draft preparation, J.C.M.-R., J.L.V.N., M.G.-T., J.F.; writing—review and editing, J.L.V.N., M.G.-T., H.L.-A., J.F., D.P.P.-R. and J.D.M.-R.; visualization, S.A.G., L.S.R., L.A.S.T.; supervision, H.L.-A.; project administration, J.L.V.N.; funding acquisition, H.L.-A., J.L.V.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by CONACYT-Paraguay grant numbers PINV18-1199 and POSG17-53.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The database used in this work contains 236 color images of benign and malignant melanocytic lesions, as used in [33].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, H.; Wang, Z. Perceptual Quality Assessment of Medical Images. In Encyclopedia of Biomedical Engineering; Elsevier: Amsterdam, The Netherlands, 2019; pp. 588–596. [Google Scholar]
  2. Sequeira, A.; Joao, A.; Tiago, J.; Gambaruto, A. Computational advances applied to medical image processing: An update. Open Access Bioinform. 2016, 8, 1–15. [Google Scholar] [CrossRef]
  3. Gonzalez, R.; Woods, R. Digital Image Processing, 2nd ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 2002; Volume 793. [Google Scholar]
  4. Zhang, G.; Yan, P.; Zhao, H.; Zhang, X. A Contrast Enhancement Algorithm for Low-Dose CT Images Based on Local Histogram Equalization. In Proceedings of the 2008 2nd International Conference on Bioinformatics and Biomedical Engineering, Shanghai, China, 16–18 May 2008. [Google Scholar]
  5. Veluchamy, M.; Subramani, B. Image contrast and color enhancement using adaptive gamma correction and histogram equalization. Optik 2019, 183, 329–337. [Google Scholar] [CrossRef]
  6. Kim, Y.T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans. Consum. Electron. 1997, 43, 1–8. [Google Scholar]
  7. Wang, Y.; Chen, Q.; Zhang, B. Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Trans. Consum. Electron. 1999, 45, 68–75. [Google Scholar] [CrossRef]
  8. Chen, S.D.; Ramli, A. Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans. Consum. Electron. 2003, 49, 1310–1319. [Google Scholar] [CrossRef]
  9. Pineda, I.A.B.; Caballero, R.D.M.; Silva, J.J.C.; Román, J.C.M.; Noguera, J.L.V. Quadri-histogram equalization using cutoff limits based on the size of each histogram with preservation of average brightness. Signal Image Video Process. 2019, 13, 843–851. [Google Scholar] [CrossRef]
  10. Bai, X.; Zhou, F.; Xue, B. Image enhancement using multi scale image features extracted by top-hat transform. Opt. Laser Technol. 2012, 44, 328–336. [Google Scholar] [CrossRef]
  11. Hassanpour, H.; Samadiani, N.; Salehi, S.M. Using morphological transforms to enhance the contrast of medical images. Egypt. J. Radiol. Nucl. Med. 2015, 46, 481–489. [Google Scholar] [CrossRef]
  12. Arya, A.; Bhateja, V.; Nigam, M.; Bhadauria, A.S. Enhancement of Brain MR-T1/T2 Images Using Mathematical Morphology. In Information and Communication Technology for Sustainable Development; Springer: Singapore, 2019; pp. 833–840. [Google Scholar]
  13. Román, J.C.M.; Noguera, J.L.V.; Legal-Ayala, H.; Pinto-Roa, D.; Gomez-Guerrero, S.; García-Torres, M. Entropy and Contrast Enhancement of Infrared Thermal Images Using the Multiscale Top-Hat Transform. Entropy 2019, 21, 244. [Google Scholar] [CrossRef]
  14. Soille, P. Morphological Image Analysis; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  15. Damian, F.A.; Moldovanu, S.; Dey, N.; Ashour, A.S.; Moraru, L. Feature Selection of Non-Dermoscopic Skin Lesion Images for Nevus and Melanoma Classification. Computation 2020, 8, 41. [Google Scholar] [CrossRef]
  16. Aswini, S.; Suresh, A.; Priya, S.; Krishna, B.V.S. Retinal Vessel Segmentation Using Morphological Top Hat Approach On Diabetic Retinopathy Images. In Proceedings of the 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB), Chennai, India, 27–28 February 2018. [Google Scholar] [CrossRef]
  17. Mukhopadhyay, S.; Chanda, B. A multiscale morphological approach to local contrast enhancement. Signal Process. 2000, 80, 685–696. [Google Scholar] [CrossRef]
  18. Peng, B.; Wang, Y.; Yang, X. A Multiscale Morphological Approach to Local Contrast Enhancement for Ultrasound Images. In Proceedings of the 2010 International Conference on Computational and Information Sciences, Chengdu, China, 17–19 December 2010. [Google Scholar]
  19. Kamra, A.; Jain, V.K. Enhancement of subtle signs in mammograms using multiscale morphological approach. In Proceedings of the 2013 IEEE Point-of-Care Healthcare Technologies (PHT), Bangalore, India, 16–18 January 2013. [Google Scholar]
  20. Bai, X. Morphological feature extraction for detail maintained image enhancement by using two types of alternating filters and threshold constrained strategy. Optik 2015, 126, 5038–5043. [Google Scholar] [CrossRef]
  21. Wang, G.; Wang, J.; Li, M.; Zheng, Y.; Wang, K. Hand Vein Image Enhancement Based on Multi-Scale Top-Hat Transform. Cybern. Inf. Technol. 2016, 16, 125–134. [Google Scholar] [CrossRef]
  22. Landini, G.; Galton, A.; Randell, D.; Fouad, S. Novel applications of discrete mereotopology to mathematical morphology. Signal Process. Image Commun. 2019, 76, 109–117. [Google Scholar] [CrossRef]
  23. Román, J.C.M.; Escobar, R.; Martínez, F.; Noguera, J.L.V.; Legal-Ayala, H.; Pinto-Roa, D.P. Medical Image Enhancement With Brightness and Detail Preserving Using Multiscale Top-hat Transform by Reconstruction. Electron. Notes Theor. Comput. Sci. 2020, 349, 69–80. [Google Scholar] [CrossRef]
  24. Román, J.C.M.; Fretes, V.R.; Adorno, C.G.; Silva, R.G.; Noguera, J.L.V.; Legal-Ayala, H.; Mello-Román, J.D.; Torres, R.D.E.; Facon, J. Panoramic Dental Radiography Image Enhancement Using Multiscale Mathematical Morphology. Sensors 2021, 21, 3110. [Google Scholar] [CrossRef]
  25. Nasr-Esfahani, E.; Samavi, S.; Karimi, N.; Soroushmehr, S.; Ward, K.; Jafari, M.; Felfeliyan, B.; Nallamothu, B.; Najarian, K. Vessel extraction in X-ray angiograms using deep learning. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016. [Google Scholar] [CrossRef]
  26. Wang, Y.L.; Mu, S.S. Edge Detection Algorithm Based on the Top-hat Operator. In DEStech Transactions on Computer Science and Engineering; DEStech Publications: Lancaster, PA, USA, 2017. [Google Scholar] [CrossRef]
  27. Bai, X. Morphological image fusion using the extracted image regions and details based on multi-scale top-hat transform and toggle contrast operator. Digit. Signal Process. 2013, 23, 542–554. [Google Scholar] [CrossRef]
  28. Fazlali, H.R.; Karimi, N.; Soroushmehr, S.M.R.; Shirani, S.; Nallamothu, B.K.; Ward, K.R.; Samavi, S.; Najarian, K. Vessel segmentation and catheter detection in X-ray angiograms using superpixels. Med. Biol. Eng. Comput. 2018, 56, 1515–1530. [Google Scholar] [CrossRef]
  29. Sengar, N.; Dutta, M.K. Automated method for hierarchal detection and grading of diabetic retinopathy. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2017, 1163, 1–11. [Google Scholar] [CrossRef]
  30. Atli, İ.; Gedik, O.S. Sine-Net: A fully convolutional deep learning architecture for retinal blood vessel segmentation. Eng. Sci. Technol. Int. J. 2021, 24, 271–283. [Google Scholar] [CrossRef]
  31. Zuiderveld, K. Contrast Limited Adaptive Histogram Equalization. In Graphic Gems IV; Elsevier: Amsterdam, The Netherlands, 1994; pp. 474–485. [Google Scholar] [CrossRef]
  32. Lantuejoul, C.; Maisonneuve, F. Geodesic methods in quantitative image analysis. Pattern Recognit. 1984, 17, 177–187. [Google Scholar] [CrossRef]
  33. Beuren, A.T.; Janasieivicz, R.; Pinheiro, G.; Grando, N.; Facon, J. Skin melanoma segmentation by morphological approach. In Proceedings of the International Conference on Advances in Computing, Communications and Informatics—ICACCI ’12, Chennai, India, 3–5 August 2012; ACM Press: New York, NY, USA, 2012. [Google Scholar] [CrossRef]
  34. Schneider, C.A.; Rasband, W.S.; Eliceiri, K.W. NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 2012, 9, 671–675. [Google Scholar] [CrossRef] [PubMed]
  35. Legland, D.; Arganda-Carreras, I.; Andrey, P. MorphoLibJ: Integrated library and plugins for mathematical morphology with ImageJ. Bioinformatics 2016, 32, 3532–3534. [Google Scholar] [CrossRef] [PubMed]
  36. Zhao, C.; Wang, Z.; Li, H.; Wu, X.; Qiao, S.; Sun, J. A new approach for medical image enhancement based on luminance-level modulation and gradient modulation. Biomed. Signal Process. Control 2019, 48, 189–196. [Google Scholar] [CrossRef]
  37. Salem, N.; Malik, H.; Shams, A. Medical image enhancement based on histogram algorithms. Procedia Comput. Sci. 2019, 163, 300–311. [Google Scholar] [CrossRef]
  38. Joseph, J.; Periyasamy, R. A fully customized enhancement scheme for controlling brightness error and contrast in magnetic resonance images. Biomed. Signal Process. Control 2018, 39, 271–283. [Google Scholar] [CrossRef]
  39. Bai, X. Image enhancement through contrast enlargement using the image regions extracted by multiscale top-hat by reconstruction. Optik 2013, 124, 4421–4424. [Google Scholar] [CrossRef]
  40. Walpole, R.E.; Myers, R.H.; Myers, S.L.; Cruz, R. Probabilidad y Estadística; McGraw-Hill: Mexico City, Mexico, 1992; Volume 624. [Google Scholar]
Figure 1. Dermoscopy images. Visual results obtained by MGRTH with n = 3 and ω = 0.25.
Figure 2. Comparison with respect to metric E.
Figure 3. Comparison with respect to metric PSNR.
Figure 4. Ratio between PSNR and REC.
Figure 5. Visual results obtained by MGRTH with different ω.
Figure 6. Visual results obtained by MGRTH in the different iterations.
Figure 7. Comparison with respect to metric E.
Figure 8. Malignant image. Visual results obtained by the algorithms.
Figure 9. Benign image. Visual results obtained by the algorithms.
Table 1. Parameters of the proposed algorithm.

| Algorithm | Initial Structuring Element (Disk), H | Number of Iterations, n | Contrast Setting Weight, ω |
|---|---|---|---|
| MGRTH | 1 | [2–20] | [0.25, 0.50, 0.75] |
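As context for the parameters above: the abstract describes extracting local bright features (top-hat) and dark features (bottom-hat) at several scales, then adding the bright and subtracting the dark features. The sketch below illustrates that general multi-scale top-hat scheme in pure Python, using plain (non-reconstructive) openings/closings and a square window in place of the paper's disk-shaped element and reconstruction operators; the function names, toy image, and defaults n = 3, ω = 0.25 (matching Table 1) are illustrative assumptions, not the authors' implementation.

```python
def _filter(img, r, pick):
    """Min/max filter over a (2r+1)x(2r+1) square window, clamped at borders."""
    h, w = len(img), len(img[0])
    return [[pick(img[y][x]
                  for y in range(max(0, i - r), min(h, i + r + 1))
                  for x in range(max(0, j - r), min(w, j + r + 1)))
             for j in range(w)] for i in range(h)]

def erode(img, r):   return _filter(img, r, min)
def dilate(img, r):  return _filter(img, r, max)
def opening(img, r): return dilate(erode(img, r), r)
def closing(img, r): return erode(dilate(img, r), r)

def enhance(img, n=3, w=0.25):
    """Add bright (white top-hat) and subtract dark (black top-hat) detail
    over scales 1..n, each weighted by w; result clamped to [0, 255]."""
    h_, w_ = len(img), len(img[0])
    out = [[float(v) for v in row] for row in img]
    for s in range(1, n + 1):
        op, cl = opening(img, s), closing(img, s)
        for i in range(h_):
            for j in range(w_):
                wth = img[i][j] - op[i][j]   # local bright features
                bth = cl[i][j] - img[i][j]   # local dark features
                out[i][j] += w * wth - w * bth
    return [[min(255.0, max(0.0, v)) for v in row] for row in out]
```

On a flat background with a single bright pixel, the spike is amplified while the flat background is left unchanged, which is the contrast-enhancement behavior the paper targets.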
Table 2. Parameters of the algorithms based on multi-scale MM.

| Algorithms | Initial Structuring Element (Disk), H | H′ | Number of Iterations, n | Contrast Setting Weight, ω |
|---|---|---|---|---|
| MGRTH | 1 | – | [2–20] | 0.25 |
| GRMMCE [23] | 1 | – | [2–20] | 1 |
| MMALCER [39] | 1 | – | [2–20] | 0.5 |
Table 3. Average results obtained by the compared algorithms.

| Algorithms | E | REC | PSNR |
|---|---|---|---|
| I | 6.581 | – | – |
| MGRTH | 6.838 | 1.049 | 28.018 |
| GRMMCE | 6.782 | 1.054 | 28.109 |
| MMALCER | 6.797 | 1.044 | 29.032 |
| HE | 6.408 | 1.249 | 12.420 |
| BBHE | 6.444 | 1.241 | 17.191 |
| MMBEBHE | 6.411 | 1.186 | 20.223 |
| QHELC | 6.554 | 1.024 | 38.342 |
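Two of the three metrics in Table 3 are standard: E is Shannon entropy and PSNR is peak signal-to-noise ratio (REC is the paper's own ratio and is not reproduced here). The following is a minimal pure-Python sketch of how E and PSNR are conventionally computed for 8-bit grayscale images; it is an illustration of the standard definitions, not the authors' evaluation code.

```python
import math

def entropy(img):
    """Shannon entropy E (bits) of an 8-bit grayscale image (rows of ints 0-255)."""
    hist = [0] * 256
    for row in img:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    return -sum((c / total) * math.log2(c / total) for c in hist if c)

def psnr(reference, processed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    n, se = 0, 0.0
    for r_row, p_row in zip(reference, processed):
        for a, b in zip(r_row, p_row):
            se += (a - b) ** 2
            n += 1
    mse = se / n
    return float('inf') if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)
```

A flat image has entropy 0, a half-and-half two-level image has entropy 1 bit, and a uniform offset of 5 gray levels gives PSNR = 10·log10(255²/25) ≈ 34.15 dB.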
Table 4. The Wilcoxon signed rank test for paired observations.

| Comparison | Statistic | E | REC | PSNR |
|---|---|---|---|---|
| MGRTH–I | Negative ranks | 33 | – | – |
| | Positive ranks | 203 | – | – |
| | Z | −11.55 | – | – |
| | Sig. asymptotic (bilateral) | ≈0 | – | – |
| MGRTH–GRMMCE | Negative ranks | 40 | 160 | 133 |
| | Positive ranks | 196 | 76 | 103 |
| | Z | −10.871 | −6.987 | −1.63 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | 0.103 |
| MGRTH–MMALCER | Negative ranks | 32 | 29 | 232 |
| | Positive ranks | 204 | 207 | 4 |
| | Z | −10.188 | −11.524 | −13.267 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | ≈0 |
| MGRTH–HE | Negative ranks | 6 | 218 | 4 |
| | Positive ranks | 230 | 18 | 232 |
| | Z | −13.228 | −12.981 | −13.287 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | ≈0 |
| MGRTH–BBHE | Negative ranks | 8 | 214 | 15 |
| | Positive ranks | 228 | 22 | 221 |
| | Z | −13.156 | −12.789 | −12.698 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | ≈0 |
| MGRTH–MMBEBHE | Negative ranks | 9 | 212 | 30 |
| | Positive ranks | 227 | 24 | 206 |
| | Z | −13.167 | −12.547 | −11.935 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | ≈0 |
| MGRTH–QHELC | Negative ranks | 24 | 48 | 232 |
| | Positive ranks | 212 | 188 | 84 |
| | Z | −12.182 | −10.467 | −13.303 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | ≈0 |
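Z values like those in Table 4 come from the normal approximation to the Wilcoxon signed-rank statistic over the 236 paired per-image metric values. The sketch below shows that approximation in pure Python; it omits the tie/zero corrections that statistical packages apply, so it will not exactly reproduce package output (in practice one would use a library routine such as `scipy.stats.wilcoxon`), and the paired samples in the usage example are synthetic.

```python
import math

def wilcoxon_signed_rank_z(x, y):
    """Z statistic of the paired Wilcoxon signed-rank test,
    normal approximation, zero differences dropped, no tie correction."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    # Rank the absolute differences, assigning average ranks to ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)  # positive-rank sum
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_pos - mu) / sigma
```

For ten pairs where the second sample is uniformly larger, every rank is negative and the approximation gives Z ≈ −2.80, illustrating how consistent per-image differences yield large-magnitude negative Z values as in the table.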
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.