# Dermoscopy Images Enhancement via Multi-Scale Morphological Operations


Computer Engineering Department, Universidad Americana, Asunción 1206, Paraguay

Facultad Politécnica, Universidad Nacional de Asunción, San Lorenzo 111421, Paraguay

Data Science and Big Data Lab, Universidad Pablo de Olavide, 41013 Seville, Spain

Department of Computer and Electronics, Universidade Federal do Espírito Santo, São Mateus 29932-540, ES, Brazil

Signal Theory and Communications Department, Universitat Politècnica de Catalunya, 08034 Barcelona, Spain

Hospital de Clínicas, Facultad de Ciencias Médicas, Universidad Nacional de Asunción, San Lorenzo 111421, Paraguay

Facultad de Ciencias Exactas y Tecnológicas, Universidad Nacional de Concepción, Concepción 010123, Paraguay

Author to whom correspondence should be addressed.

Academic Editor: Cecilia Di Ruberto

Received: 9 September 2021 / Revised: 29 September 2021 / Accepted: 30 September 2021 / Published: 7 October 2021

(This article belongs to the Topic Medical Image Analysis)

Skin dermoscopy images frequently lack contrast because of varying lighting conditions. Low contrast is particularly common in dermoscopy images of melanoma, causing the lesion to blend into the surrounding skin and obscuring fine details in the image. It is therefore necessary to design an approach that enhances both the contrast and the details of dermoscopic images. In this work, we propose a multi-scale morphological approach that reduces the impact of poor contrast and improves image quality. Top-hat transforms by reconstruction extract the local bright and dark features of the image; the bright features are then added to the image and the dark features subtracted from it, yielding images with higher contrast and richer detail. The proposed approach was applied to a database of 236 color images of benign and malignant melanocytic lesions. The results show that the multi-scale morphological approach by reconstruction is a competitive algorithm, achieving a very satisfactory level of contrast and detail enhancement in dermoscopy images.

Medical images are visual representations of the interior of a body. These representations have facilitated health care tasks such as diagnosing diseases. Despite technological advances in recent years, images still suffer from various types of degradation during acquisition, storage, and transmission [1]. These factors can cause inefficient or inaccurate diagnoses, thus compromising the healing of patients.

Many techniques have been proposed to improve the contrast of medical images [2]. The traditional histogram equalization (HE) [3], one of the most popular techniques, was the first attempt to automatically improve contrast. HE distributes the gray levels within the image (each gray level has an equal chance to occur) to enhance contrast and brightness. Studies have shown that HE introduces saturation and over-enhancement in the images [4,5]. Several improved techniques have been proposed to maintain average image brightness, reducing saturation effects, thus avoiding unnatural image enhancement. Some of these techniques are: brightness preserving bi-histogram equalization (BBHE) [6], dual sub-image histogram equalization (DSIHE) [7], minimum mean brightness error bi-histogram equalization (MMBEBHE) [8], and quadri-histogram equalization with limited contrast (QHELC) [9].

With the emergence of mathematical morphology (MM) based approaches in recent decades, new techniques have been developed for contrast enhancement [10,11,12,13]. Due to its ability to extract dark and light features from images using structuring elements of different shapes and sizes [14], the top-hat transformation has received a lot of attention. In [15], the top-hat transformation was used to correct the illumination of images with melanocytic lesions as a preprocessing for a subsequent feature extraction study. In [16], a method for segmentation of retinal blood vessels is presented. Vessel enhancement is performed using the contrast enhancement technique based on the top-hat transform.

Various authors have proposed to use a multi-scale approach, called multi-scale top-hat transformation (MTH). An advantage of MTH is that it allows processing the image content from the most global to the most detailed level. Several works propose to improve different types of medical images by integrating MTH in a morphological image enhancement approach [10,17,18,19,20,21,22,23,24]. Currently, in the field of computer vision, MTH-based algorithms are used as a preliminary step for other applications based on artificial intelligence. For example, in [25], a deep learning approach using convolutional neural networks was proposed to detect vessel regions in angiography images. In that work, the multi-scale top-hat transform for contrast enhancement (MSTH) algorithm [10] was used to preprocess the images by enhancing their contrast. In [26], a method for edge detection in images based on top-hat operators with multidirectional and multiscale structuring elements was proposed.

A multimodal medical image fusion scheme based on multiscale top-hat transform combined with morphological contrast operators is presented in [27].

In [28], an automatic coronary artery segmentation approach was proposed; in the preprocessing stage, the input image was processed with the MSTH algorithm for better segmentation of the coronary arteries. In [29], MSTH was used as the first step of an algorithm for detecting bright lesions in retinal images. In [30], Sine-Net, an automated tool based on a fully convolutional deep learning architecture for the segmentation of blood vessels in retinal images, was proposed. The architecture obtained better segmentation results on three databases by combining the MSTH and contrast-limited adaptive histogram equalization (CLAHE) [31] algorithms in the preprocessing.

In this work, an improvement of MTH is proposed. The underlying idea is to replace the opening and closing operations with morphological filters by reconstruction, because operators by reconstruction avoid damaging the contours, edges, and other important structures of the medical image.

For this purpose, MTH integrates the concept of geodesic reconstruction [32]. By combining the advantages of morphological reconstruction with the ability of MTH to extract dark and bright characteristics, the resulting strategy is a multi-scale morphological approach capable of enhancing medical images. Experiments show that the resulting skin dermoscopy images have less distortion, greater detail accuracy, and better contrast than those produced by other image enhancement approaches.

We can summarize the contributions in this work as follows:

- (a) Propose a new MTH-based strategy that incorporates the concept of geodesic reconstruction in combination with a mathematical morphological approach;
- (b) Design a novel contrast enhancement algorithm based on the proposed MTH approach.

In mathematical morphology (MM), the aim is to analyze and extract unknown structures contained in an image. For this purpose, it uses a structuring element of known shape and size together with the erosion and dilation operators [14]. By providing a wide range of filters built from combinations of these two basic operators, MM offers a relatively simple yet powerful set of tools for image analysis.

Given an image I, the morphological dilation ${\delta}_{H}\left(I\right)$ and erosion ${\epsilon}_{H}\left(I\right)$ of I at pixel x, with respect to the structuring element H of domain ${D}_{H}$, are defined as follows [14]:

$$\begin{array}{c}\hfill {\delta}_{H}\left(I\right)\left(x\right)=max\{I(x-y),\forall y\in {D}_{H}\},\\ \hfill {\epsilon}_{H}\left(I\right)\left(x\right)=min\{I(x+y),\forall y\in {D}_{H}\}.\end{array}$$

Opening $\gamma (I,mH)$ is the sequential combination of erosion ${\epsilon}_{mH}\left(I\right)$ with $mH$ (structuring element H of size m) followed by dilation ${\delta}_{\tilde{mH}}\left(I\right)$. Closing, on the other hand, $\varphi (I,mH)$ is the sequential combination of dilation ${\delta}_{mH}\left(I\right)$ with $mH$ followed by erosion ${\epsilon}_{\tilde{mH}}\left(I\right)$. Both operators are defined as [14]:

$$\begin{array}{ccc}\hfill \gamma (I,mH)& =& {\delta}_{\tilde{mH}}\left({\epsilon}_{mH}\left(I\right)\right),\hfill \\ \hfill \varphi (I,mH)& =& {\epsilon}_{\tilde{mH}}\left({\delta}_{mH}\left(I\right)\right),\hfill \end{array}$$

where m is the size of the structuring element and $\tilde{mH}$ is the reflection of $mH$. For a symmetrical structuring element, $mH=\tilde{mH}$.

Viewing an image as a two-dimensional surface in a three-dimensional space, applying opening (closing) has the consequence of removing peaks (or filling valleys) smaller than the structuring element.
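This peak/valley behavior can be illustrated with a minimal NumPy sketch of flat erosion, dilation, and opening over a square neighborhood (the test image and helper functions `erode`/`dilate` are our own invented illustration, not the paper's implementation): a single-pixel peak narrower than the structuring element is removed by opening, while a wider plateau keeps its original height.

```python
import numpy as np

def erode(I, r=1):
    """Flat erosion: minimum over a (2r+1)x(2r+1) square neighborhood."""
    P = np.pad(I, r, mode="edge")
    out = np.empty_like(I)
    for x in range(I.shape[0]):
        for y in range(I.shape[1]):
            out[x, y] = P[x:x + 2 * r + 1, y:y + 2 * r + 1].min()
    return out

def dilate(I, r=1):
    """Flat dilation: maximum over the same neighborhood."""
    P = np.pad(I, r, mode="edge")
    out = np.empty_like(I)
    for x in range(I.shape[0]):
        for y in range(I.shape[1]):
            out[x, y] = P[x:x + 2 * r + 1, y:y + 2 * r + 1].max()
    return out

# Invented test image: one bright peak narrower than the structuring
# element and one bright plateau wider than it.
I = np.zeros((7, 7), dtype=np.uint8)
I[3, 3] = 200      # single-pixel peak
I[0:2, 0:5] = 100  # 2x5 plateau

opened = dilate(erode(I))   # opening: removes peaks smaller than the SE
closed = erode(dilate(I))   # closing: fills valleys smaller than the SE
```

Running this, `opened[3, 3]` becomes 0 (the peak is removed) while `opened[0, 0]` stays at 100 (the plateau survives).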

By taking the difference between the original image and its opening, peaks of different sizes can be extracted. Dually, valleys can be extracted by taking the difference between the closed image and the original one. The top-hat transform is the mathematical formalism of this idea: the top-hat transform by opening ($WTH$) [14] is the subtraction of the opening $\gamma (I,mH)$ from the original image I, and the top-hat transform by closing ($BTH$) [14] is the subtraction of the original image I from the morphological closing $\varphi (I,mH)$, defined as follows:

$$\begin{array}{ccc}\hfill WTH(I,mH)& =& I-\gamma (I,mH),\hfill \\ \hfill BTH(I,mH)& =& \varphi (I,mH)-I,\hfill \end{array}$$

where m is the size of the structuring element.
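A small hedged sketch of the two top-hat transforms, using SciPy's grayscale morphology on an invented test image (a bright speck and a dark pit on a flat background): WTH isolates the bright detail and BTH the dark one.

```python
import numpy as np
from scipy import ndimage as ndi

# Invented test image: flat background with one bright and one dark detail.
I = np.full((7, 7), 100, dtype=np.int32)
I[2, 2] = 200   # small bright detail
I[4, 4] = 0     # small dark detail

# Top-hat by opening (WTH): original minus opening -> bright details.
wth = I - ndi.grey_opening(I, size=(3, 3))
# Top-hat by closing (BTH): closing minus original -> dark details.
bth = ndi.grey_closing(I, size=(3, 3)) - I
```

Here `wth` is nonzero only at the bright speck and `bth` only at the dark pit, matching the peak/valley interpretation above.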

In geodesic transformations, two equally sized input images are used, denoted as marker and mask. The first image (marker) is modified by a morphological transformation and restricted below (geodesic dilation) or above (geodesic erosion) the second image (mask) [14].

Let J and I be the marker and mask images, respectively, with the same domain (${D}_{J}={D}_{I}$). Geodesic dilation ${\delta}_{I}\left(J\right)$ and geodesic erosion ${\epsilon}_{I}\left(J\right)$ can be defined as [14]:

$$\begin{array}{ccc}\hfill {\delta}_{I}^{\left(1\right)}\left(J\right)& =& {\delta}_{I}\left(J\right)\wedge I,\quad \text{with }J\left(x\right)\le I\left(x\right),\hfill \\ \hfill {\epsilon}_{I}^{\left(1\right)}\left(J\right)& =& {\epsilon}_{I}\left(J\right)\vee I,\quad \text{with }J\left(x\right)\ge I\left(x\right),\hfill \end{array}$$

where ∧ is the pixel-wise minimum of $J\left(x\right)$ and $I\left(x\right)$ and ∨ is the pixel-wise maximum of $J\left(x\right)$ and $I\left(x\right)$. Performing the geodesic dilation or erosion of J with respect to I k times gives ${\delta}_{I}^{\left(k\right)}\left(J\right)={\delta}_{I}^{\left(1\right)}\left[{\delta}_{I}^{(k-1)}\left(J\right)\right]$ and ${\epsilon}_{I}^{\left(k\right)}\left(J\right)={\epsilon}_{I}^{\left(1\right)}\left[{\epsilon}_{I}^{(k-1)}\left(J\right)\right]$.

In practice, we can define geodesic reconstruction ${\rho}_{I}\left(J\right)$ and dual geodesic reconstruction ${\rho}_{I}^{*}\left(J\right)$ as follows:

$$\begin{array}{ccc}\hfill {\rho}_{I}\left(J\right)& =& {\delta}_{I}^{\left(i\right)}\left(J\right),\quad \text{with }i\text{ such that }{\delta}_{I}^{\left(i\right)}\left(J\right)={\delta}_{I}^{(i+1)}\left(J\right),\hfill \\ \hfill {\rho}_{I}^{*}\left(J\right)& =& {\epsilon}_{I}^{\left(i\right)}\left(J\right),\quad \text{with }i\text{ such that }{\epsilon}_{I}^{\left(i\right)}\left(J\right)={\epsilon}_{I}^{(i+1)}\left(J\right).\hfill \end{array}$$

Similar to the standard opening and closing, the opening ${\gamma}_{\rho}^{\left(m\right)}$ and closing ${\varphi}_{\rho}^{\left(m\right)}$ by reconstruction of an image I can be defined as follows [14]:

$$\begin{array}{ccc}\hfill {\gamma}_{\rho}^{\left(m\right)}\left(I\right)& =& {\rho}_{I}\left({\epsilon}_{mH}\left(I\right)\right),\hfill \\ \hfill {\varphi}_{\rho}^{\left(m\right)}\left(I\right)& =& {\rho}_{I}^{*}\left({\delta}_{mH}\left(I\right)\right),\hfill \end{array}$$

where I is the mask image, ${\epsilon}_{mH}\left(I\right)$ and ${\delta}_{mH}\left(I\right)$ are the marker images, and m is the size of the structuring element.

In all experiments, the structuring element has the shape of a disk, and m indicates the size of its radius.

The use of morphological operators by reconstruction has shown that, contrary to the standard ones, they remove details without modifying the structure of the remaining objects. Another significant advantage is that geodesic reconstructions use an elementary isotropic structuring element, so it is not necessary to specify sizes as in standard morphological operators. Analogously to the standard top-hat transform, it is possible to preserve or remove structures through geodesic reconstruction (dual geodesic reconstruction), which plays the role of the opening (closing).
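This structure-preserving property can be demonstrated with a hedged sketch of opening by reconstruction: the marker is the eroded image and the geodesic dilations are iterated until stability, following the definitions above (the helper `reconstruct` and the 9×9 test image are our own illustration; SciPy supplies the flat erosion/dilation). An isolated speck is removed entirely, while the larger object is restored with its exact original shape.

```python
import numpy as np
from scipy import ndimage as ndi

def reconstruct(marker, mask):
    """Geodesic reconstruction by dilation: iterate (dilate marker) ∧ mask
    with the elementary 3x3 structuring element until stability."""
    prev = marker
    while True:
        cur = np.minimum(ndi.grey_dilation(prev, size=(3, 3)), mask)
        if np.array_equal(cur, prev):
            return cur
        prev = cur

# Invented test image: a 4x4 bright object plus an isolated 1-pixel speck.
I = np.zeros((9, 9), dtype=np.int32)
I[1:5, 1:5] = 120
I[7, 7] = 120

marker = ndi.grey_erosion(I, size=(3, 3))  # marker: erosion of the mask I
opened_rec = reconstruct(marker, I)        # opening by reconstruction
```

After reconstruction, the speck at (7, 7) is gone, while the 4×4 object is recovered pixel-for-pixel rather than rounded off by the structuring element.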

Structures removed from the image I by the opening by reconstruction can be recovered with the white top-hat transform by reconstruction (RWTH); similarly, structures removed by the closing by reconstruction can be recovered with the dark top-hat transform by reconstruction (RBTH), as follows [14]:

$$\begin{array}{ccc}\hfill RWT{H}^{\left(m\right)}\left(I\right)& =& I-{\gamma}_{\rho}^{\left(m\right)}\left(I\right),\hfill \\ \hfill RBT{H}^{\left(m\right)}\left(I\right)& =& {\varphi}_{\rho}^{\left(m\right)}\left(I\right)-I.\hfill \end{array}$$

The proposed algorithm, called Multi-scale Geodesic Reconstruction based Top-Hat transform (MGRTH), is presented in this section. Additionally, all operations are presented in detail, step by step.

Let I be the image, H be the structuring element and n be the number of iterations. The proposed algorithm is divided into five stages.

**First stage:** the bright structures at scale m are extracted by $RWT{H}_{m}$ and the dark structures at scale m by $RBT{H}_{m}$ as follows:$$RWT{H}_{m}={\rho}_{WTH\left(I,mH\right)}\left(RWT{H}^{\left(m\right)}\left(I\right)\right),$$$$RBT{H}_{m}={\rho}_{BTH\left(I,mH\right)}\left(RBT{H}^{\left(m\right)}\left(I\right)\right).$$
**Second stage:** the bright residues $S{W}_{m-1}$ are extracted from the bright structures at scales m and $m-1$, and the dark residues $S{B}_{m-1}$ are extracted from the dark structures at scales m and $m-1$, as follows:$$S{W}_{m-1}=\left\{\begin{array}{ll}RWT{H}_{m}-RWT{H}_{m-1}&\text{for }m=2,\\ RWT{H}_{m}-S{W}_{m-2}&\text{for }m>2,\end{array}\right.$$$$S{B}_{m-1}=\left\{\begin{array}{ll}RBT{H}_{m}-RBT{H}_{m-1}&\text{for }m=2,\\ RBT{H}_{m}-S{B}_{m-2}&\text{for }m>2.\end{array}\right.$$
**Third stage:** the bright details across all scales, extracted at the first stage by $RWT{H}_{m}$, are accumulated into $SRWTH$, and the dark details extracted by $RBT{H}_{m}$ are accumulated into $SRBTH$, as follows:$$SRWTH=\sum _{1\le m\le n}\left\{RWT{H}_{m}\right\},$$$$SRBTH=\sum _{1\le m\le n}\left\{RBT{H}_{m}\right\}.$$
**Fourth stage:** the bright residues extracted at the second stage by $S{W}_{m}$ are accumulated into $SSW$, and the dark residues extracted by $S{B}_{m}$ into $SSB$, as follows [13]:$$SSW=\sum _{2\le m\le n}\left\{S{W}_{m-1}\right\},$$$$SSB=\sum _{2\le m\le n}\left\{S{B}_{m-1}\right\}.$$
**Final stage:** the enhanced image ${I}_{E}$ is computed per pixel as follows:$${I}_{E}=I+\omega \times max(SRWTH,SSW)-\omega \times max(SRBTH,SSB),$$where $\omega $ is the contrast adjustment weight. Figure 1 shows the original melanoma images on the left (a,c) and the MGRTH-enhanced images on the right (b,d).
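The five stages can be sketched end to end as follows. This is a hedged NumPy/SciPy illustration under our own assumptions, not the authors' ImageJ/MorphoLibJ implementation: the helper names `disk`, `reconstruct`, and `mgrth` are ours, the accumulations follow the summation formulas given above, and output values are clipped to [0, 255].

```python
import numpy as np
from scipy import ndimage as ndi

def disk(r):
    """Boolean disk-shaped footprint of radius r (structuring element mH)."""
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return x * x + y * y <= r * r

def reconstruct(marker, mask, dual=False):
    """Geodesic (dual) reconstruction with the elementary 3x3 structuring
    element, iterated until stability."""
    prev = marker
    while True:
        if dual:   # dual reconstruction: erode the marker, stay above the mask
            cur = np.maximum(ndi.grey_erosion(prev, size=(3, 3)), mask)
        else:      # reconstruction: dilate the marker, stay below the mask
            cur = np.minimum(ndi.grey_dilation(prev, size=(3, 3)), mask)
        if np.array_equal(cur, prev):
            return cur
        prev = cur

def mgrth(I, n=4, w=0.25):
    """Sketch of the five MGRTH stages (integer arithmetic avoids overflow)."""
    I = I.astype(np.int64)
    RW, RB = {}, {}
    for m in range(1, n + 1):                        # stage 1: per-scale features
        fp = disk(m)
        ero = ndi.grey_erosion(I, footprint=fp)
        dil = ndi.grey_dilation(I, footprint=fp)
        rwth = I - reconstruct(ero, I)               # RWTH^(m): top-hat by reconstruction
        rbth = reconstruct(dil, I, dual=True) - I    # RBTH^(m)
        wth = I - ndi.grey_dilation(ero, footprint=fp)   # classic WTH (mask)
        bth = ndi.grey_erosion(dil, footprint=fp) - I    # classic BTH (mask)
        RW[m] = reconstruct(rwth, wth)               # RWTH_m (rwth <= wth holds)
        RB[m] = reconstruct(rbth, bth)               # RBTH_m
    SW, SB = {}, {}
    for m in range(2, n + 1):                        # stage 2: residues between scales
        SW[m - 1] = RW[m] - (RW[1] if m == 2 else SW[m - 2])
        SB[m - 1] = RB[m] - (RB[1] if m == 2 else SB[m - 2])
    SRWTH, SRBTH = sum(RW.values()), sum(RB.values())   # stage 3
    SSW, SSB = sum(SW.values()), sum(SB.values())       # stage 4
    # final stage: add bright features, subtract dark features, weighted by w
    out = I + w * np.maximum(SRWTH, SSW) - w * np.maximum(SRBTH, SSB)
    return np.clip(out, 0, 255)
```

On a perfectly flat image every top-hat is zero, so `mgrth` returns the image unchanged; on an image with a bright peak, the peak is brightened (up to saturation) relative to its surroundings.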

This section describes the experiments conducted to quantify the relative performance of the proposed algorithm. The database used in these experiments contains 236 color images of benign and malignant melanocytic lesions, previously used in [33]. For the tests on RGB images, the images were first converted to the HSV color space; the algorithms and evaluations were then applied to the V channel, and finally the images were converted back to RGB.
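The color handling just described can be sketched without an explicit HSV round trip: in the HSV model V = max(R, G, B), and modifying V while keeping H and S fixed amounts to scaling each RGB pixel by V_enhanced/V. A minimal NumPy sketch under that assumption (the function `enhance_on_v` and its `enhance_v` argument are our hypothetical names; any V-channel enhancement, such as the proposed algorithm, could be plugged in):

```python
import numpy as np

def enhance_on_v(rgb, enhance_v):
    """Apply a grayscale enhancement to the HSV value channel of an RGB image.
    Since V = max(R, G, B) and H, S stay fixed, changing V scales each RGB
    pixel by V_enhanced / V, so no explicit HSV conversion is needed."""
    rgb = rgb.astype(np.float64)
    v = rgb.max(axis=2)                      # the V channel
    v_e = enhance_v(v)                       # enhanced V channel
    ratio = np.divide(v_e, v, out=np.ones_like(v), where=v > 0)
    return np.clip(rgb * ratio[..., None], 0, 255)

# Identity enhancement leaves the image unchanged (sanity check).
img = np.array([[[10, 200, 60], [0, 0, 0]]], dtype=np.uint8)
same = enhance_on_v(img, lambda v: v)
```

The `where=v > 0` guard keeps pure-black pixels (V = 0) unchanged instead of dividing by zero.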

All algorithms were implemented using the ImageJ [34] library; for the algorithms based on MM, an extra library called MorphoLibJ [35] was used.

The results were evaluated with the following metrics:

- Entropy (E) [13,21,36]: E measures the amount of detail in the image and is defined as$$E\left(I\right)=-\sum _{k=0}^{L-1}P\left(k\right)lo{g}_{2}\left(P\left(k\right)\right),$$where $P\left(k\right)$ is the probability of occurrence of gray level k and L is the number of gray levels. A higher E value indicates an image with more detail;
- Peak Signal-to-Noise Ratio (PSNR) [10,21,37]: PSNR measures how much distortion is added to the image in the contrast enhancement process. PSNR is defined as,$$PSNR(I,{I}_{E})=10\times lo{g}_{10}{\displaystyle \frac{{(L-1)}^{2}}{MSE(I,{I}_{E})}},$$$$MSE(I,{I}_{E})={\displaystyle \frac{1}{M\times N}}\sum _{x=1}^{M}\sum _{y=1}^{N}{(I(x,y)-{I}_{E}(x,y))}^{2}.$$After the enhancement process, an image is considered to have low distortion if it has a high PSNR value;
- Relative Enhancement in Contrast (REC) [36,38]: REC measures the contrast of the enhanced melanoma image. REC is defined as,$$REC={\displaystyle \frac{C\left({I}_{E}\right)}{C\left(I\right)}},$$$$C\left(I\right)=20\times log\left[{\displaystyle \frac{1}{MN}}\sum _{x=1}^{M}\sum _{y=1}^{N}\left({\left(I(x,y)\right)}^{2}-{\mu}^{2}\right)\right],$$$$\mu ={\displaystyle \frac{1}{MN}}\sum _{x=1}^{M}\sum _{y=1}^{N}I(x,y).$$After image processing, if the REC value is higher than 1, the enhanced image is considered to have enhanced contrast.
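The three metrics can be sketched directly from the formulas above (our NumPy sketch, not the authors' evaluation code; it assumes integer gray levels 0..L−1, and the base-10 logarithm is assumed for the C(I) measure, whose base the text leaves implicit):

```python
import numpy as np

def entropy(I, L=256):
    """Shannon entropy of the gray-level histogram (metric E)."""
    p = np.bincount(I.ravel(), minlength=L) / I.size
    p = p[p > 0]                         # 0*log(0) terms contribute nothing
    return float(-np.sum(p * np.log2(p)))

def psnr(I, I_E, L=256):
    """Peak signal-to-noise ratio between original and enhanced images."""
    mse = np.mean((I.astype(np.float64) - I_E.astype(np.float64)) ** 2)
    return float(10 * np.log10((L - 1) ** 2 / mse))

def rec(I, I_E):
    """Relative enhancement in contrast: C(I_E) / C(I)."""
    def C(X):
        X = X.astype(np.float64)
        mu = X.mean()
        # log base 10 assumed; the text writes only "log"
        return 20 * np.log10(np.mean(X ** 2 - mu ** 2))
    return C(I_E) / C(I)
```

For instance, an image holding only two equally frequent gray levels has E = 1 bit, and an enhanced image that differs from the original by exactly one gray level everywhere has PSNR = 10 log10(255²) ≈ 48.13 dB.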

To test the performance of the proposed method, we have considered two different experiments:

- In the first part (Section 4.1), we performed parameter tuning to find good parameter values $\omega $ and n of the proposed algorithm. For this purpose, a comparison of the results obtained with respect to the number of iterations and the contrast adjustment weight was performed. The objective of this experiment was to observe the performance of the proposal with respect to the E, PSNR, and REC metrics;
- Then, in the second part (Section 4.2) the proposed algorithm was compared with algorithms based on the multiscale top-hat transform and algorithms based on histogram equalization.

In both experiments, apart from the numerical results, a visual assessment by a dermatologist is presented.

In this subsection, the settings of the MGRTH parameters, the number of iterations n and the contrast adjustment weight $\omega $, are described. For this purpose, the Shannon entropy Equation (17) and the PSNR Equation (18) are used. In addition, Equations (18) and (20) are used to visualize the relation between the PSNR and REC metrics, since more noise tends to be added as contrast increases. Table 1 shows the parameters of the MGRTH algorithm to be adjusted.

In Figure 2, it can be seen that MGRTH with $\omega =0.25$ shows entropy that keeps increasing over more iterations than with larger weights and, from $n=7$ onward, equals or exceeds the average values obtained with larger weights.

Figure 5 shows that as the value of $\omega $ increases, the brightness of the image also increases, causing bright or dark artifacts to appear. Because of this and the results obtained in the previous subsection, the value $\omega =0.25$ is chosen for the next experiment.

In Figure 6, we can see that as the iterations increase, distortions are introduced into the image. Compared to the original image (Figure 6a), the images in Figure 6c,d present higher sharpness of the lesions without much added brightness. An image with too much brightness may lead to dermoscopic interpretations of malignancy that do not correspond to the lesion; this can be considered an artifact of the modified image. The sharpness seen in the images in Figure 6c,d is also apparent in the healthy peripheral skin.

MGRTH was compared with histogram-based algorithms: HE, BBHE, MMBEBHE, and QHELC. These are good at improving the contrast and average brightness of medical images. It was also compared with competitive algorithms based on multi-scale MM: geodesic reconstruction multi-scale morphological contrast enhancement (GRMMCE) [23], and multi-scale morphological approach to local contrast enhancement by reconstruction (MMALCER) [39]. These are good at improving the local contrast of the images.

Table 2 shows the parameters of the algorithms based on multi-scale MM. The values of the parameter $\omega $ for the algorithms presented in [23,39] are those used in the reference articles.

In Figure 7, it can be observed that as n grows and starting from $n=4$, MGRTH obtains higher image entropy with respect to the compared algorithms. This gives an indication that the proposed algorithm is good at improving image detail. In Figure 7, the entropy value of the original image I is also added.

Table 3 shows the average results obtained by the compared algorithms. For the algorithms based on the multiscale top-hat transform the value of $n=4$ was considered. The best average results are highlighted in bold.

The average results in Table 3 show that:

- MGRTH has better average performance according to the E metric, indicating that the approach enhances the details of melanoma images;
- Among the algorithms based on the multiscale top-hat transform, MGRTH is the second best performer for PSNR. This means that it introduces low distortion to the images;
- According to the REC metric, all compared algorithms enhance images on average.

In the Wilcoxon signed rank test, the differences of the q pairs of observations are calculated, and ranks are assigned based on these differences in absolute value. Table 4 presents the number of positive ranks observed, i.e., the number of times the proposal obtained higher values of the metric than the compared algorithm, as well as the number of negative ranks, i.e., the number of times the proposal obtained lower values of the metric than the compared algorithm. The Wilcoxon statistic is the sum of the (positive or negative) ranks and, for a number of pairs $q\ge 20$, can be considered approximately normally distributed [40]. Table 4 also presents the Z statistic of the standard normal distribution and the significance associated with the observed Wilcoxon statistic.

After analyzing the results (level of statistical significance $\alpha $ = 0.01), the following can be observed:

- The proposed algorithm has obtained higher values in the E metric than the other evaluated algorithms;
- For the REC metric, the proposed algorithm has obtained lower values than the HE, BBHE and MMBEBHE algorithms, and higher values than those obtained by QHELC and MMALCER;
- For the PSNR metric, the proposed algorithm has obtained lower values than the GRMMCE, MMALCER, and QHELC algorithms, and higher values than those obtained by HE, BBHE, and MMBEBHE.

In Figure 8 and Figure 9, the images enhanced with different state-of-the-art algorithms can be visualized. The images obtained by the multiscale MM based algorithms use an iteration number $n=4$.

It can be seen that MGRTH and MMALCER are the algorithms that preserve the most features and provide the best sharpness. They also avoid adding unnecessary brightness and improve the visualization of circumscribed skin. These algorithms are possibly the most applicable for dermoscopic image assessment and classification.

In this work, a contrast and detail enhancement algorithm was presented. The proposed algorithm is based on multi-scale morphological operations. The extraction of features from the medical images is performed by combining the operations of the classic top-hat with the top-hat by reconstruction. This combination of operations is used at multiple scales, which are finally added to the image in a strategic way to enhance the useful features of the image, such as details and edges.

The numerical and visual results show that MGRTH improves the contrast of melanoma images according to the REC metric and is superior to the compared algorithms in improving image details according to the E metric. Among the compared multiscale top-hat transform-based algorithms, MGRTH is the second best at introducing low distortion during detail and contrast enhancement.

For future work this algorithm may be useful for preprocessing images before using deep learning applications for segmentation, detection or classification purposes.

Conceptualization, J.C.M.-R.; methodology, J.C.M.-R.; software, J.C.M.-R.; validation, J.L.V.N., H.L.-A., J.F., L.A.S.T.; formal analysis, M.G.-T., D.P.P.-R. and J.D.M.-R.; investigation, J.C.M.-R.; resources, J.L.V.N.; data curation, J.C.M.-R., L.R.B.P. and D.N.L.C.; writing—original draft preparation, J.C.M.-R., J.L.V.N., M.G.-T., J.F.; writing—review and editing, J.L.V.N., M.G.-T., H.L.-A., J.F., D.P.P.-R. and J.D.M.-R.; visualization, S.A.G., L.S.R., L.A.S.T.; supervision, H.L.-A.; project administration, J.L.V.N.; funding acquisition, H.L.-A., J.L.V.N. All authors have read and agreed to the published version of the manuscript.

This research was funded by CONACYT-Paraguay grant numbers PINV18-1199 and POSG17-53.

Not applicable.

Not applicable.

The database used in this work contains 236 color images of benign and malignant melanocytic lesions and used in [33].

The authors declare no conflict of interest.

- Liu, H.; Wang, Z. Perceptual Quality Assessment of Medical Images. In Encyclopedia of Biomedical Engineering; Elsevier: Amsterdam, The Netherlands, 2019; pp. 588–596.
- Sequeira, A.; Joao, A.; Tiago, J.; Gambaruto, A. Computational advances applied to medical image processing: An update. Open Access Bioinform. 2016, 8, 1–15.
- Gonzalez, R.; Woods, R. Digital Image Processing, 2nd ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 2002; Volume 793.
- Zhang, G.; Yan, P.; Zhao, H.; Zhang, X. A Contrast Enhancement Algorithm for Low-Dose CT Images Based on Local Histogram Equalization. In Proceedings of the 2008 2nd International Conference on Bioinformatics and Biomedical Engineering, Shanghai, China, 16–18 May 2008.
- Veluchamy, M.; Subramani, B. Image contrast and color enhancement using adaptive gamma correction and histogram equalization. Optik 2019, 183, 329–337.
- Kim, Y.T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans. Consum. Electron. 1997, 43, 1–8.
- Wang, Y.; Chen, Q.; Zhang, B. Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Trans. Consum. Electron. 1999, 45, 68–75.
- Chen, S.D.; Ramli, A. Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans. Consum. Electron. 2003, 49, 1310–1319.
- Pineda, I.A.B.; Caballero, R.D.M.; Silva, J.J.C.; Román, J.C.M.; Noguera, J.L.V. Quadri-histogram equalization using cutoff limits based on the size of each histogram with preservation of average brightness. Signal Image Video Process. 2019, 13, 843–851.
- Bai, X.; Zhou, F.; Xue, B. Image enhancement using multi scale image features extracted by top-hat transform. Opt. Laser Technol. 2012, 44, 328–336.
- Hassanpour, H.; Samadiani, N.; Salehi, S.M. Using morphological transforms to enhance the contrast of medical images. Egypt. J. Radiol. Nucl. Med. 2015, 46, 481–489.
- Arya, A.; Bhateja, V.; Nigam, M.; Bhadauria, A.S. Enhancement of Brain MR-T1/T2 Images Using Mathematical Morphology. In Information and Communication Technology for Sustainable Development; Springer: Singapore, 2019; pp. 833–840.
- Román, J.C.M.; Noguera, J.L.V.; Legal-Ayala, H.; Pinto-Roa, D.; Gomez-Guerrero, S.; García-Torres, M. Entropy and Contrast Enhancement of Infrared Thermal Images Using the Multiscale Top-Hat Transform. Entropy 2019, 21, 244.
- Soille, P. Morphological Image Analysis; Springer: Berlin/Heidelberg, Germany, 2004.
- Damian, F.A.; Moldovanu, S.; Dey, N.; Ashour, A.S.; Moraru, L. Feature Selection of Non-Dermoscopic Skin Lesion Images for Nevus and Melanoma Classification. Computation 2020, 8, 41.
- Aswini, S.; Suresh, A.; Priya, S.; Krishna, B.V.S. Retinal Vessel Segmentation Using Morphological Top Hat Approach On Diabetic Retinopathy Images. In Proceedings of the 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB), Chennai, India, 27–28 February 2018.
- Mukhopadhyay, S.; Chanda, B. A multiscale morphological approach to local contrast enhancement. Signal Process. 2000, 80, 685–696.
- Peng, B.; Wang, Y.; Yang, X. A Multiscale Morphological Approach to Local Contrast Enhancement for Ultrasound Images. In Proceedings of the 2010 International Conference on Computational and Information Sciences, Chengdu, China, 17–19 December 2010.
- Kamra, A.; Jain, V.K. Enhancement of subtle signs in mammograms using multiscale morphological approach. In Proceedings of the 2013 IEEE Point-of-Care Healthcare Technologies (PHT), Bangalore, India, 16–18 January 2013.
- Bai, X. Morphological feature extraction for detail maintained image enhancement by using two types of alternating filters and threshold constrained strategy. Optik 2015, 126, 5038–5043.
- Wang, G.; Wang, J.; Li, M.; Zheng, Y.; Wang, K. Hand Vein Image Enhancement Based on Multi-Scale Top-Hat Transform. Cybern. Inf. Technol. 2016, 16, 125–134.
- Landini, G.; Galton, A.; Randell, D.; Fouad, S. Novel applications of discrete mereotopology to mathematical morphology. Signal Process. Image Commun. 2019, 76, 109–117.
- Román, J.C.M.; Escobar, R.; Martínez, F.; Noguera, J.L.V.; Legal-Ayala, H.; Pinto-Roa, D.P. Medical Image Enhancement With Brightness and Detail Preserving Using Multiscale Top-hat Transform by Reconstruction. Electron. Notes Theor. Comput. Sci. 2020, 349, 69–80.
- Román, J.C.M.; Fretes, V.R.; Adorno, C.G.; Silva, R.G.; Noguera, J.L.V.; Legal-Ayala, H.; Mello-Román, J.D.; Torres, R.D.E.; Facon, J. Panoramic Dental Radiography Image Enhancement Using Multiscale Mathematical Morphology. Sensors 2021, 21, 3110.
- Nasr-Esfahani, E.; Samavi, S.; Karimi, N.; Soroushmehr, S.; Ward, K.; Jafari, M.; Felfeliyan, B.; Nallamothu, B.; Najarian, K. Vessel extraction in X-ray angiograms using deep learning. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016.
- Wang, Y.L.; Mu, S.S. Edge Detection Algorithm Based on the Top-hat Operator. In DEStech Transactions on Computer Science and Engineering; DEStech Publications: Lancaster, PA, USA, 2017.
- Bai, X. Morphological image fusion using the extracted image regions and details based on multi-scale top-hat transform and toggle contrast operator. Digit. Signal Process. 2013, 23, 542–554.
- Fazlali, H.R.; Karimi, N.; Soroushmehr, S.M.R.; Shirani, S.; Nallamothu, B.K.; Ward, K.R.; Samavi, S.; Najarian, K. Vessel segmentation and catheter detection in X-ray angiograms using superpixels. Med. Biol. Eng. Comput. 2018, 56, 1515–1530.
- Sengar, N.; Dutta, M.K. Automated method for hierarchal detection and grading of diabetic retinopathy. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2017, 1163, 1–11.
- Atli, İ.; Gedik, O.S. Sine-Net: A fully convolutional deep learning architecture for retinal blood vessel segmentation. Eng. Sci. Technol. Int. J. 2021, 24, 271–283.
- Zuiderveld, K. Contrast Limited Adaptive Histogram Equalization. In Graphic Gems IV; Elsevier: Amsterdam, The Netherlands, 1994; pp. 474–485.
- Lantuejoul, C.; Maisonneuve, F. Geodesic methods in quantitative image analysis. Pattern Recognit. 1984, 17, 177–187.
- Beuren, A.T.; Janasieivicz, R.; Pinheiro, G.; Grando, N.; Facon, J. Skin melanoma segmentation by morphological approach. In Proceedings of the International Conference on Advances in Computing, Communications and Informatics—ICACCI ’12, Chennai, India, 3–5 August 2012; ACM Press: New York, NY, USA, 2012.
- Schneider, C.A.; Rasband, W.S.; Eliceiri, K.W. NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 2012, 9, 671–675.
- Legland, D.; Arganda-Carreras, I.; Andrey, P. MorphoLibJ: Integrated library and plugins for mathematical morphology with ImageJ. Bioinformatics 2016, 32, 3532–3534.
- Zhao, C.; Wang, Z.; Li, H.; Wu, X.; Qiao, S.; Sun, J. A new approach for medical image enhancement based on luminance-level modulation and gradient modulation. Biomed. Signal Process. Control 2019, 48, 189–196.
- Salem, N.; Malik, H.; Shams, A. Medical image enhancement based on histogram algorithms. Procedia Comput. Sci. 2019, 163, 300–311.
- Joseph, J.; Periyasamy, R. A fully customized enhancement scheme for controlling brightness error and contrast in magnetic resonance images. Biomed. Signal Process. Control
**2018**, 39, 271–283. [Google Scholar] [CrossRef] - Bai, X. Image enhancement through contrast enlargement using the image regions extracted by multiscale top-hat by reconstruction. Optik
**2013**, 124, 4421–4424. [Google Scholar] [CrossRef] - Walpole, R.E.; Myers, R.H.; Myers, S.L.; Cruz, R. Probabilidad y Estadística; McGraw-Hill: Mexico City, Mexico, 1992; Volume 624. [Google Scholar]

| Algorithm | Initial Structuring Element (Disk) $H$ | Number of Iterations $n$ | Contrast Setting Weight $\omega$ |
|---|---|---|---|
| MGRTH | 1 | [2–20] | [0.25, 0.50, 0.75] |

| Algorithms | Initial Structuring Element (Disk) $H$ | Initial Structuring Element (Disk) $H'$ | Number of Iterations $n$ | Contrast Setting Weight $\omega$ |
|---|---|---|---|---|
| MGRTH | 1 | - | [2–20] | 0.25 |
| GRMMCE [23] | 1 | - | [2–20] | 1 |
| MMALCER [39] | 1 | - | [2–20] | 0.5 |
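To make the parameter settings above concrete, the following is a minimal sketch of a multiscale top-hat enhancement by reconstruction with a disk structuring element of initial size $H=1$, $n$ scales, and contrast weight $\omega$, as described in the abstract (bright features added, dark features subtracted). The helper names (`mgrth_enhance`, `disk`, `reconstruct_by_dilation`, `reconstruct_by_erosion`) are hypothetical, and the exact MGRTH formulation in the paper may combine scales differently:

```python
import numpy as np
from scipy import ndimage as ndi

def disk(radius):
    """Boolean disk-shaped structuring element of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def reconstruct_by_dilation(seed, mask, footprint):
    """Geodesic dilation of `seed` under `mask`, iterated until stability."""
    prev = seed
    while True:
        cur = np.minimum(ndi.grey_dilation(prev, footprint=footprint), mask)
        if np.array_equal(cur, prev):
            return cur
        prev = cur

def reconstruct_by_erosion(seed, mask, footprint):
    """Geodesic erosion of `seed` above `mask`, iterated until stability."""
    prev = seed
    while True:
        cur = np.maximum(ndi.grey_erosion(prev, footprint=footprint), mask)
        if np.array_equal(cur, prev):
            return cur
        prev = cur

def mgrth_enhance(image, n=5, omega=0.25):
    """Sketch: add white top-hats by reconstruction (local bright features)
    and subtract black top-hats by reconstruction (local dark features),
    accumulated over disk scales 1..n and weighted by omega."""
    img = image.astype(np.float64)
    bright = np.zeros_like(img)
    dark = np.zeros_like(img)
    unit = disk(1)  # elementary SE used for the geodesic steps
    for i in range(1, n + 1):
        se = disk(i)
        # Opening by reconstruction -> white top-hat (bright features)
        opened = reconstruct_by_dilation(ndi.grey_erosion(img, footprint=se), img, unit)
        bright = np.maximum(bright, img - opened)
        # Closing by reconstruction -> black top-hat (dark features)
        closed = reconstruct_by_erosion(ndi.grey_dilation(img, footprint=se), img, unit)
        dark = np.maximum(dark, closed - img)
    return np.clip(img + omega * bright - omega * dark, 0, 255)
```

Because the top-hats are computed by reconstruction rather than by plain opening/closing, only whole connected bright or dark structures are extracted, which avoids the shape distortion of the classical top-hat.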

| Algorithms | E | REC | PSNR |
|---|---|---|---|
| I | 6.581 | - | - |
| MGRTH | 6.838 | 1.049 | 28.018 |
| GRMMCE | 6.782 | 1.054 | 28.109 |
| MMALCER | 6.797 | 1.044 | 29.032 |
| HE | 6.408 | 1.249 | 12.420 |
| BBHE | 6.444 | 1.241 | 17.191 |
| MMBEBHE | 6.411 | 1.186 | 20.223 |
| QHELC | 6.554 | 1.024 | 38.342 |
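Two of the metrics in the table above have standard definitions that can be sketched directly, assuming E is the Shannon entropy of the gray-level histogram (in bits) and PSNR the usual peak signal-to-noise ratio for 8-bit images; the REC metric is defined in the body of the paper and is not reproduced here:

```python
import numpy as np

def entropy(image, bins=256):
    """Shannon entropy (bits) of the gray-level histogram over [0, 255]."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]  # ignore empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

def psnr(original, enhanced, peak=255.0):
    """Peak signal-to-noise ratio (dB) between original and enhanced images."""
    mse = np.mean((original.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))
```

Higher E indicates richer detail, while a higher PSNR indicates the enhanced image stays closer to the input, which is why the strongest equalizers (HE, BBHE) score lowest on PSNR.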

| Algorithms | Statistic | E | REC | PSNR |
|---|---|---|---|---|
| MGRTH-I | Negative ranks | 33 | - | - |
| | Positive ranks | 203 | - | - |
| | Z | −11.55 | - | - |
| | Sig. asymptotic (bilateral) | ≈0 | - | - |
| MGRTH-GRMMCE | Negative ranks | 40 | 160 | 133 |
| | Positive ranks | 196 | 76 | 103 |
| | Z | −10.871 | −6.987 | −1.63 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | 0.103 |
| MGRTH-MMALCER | Negative ranks | 32 | 29 | 232 |
| | Positive ranks | 204 | 207 | 4 |
| | Z | −10.188 | −11.524 | −13.267 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | ≈0 |
| MGRTH-HE | Negative ranks | 6 | 218 | 4 |
| | Positive ranks | 230 | 18 | 232 |
| | Z | −13.228 | −12.981 | −13.287 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | ≈0 |
| MGRTH-BBHE | Negative ranks | 8 | 214 | 15 |
| | Positive ranks | 228 | 22 | 221 |
| | Z | −13.156 | −12.789 | −12.698 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | ≈0 |
| MGRTH-MMBEBHE | Negative ranks | 9 | 212 | 30 |
| | Positive ranks | 227 | 24 | 206 |
| | Z | −13.167 | −12.547 | −11.935 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | ≈0 |
| MGRTH-QHELC | Negative ranks | 24 | 48 | 232 |
| | Positive ranks | 212 | 188 | 4 |
| | Z | −12.182 | −10.467 | −13.303 |
| | Sig. asymptotic (bilateral) | ≈0 | ≈0 | ≈0 |
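The negative/positive ranks, Z statistic, and bilateral asymptotic significance reported above match the output of a Wilcoxon signed-rank test over paired per-image scores. A minimal sketch with `scipy.stats.wilcoxon` follows; the synthetic arrays stand in for the actual per-image metric values of two methods over the 236-image database and are illustrative only:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Illustrative paired per-image entropy scores for two hypothetical methods
e_method_a = rng.normal(6.8, 0.2, 236)
e_method_b = e_method_a - rng.normal(0.05, 0.1, 236)  # method A slightly higher

# Two-sided Wilcoxon signed-rank test on the paired differences
stat, p_value = wilcoxon(e_method_a, e_method_b)

# Sign counts analogous to the "Negative ranks" / "Positive ranks" rows
diff = e_method_a - e_method_b
neg_ranks = int(np.sum(diff < 0))
pos_ranks = int(np.sum(diff > 0))
```

A p-value reported as ≈0 means the paired difference is significant at any conventional level; the one non-significant cell in the table (PSNR for MGRTH vs. GRMMCE, p = 0.103) indicates those two methods are statistically indistinguishable on that metric.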

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).