Article

Endoscopic Image Enhancement: Wavelet Transform and Guided Filter Decomposition-Based Fusion Approach

Shiva Moghtaderi, Omid Yaghoobian, Khan A. Wahid and Kiven Erique Lukong
1 Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
2 Department of Biochemistry, Microbiology and Immunology, University of Saskatchewan, Saskatoon, SK S7N 5E5, Canada
* Author to whom correspondence should be addressed.
J. Imaging 2024, 10(1), 28; https://doi.org/10.3390/jimaging10010028
Submission received: 11 December 2023 / Revised: 16 January 2024 / Accepted: 18 January 2024 / Published: 20 January 2024
(This article belongs to the Section Medical Imaging)

Abstract

Endoscopies are helpful for examining internal organs, including the gastrointestinal tract. The endoscope consists of a flexible tube to which a camera and light source are attached. The diagnostic process heavily depends on the quality of the endoscopic images, so their visual quality has a significant effect on patient care, medical decision-making, and the efficiency of endoscopic treatments. In this study, we propose an endoscopic image enhancement technique based on image fusion. Our method aims to improve the visual quality of endoscopic images by first generating, from the single input image, multiple sub images that are complementary to one another in terms of local and global contrast. Each sub layer is then subjected to a novel wavelet transform and guided filter-based decomposition technique. Finally, appropriate fusion rules are applied to generate the improved image. We tested our approach on a set of upper gastrointestinal tract endoscopic images to confirm its efficacy. Both qualitative and quantitative analyses show that the proposed framework performs better than some of the state-of-the-art algorithms.

1. Introduction

Endoscopy is a nonsurgical medical procedure for inspecting the structure of tissue and lesions of the human digestive tract with high accuracy [1]. Physicians use endoscopy in different parts of the body, such as the esophagus, stomach, and colon, to diagnose gastrointestinal bleeding, inflammatory diseases, and polyps [2]. Endoscopy is performed with a flexible tube that has an LED light source and camera attached to it [3]. On a monitor, the physician can view images of the gastrointestinal system. In an upper endoscopy, an endoscope is smoothly inserted through the mouth into the esophagus. Likewise, endoscopes can also pass through the rectum into the colon to examine the lower gastrointestinal (GI) tract.
Endoscopic image visual quality is an important aspect of early lesion detection and surgical treatments. This approach, however, has some limitations that may adversely affect the examination and diagnosis process. Inadequate brightness and contrast and blurred details might result from poor camera quality and inconsistent lighting from the single illumination source [4,5]. Furthermore, endoscopic images may sometimes have bright reflections on a mucus layer, which can cause the imaging performance to drastically decline [6]. The situation deteriorates with capsule endoscopy, primarily due to constraints in power and the capsule’s volume [7]. Thus, image processing techniques must be applied to endoscopic images in order to highlight the details and important features for ease of study in clinical settings [8,9].
To enhance the quality of medical images, numerous image enhancement techniques have been proposed. One popular approach is image fusion, described as the process of improving an image’s resolution by combining numerous copies of the image with previously recorded data that are notably distinct from one another [10]. In the domains of image processing and computer vision, multi-exposure image fusion is becoming a prominent area of study because it can merge images with different exposure levels into a high-quality, fully exposed image [11]. From several images with various exposure settings, multi-exposure image fusion seeks to create an image with the most beneficial visual information. These approaches are usually called HDR (high dynamic range) techniques and involve capturing multiple images of the same scene at different exposure levels. Typically, HDR techniques include taking at least three photos: one underexposed (capturing details in bright areas), one overexposed (capturing details in dark areas), and one properly exposed. These images are then merged using specialized software or techniques to create a single high-quality image that contains a broader range of tones, colors, and details [12]. Xu et al. presented a technique for fusing multiple exposure images based on the tensor product and tensor singular value decomposition (t-SVD) [13]. In [14], an enhanced weighted guided filtering algorithm is utilized to improve tissue visualization in endoscopic images: vessel features and contours are enhanced using an unsharp mask algorithm and an improved weighted guided filter. Furthermore, Tan et al. suggested an algorithm for improving endoscopic images that decomposes the input image into a detail layer and a base layer based on noise reduction [15]. In the detail layer, the blood vessel data are channel-extended, and in the base layer, adaptive brightness correction is used; fusion is then performed to obtain the improved endoscopic image. Wang and colleagues [16] suggest a technique for enhancing image uniformity and luminance while reducing overexposure; it generates an adaptive brightness weighting that can be applied to improve the luminance of the endoscopic image. In a 2018 study, Xia et al. proposed an image-enhancing technique for endoscopic images with effective noise suppression capability [17]. The algorithm first identifies the various illumination zones and then treats the illumination and detail layers individually.
An endoscopic image enhancement method based on histogram equalization and unsharp masking in the wavelet domain has also been reported [18]. It can disclose details in endoscopic images with poor lighting: a logarithm-based histogram equalization approach adjusts the low-frequency wavelet components to improve contrast and prevent artifacts.
In this work, our goal is to improve endoscopic image quality for ease of study in clinical applications. To do so, three image correction methods are used to split the source image into several sub images. Finally, the fusion technique aids in the manipulation of image contrast, which improves image visual quality.
The primary contributions of this paper are outlined as follows:
  • We propose an approach to improve the visual quality of endoscopic images by taking advantage of artificially generated sub images and image fusion techniques. We combine three key enhancement methods: detail enhancing, CLAHE, and image brightening.
  • We introduce a multi-level wavelet transform and guided filter-based decomposition scheme that decomposes each intensity layer into four coefficients.
  • A weighted fusion rule based on local contrast and local entropy is proposed to fuse high-frequency components.
The paper is organized as follows: Our algorithm’s design is presented in Section 2; the experimental findings are shown in Section 3, along with a discussion of how well the suggested method works in Section 4. Future work is reported in Section 5, and Section 6 concludes the paper.

2. Materials and Methods

This work proposes an endoscopic image enhancement technique based on artificially generated sub-images and fusion schemes. It is worth mentioning that we work in the HSI (hue, saturation, and intensity) color space, a three-dimensional model that represents colors by their hue, saturation, and intensity components. This color space separates color information from brightness, allowing independent adjustment of color and intensity, which proves beneficial in image processing tasks. Its ability to maintain the original color information while enhancing image features makes it a preferred choice for preserving color fidelity in various applications, aligning well with human visual perception and aiding accurate analysis in fields such as endoscopic imaging [19].
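To make the color-space handling concrete, the following is a minimal sketch of the standard RGB-to-HSI conversion formulas; the paper does not specify its exact conversion routine, so the function name and the assumption of normalized float RGB input are illustrative only.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (float array in [0, 1], shape HxWx3) to HSI.

    A sketch of the standard RGB-to-HSI formulas; not the authors' exact code.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8  # avoids division by zero for black pixels

    intensity = (r + g + b) / 3.0
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)

    # Hue from the angular formulation of the HSI model.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2.0 * np.pi - theta)

    return np.stack([hue, saturation, intensity], axis=-1)
```

In the steps described below, the enhancement operates on the intensity channel, while hue and saturation carry the color information.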
A framework of the proposed model is illustrated in Figure 1. Three image correction methods are used to split the source image into several sub-images. Finally, the fusion technique aids in the manipulation of image contrast, which improves the visual quality of the image. This section provides a comprehensive description of the proposed method.

2.1. Generating Sub Images

Limited contrast, limited visibility, low dynamic range, and low signal-to-noise ratio are all characteristics of low-light images. Additionally, the true color of the target cannot be captured because the entire image is underexposed [20]. We begin the image enhancement process by first creating three sub images with different characteristics. The sub images are different versions of the original input image, generated using three image enhancement methods.
Among the multi-exposure image fusion methods developed in recent years [11,21], a common technique is to use gamma correction to create the derived multi-exposure images that serve as sub images.
Gamma correction is a nonlinear operation on the input image that results in a power-law relationship between the gray values of the output image and the input image [22]. In other words, gamma correction is used to modify the overall image intensity.
Gamma correction alters the overall image intensity by applying a power-law transform with exponent γ. As can be seen in Figure 2, when γ < 1 the dark intensities are stretched and details in the shadows are revealed, whereas γ > 1 stretches the bright range and emphasizes details in the highlights.
That is why researchers show interest in low-light image improvement using gamma correction and adjusting the reflected light on the object surface [23]. However, gamma correction may cause some problems as well. For example, as the light increases, some underexposed areas become visible, but areas that were previously well-exposed or overexposed deteriorate because of the global exposure adjustment [20]. To solve this issue, we perform three image enhancement methods on the original image to generate three different versions of our input image. By utilizing these methods, we aim to improve the contrast and enhance all the details in the dark and bright regions. This is mainly performed to have an even illumination at the end of the enhancement process.
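As a point of reference, here is a minimal gamma-correction sketch under the convention output = input^γ on a normalized intensity image; it illustrates the global exposure adjustment discussed above rather than the sub-image generation actually used in this work, and the example γ values are arbitrary.

```python
import numpy as np

def gamma_correct(intensity, gamma):
    """Power-law (gamma) correction of a normalized intensity image.

    With output = input ** gamma: gamma < 1 brightens and reveals shadow
    detail, gamma > 1 darkens and reveals highlight detail.
    """
    return np.clip(intensity, 0.0, 1.0) ** gamma

# Illustrative multi-exposure-style derived images from a single input:
# dark, normal, bright = (gamma_correct(img, g) for g in (2.0, 1.0, 0.5))
```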
To improve quality, we generate three sub images that are complementary to one another. We used detail enhancement, the contrast-limited adaptive histogram equalization (CLAHE) algorithm, and image brightening to generate sub images from a single input image. These three sub images are illustrated in Figure 3, together with their histograms, to help compare the general contrast and pixel distribution of the images.
CLAHE is used to generate the first sub image. It is based on breaking the image down into several almost equal-sized, non-overlapping areas and performing histogram equalization on each patch [24]; this improves the local contrast of bright spots. For the second sub image, we used image brightening to enhance the features of the dark areas and improve the contrast in darker regions. This is mainly performed based on an objective function built around image entropy [25,26,27]. The third sub image is retrieved using local Laplacian filtering, which uses straightforward processing to alter the image in an edge-aware manner [28].
Unlike HDR techniques, our approach does not rely on capturing multiple exposures of the same scene; instead, it works with the single input image using a combination of techniques.
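A rough sketch of this sub-image generation step is shown below. It uses OpenCV’s CLAHE and detail-enhancement filters and a simple gamma-based brightening as a stand-in for the entropy-driven brightening of [25,26,27]; all parameter values are illustrative assumptions, not the settings used in the paper.

```python
import cv2
import numpy as np

def generate_sub_images(intensity_u8):
    """Generate three complementary sub images from one intensity layer.

    intensity_u8: uint8 intensity channel (e.g., the I channel of HSI).
    Returns three float32 sub images in [0, 1]. A sketch under stated assumptions.
    """
    # Sub image 1: CLAHE for local contrast, especially in bright regions.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    sub_clahe = clahe.apply(intensity_u8).astype(np.float32) / 255.0

    # Sub image 2: brightened image to lift detail in dark regions
    # (a simple gamma < 1 is used here in place of the entropy-based method).
    sub_bright = (intensity_u8.astype(np.float32) / 255.0) ** 0.6

    # Sub image 3: edge-aware detail enhancement (a stand-in for the
    # local Laplacian filtering cited in the paper).
    bgr = cv2.cvtColor(intensity_u8, cv2.COLOR_GRAY2BGR)
    detail = cv2.detailEnhance(bgr, sigma_s=10, sigma_r=0.15)
    sub_detail = cv2.cvtColor(detail, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0

    return sub_clahe, sub_bright, sub_detail
```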

2.2. Image Decomposition Based on Multi Level Wavelet Transform and Guided Image Filtering (MLWTGF)

In the previous step, the source image was divided into multiple sub images. The following step is to decompose these three images into explanatory layers. One mathematical technique that has gained growing prominence for efficiently extracting image information is the wavelet transform [29]. By applying image decomposition based on wavelet transform theory, it is possible to extract an image’s information relating to the horizontal, vertical, and diagonal directions. The coefficients resulting from the wavelet transform are LL, LH, HL, and HH. LL represents the source image’s approximation coefficient, while the others are detail coefficients [30].
We then use the detail coefficients as the guidance images for the guided filter to enhance the edges and structural information. The block diagram of the proposed decomposition scheme is demonstrated in Figure 4. The intensity layer of each input image is enhanced through the guided filter.
An example output image of the guided filter using our proposed decomposition scheme is shown in Figure 5. We have used the detail coefficients, which result from the wavelet transform, as the guidance images. The goal is to efficiently transfer the structural details to the resulting filtered image. As can be seen from the intensity layer and filtered sub images, significant horizontal, vertical, and diagonal features are effectively transferred from the corresponding guidance image (cHn).
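The sketch below illustrates one possible reading of this decomposition: a single-level 2-D DWT of an intensity layer, followed by guided filtering of that layer with each detail coefficient as the guidance image. It relies on PyWavelets and the guided filter from opencv-contrib; the wavelet choice ('haar') and the radius/eps values are assumptions, not parameters reported in the paper.

```python
import cv2
import numpy as np
import pywt

def mlwtgf_decompose(intensity, wavelet="haar", radius=8, eps=1e-3):
    """Sketch of the MLWTGF idea: wavelet coefficients plus guided-filter
    outputs steered by each detail coefficient.

    intensity: float32 array in [0, 1]. Requires opencv-contrib-python
    for cv2.ximgproc.guidedFilter.
    """
    # Single-level 2-D discrete wavelet transform: approximation (LL)
    # and horizontal/vertical/diagonal detail coefficients.
    LL, (LH, HL, HH) = pywt.dwt2(intensity, wavelet)

    # Use each detail coefficient as the guidance image so that horizontal,
    # vertical, and diagonal structures are transferred to the filtered output.
    small = cv2.resize(intensity, (LL.shape[1], LL.shape[0])).astype(np.float32)
    filtered = []
    for coeff in (LH, HL, HH):
        guide = cv2.normalize(coeff.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
        filtered.append(cv2.ximgproc.guidedFilter(guide, small, radius, eps))

    return LL, (LH, HL, HH), filtered
```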

2.3. Image Fusion

Employing the above-mentioned decomposition approach, the sub layers containing rich structural details (LH, HL, and HH) and background information (LL) are generated. Proper fusion rules should then be applied to the components captured from the three input images, and the fusion strategy is selected based on each component’s characteristics. Most of the approximation information (the background) from the input images is presented in the LL components, which are captured from the low-frequency layers. Thus, the maximum-value fusion approach is applied to make sure that more texture-related features are preserved (Equation (1)).
$$A_{\mathrm{Fused}} = \max(A_1, A_2, A_3) \quad (1)$$
Detail components contain the edge, corner, and structure information of the input source images. A weighted fusion rule is chosen to fuse the high-frequency components. In weighted fusion methods, the coefficients of different local areas are given varying weights [31]; the weights denote the relative significance of each combined image.
The choice of weights is fundamental since it directly affects the fused image, and selecting unsuitable weights will result in unstable algorithm performance [32]. We have considered two parameters for the weighting function: local contrast and local entropy. In a 3 × 3 neighborhood, local contrast is calculated between the centered cell and the surrounding cells [33]; in other words, local contrast measures how much a pixel varies from its surrounding pixels. On the other hand, local entropy is a metric for information density [34]. The input image’s texture can be described using entropy, a statistical indicator of randomness [35].
For each 3 × 3 neighborhood in the fusion input images, we obtain the local contrast and local entropy. These regional characteristics provide a quantitative analysis of pixel intensity variations in an image. At this point, we allocate weights to the fusion’s input images. In general, a larger weight should be given to the patch with more details and better contrast. The weights are assigned based on prioritized local contrast and local entropy. We can control the trade-off between contrast and entropy by modifying the weighting parameters, so that the fused image has the desired level of detail preservation and contrast enhancement. To prioritize detail preservation, we have given higher weights to local entropy.
The weighting criteria for fusing two detail components based on local contrast and local entropy are formulated as follows:
$$W_{I_A} = \gamma_1 \cdot C_A + \gamma_2 \cdot E_A \quad (2)$$
$$W_{I_B} = \gamma_1 \cdot C_B + \gamma_2 \cdot E_B \quad (3)$$
where $\gamma_1$ and $\gamma_2$ indicate the weighting parameters. The fused sub image is obtained by the weighted fusion approach:
$$I_{\mathrm{Fused}} = \sum_{i}^{n} \left( W_{I_A} I_A + W_{I_B} I_B \right) \quad (4)$$
After extracting the four fused components, we perform the inverse wavelet transform to generate the final enhanced image.
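A compact sketch of these fusion rules is given below: an element-wise maximum over the LL components of the three sub images, and a local-contrast/local-entropy weighted combination of the detail components, followed by the inverse DWT. The 3 × 3 window matches the text above, but the histogram-based entropy estimate, the per-pixel weight normalization, and the example weighting parameters are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import generic_filter

def local_contrast(x):
    """Local contrast over a 3 x 3 window: deviation of the center pixel
    from the mean of its neighborhood."""
    def _contrast(w):
        return np.abs(w[len(w) // 2] - w.mean())
    return generic_filter(x, _contrast, size=3)

def local_entropy(x, bins=16):
    """Local entropy over a 3 x 3 window (coarse histogram estimate)."""
    lo, hi = float(x.min()), float(x.max())
    def _entropy(w):
        hist, _ = np.histogram(w, bins=bins, range=(lo, hi + 1e-8))
        p = hist / hist.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    return generic_filter(x, _entropy, size=3)

def fuse_and_reconstruct(decomps, g1=0.4, g2=0.6, wavelet="haar"):
    """Fuse wavelet coefficients of several sub images and invert the DWT.

    decomps: list of (LL, (LH, HL, HH)) tuples from pywt.dwt2.
    g1, g2: weighting parameters for local contrast and local entropy;
    entropy is weighted higher to prioritize detail preservation.
    """
    # Approximation coefficients: element-wise maximum, as in Equation (1).
    LL_fused = np.maximum.reduce([d[0] for d in decomps])

    # Detail coefficients: weighted combination, as in Equations (2)-(4);
    # the weights are normalized per pixel here for numerical stability.
    fused_details = []
    for band in range(3):
        coeffs = [d[1][band] for d in decomps]
        weights = [g1 * local_contrast(c) + g2 * local_entropy(c) for c in coeffs]
        total = np.sum(weights, axis=0) + 1e-8
        fused_details.append(sum(w * c for w, c in zip(weights, coeffs)) / total)

    # Inverse wavelet transform produces the enhanced intensity layer.
    return pywt.idwt2((LL_fused, tuple(fused_details)), wavelet)
```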

3. Results

We examined our architecture using a readily available endoscopic image collection of the gastrointestinal tract. The open-access Kvasir dataset contains images of the GI tract that highlight anatomical landmarks and pathological findings [36].
In this section, we evaluate how effectively our suggested framework performs. Our methodology has been compared to four other image enhancement techniques: an enhancing method for weakly illuminated images [37], endoscopic image luminance enhancement [16], an enhancement method for correcting low-illumination images [38], and LIME [39]. All of these methods similarly operate on different sub images derived from the input image.
The enhanced images for the four comparison approaches were generated using publicly available code. All experiments were run in MATLAB (R2023a) on a computer with an 11th Gen Intel(R) Core(TM) i7 (3.00 GHz) and 16.0 GB of RAM.
To assess the method’s efficiency, we conduct subjective and objective assessments in our experiments. Furthermore, to evaluate how applicable our method is, we have designed a scoring system. The doctors were asked to grade the images on a scale of 1 to 5 (1: Poor/2: Average/3: Good/4: Very good/5: Excellent).

3.1. Qualitative Analysis

Physicians mostly use endoscopic images to analyze and interpret artery walls and organ tissues gathered from patients [15]. That is why visual comparison of improved images is essential. This section reports the image enhancement results compared with other methods. In Figure 6, the input image demonstrates the Z line between the esophagus and the stomach. We have tried to enhance the input image’s visualization with five different methods.
As can be seen, there appears to be a lack of contrast in Figure 6b–d; the brightness and clarity improve in general, but some information is lost, especially in brighter areas. It can be verified that our suggested strategy is more effective than the previous publications in terms of improving visual quality and highlighting details. The proposed enhancing strategy improves image contrast in the normal brightness area, while details are highlighted in the dark section as well. Also, the output images show no signs of noise, over-enhancement, or color distortion. This demonstrates that our recommended algorithm is appropriate for low-light image enhancing applications.
In Figure 7, the input image contains a polyp and blood vessels. The enhanced image must improve the general contrast while emphasizing the vessel details to fit the observer’s normal perceptive spectrum. The outputs in Figure 7b,e clearly have a better demonstration in darker areas. On the other hand, over-enhancement occurred in Figure 7c,d: the brightness of lighter regions is increased in a way that blood vessel information is lost. In Figure 7f, our proposed method’s output increases the visibility in darker areas and enhances details in all regions.
Figure 8 similarly shows improved image visualization in Figure 8f, with enhanced detail and overall contrast.
In order to strengthen our evaluation, we contacted two skilled medical professionals who regularly perform endoscopy procedures. The physicians were given a collection of images, including those produced by our proposed method as well as by the algorithms of other researchers. The doctors were asked to grade the images on a scale of 1 to 5 (1: Poor/2: Average/3: Good/4: Very good/5: Excellent). The average ratings given by the human observers for ten test images are shown in Table 1, and all output images are provided in the Supplementary Materials. Our method’s outputs gain higher scores in comparison with the other four methods. In general, our suggested enhancement strategy improved the visual contrast and earned a favorable subjective evaluation by the professional observers. This is consistent with the claim that our enhancing algorithm can improve the general contrast and enhance the details.

3.2. Quantitative Analysis

In the following section, we compare the effects of the proposed strategy to existing methods using evaluation metrics. There are two primary ways to provide an objective evaluation of an image enhancement approach. The first is full-reference image quality metrics, which consider information from both the modified image and a reference image. The second is no-reference image quality metrics, which attempt to estimate perceptual quality based only on the output image [40]. However, due to the lack of a perfect reference image, this is a challenging task in many computer vision scenarios [41]. To illustrate the effectiveness of our method, six indexes have been selected from both categories.
Entropy: determines the fused image’s texture information.
CII (contrast improvement index): measures the extent of enhancement of contrast before and after image processing [5].
PIQE (perception-based image quality evaluator): a no-reference image quality metric that has an inverse relationship with an image’s perceived quality [42].
PCQI (patch-based contrast quality index): estimates the image’s overall contrast quality while simultaneously constructing a quality map that captures the local changes [43].
PSNR (peak signal-to-noise ratio): a byte-by-byte comparison of the two images without considering what they actually represent, hence it can only approximate the image quality as perceived by human observers [44]. The difference between the image before and after processing is reflected in the PSNR. The difference becomes smaller as the PSNR value increases.
SSIM (structural similarity index): with SSIM, the similarity of two images can be calculated based on brightness and contrast [45].
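For reproducibility, the entropy and full-reference measurements of this kind can be computed with standard tooling; the short sketch below shows entropy, PSNR, and SSIM, using scikit-image for the latter two and treating the input image as the reference, as in Tables 6 and 7. This is an assumed re-implementation (the experiments in the paper were run in MATLAB), and CII, PIQE, and PCQI are omitted here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_entropy(gray_u8):
    """Shannon entropy of an 8-bit grayscale image (texture information)."""
    hist, _ = np.histogram(gray_u8, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def quality_report(reference_u8, enhanced_u8):
    """Entropy of the enhanced image plus PSNR/SSIM against the input image."""
    return {
        "entropy": image_entropy(enhanced_u8),
        "psnr": peak_signal_noise_ratio(reference_u8, enhanced_u8, data_range=255),
        "ssim": structural_similarity(reference_u8, enhanced_u8, data_range=255),
    }
```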
In this section, the tables display the results of an objective evaluation of ten images that were enhanced using the various techniques. The top two results are highlighted in bold. Table 2 reports that our suggested algorithm and [39] have relatively higher entropy than the other methods. This confirms that these two methods can enhance visual contrast and provide information about the distribution of pixel intensities. In Table 3, we compare the CII to measure the extent of contrast enhancement before and after image processing. As can be seen, our proposed method shows significant contrast improvement that can compensate for the effects of poor camera quality and inconsistent lighting from the single illumination source.
Table 4 presents another no-reference image quality metric, PIQE, which places an emphasis on perceptual quality evaluation. Our method and [37] achieve the top two results among the 10 test images, supporting these two methods’ ability to produce improved visual experiences. Table 5 reports PCQI, a strong patch-based index, demonstrating the methods’ capability for perceptually transforming the image’s information.
We have also reported the comparison results for two full-reference image quality measurements. For Table 6 and Table 7, we considered the input image as the reference image. This may not be the ideal way to evaluate enhancement efficiency, but it is common practice since a true reference image is not available. The outputs generated with our method have higher PSNR values, which indicates better visual fidelity of the reconstructed image. SSIM is also presented, measuring the similarity between the original and enhanced images based on brightness and contrast. Methods [16,37] and ours have relatively better SSIM values.

4. Discussion

To summarize, in this section we have reported the comparison results between our proposed method and other image enhancement approaches. While other image enhancement methods have shown promising results, there are still some limitations in terms of local contrast, detail preservation, and applicability for medical practitioners. To address these issues, this article suggests an alternative approach that consists of three parts: the first is image decomposition based on the wavelet transform and guided filter, which decomposes the input image while maintaining its details.
The second is image fusion, which combines different characteristics of the image’s sub layers, and the third is image reconstruction, which includes the inverse wavelet transform. Figure 9 reports the average value of each metric over the 10 images to give a better understanding of the results. The outputs generated with our method have relatively better performance. Overall, the findings demonstrate that our suggested methodology performs better than the other methods. The suggested method has an acceptable enhancement effect that raises the brightness of dark objects, improves clarity and color, and makes the images more congruent with human vision, which is advantageous to the diagnostic procedure.
It is worth mentioning that the inherent subjectivity in the process of image enhancement should be acknowledged. Factors such as the endoscopy device’s illumination and imaging technology play important roles in the original endoscopic image’s quality. We recognize that the interpretation of ‘best images’ can be subjective and influenced by individual expertise. However, we have tried to report a detailed description of our work. The paper’s focus is on increasing the visual quality of endoscopic images by taking advantage of artificially generated sub images, created using three well-known enhancement methods, and performing image fusion. Our suggested method consists of three main stages that have been explained in detail, which facilitates the reproducibility of our results and aims to enhance the applicability of our method across different clinical settings.

5. Future Work

As future work, we plan to utilize the improved images generated by our algorithm for the detection and segmentation of various abnormalities, such as polyps, in gastrointestinal (GI) tract endoscopic images. Since medical images often suffer from low contrast and blurred details, it is always challenging to distinguish between different structures. Techniques like ours can perform image enhancement as a preprocessing step by tackling common problems related to endoscopic images. Preprocessing is a pivotal step in medical image processing applications such as image segmentation and classification. For example, segmentation techniques seek to precisely distinguish the border of a polyp from the surrounding tissue in addition to detecting polyps. Monitoring the resulting polyp segmentations confirms that the least favorable segmentation outcomes are linked to lower quality input images or relatively harder-to-identify polyps [46,47]. In such cases, our reported algorithm can ensure that the input raw data are optimized for subsequent analysis, which may lead to more accurate and reliable results in the identification of regions of interest or abnormalities.

6. Conclusions

In this study, we introduced a method for enhancing endoscopic images. The first step is to generate three derived sub images from the single input image that are complementary to one another in terms of local and global contrast. By utilizing CLAHE, image brightening, and detail enhancing methods, we generate complementary sub images. We then use a novel multi-level wavelet transform and guided filter-based decomposition technique to decompose each sub layer. The necessary weighted fusion rules are then applied to produce the final improved image. The suggested technique increases the brightness of dark objects while enhancing their clarity and color, which is an acceptable enhancement effect. The proposed enhancing strategy improves image contrast in the normal brightness area, while details are highlighted in the dark section as well. Also, the output images show no signs of noise, over-enhancement, or color distortion. This demonstrates that our proposed strategy is appropriate for low-light image enhancing applications.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jimaging10010028/s1. Figures S1–S10: Comparison of enhanced images from the Kvasir dataset.

Author Contributions

Conceptualization, S.M.; methodology, S.M.; software, S.M. and O.Y.; validation, K.A.W.; formal analysis, S.M. and O.Y.; investigation, S.M. and O.Y.; resources, S.M.; data curation, S.M.; writing—original draft preparation, S.M.; writing—review and editing, O.Y., K.A.W. and K.E.L.; visualization, S.M.; supervision, K.A.W. and K.E.L.; project administration, S.M.; funding acquisition, K.A.W. All authors have read and agreed to the published version of the manuscript.

Funding

The New Frontiers in Research Fund Exploration (NFRFE).

Institutional Review Board Statement

We were not required to complete an ethical assessment prior to conducting this research.

Informed Consent Statement

We were not required to complete an ethical assessment prior to conducting this research.

Data Availability Statement

Acknowledgments

We are grateful to Mina Niazi from Royal University Hospital and Danial Shakiba from Kermanshah University of Medical Sciences, who provided constructive support for our work.

Conflicts of Interest

The authors state that they have no conflicts of interest.

References

  1. Zheng, L.; Zheng, X.; Mu, Y.; Zhang, M.; Liu, G. Color-guided deformable convolution network for intestinal metaplasia severity classification using endoscopic images. Phys. Med. Biol. 2023, 68, 18. [Google Scholar]
  2. Liedlgruber, M.; Uhl, A. Computer-aided decision support systems for endoscopy in the gastrointestinal tract: A review. IEEE Rev. Biomed. Eng. 2011, 4, 73–88. [Google Scholar] [CrossRef] [PubMed]
  3. Chakravarthy, S.; Balakuntala, M.V.; Rao, A.M.; Thakur, R.K.; Ananthasuresh, G. Development of an integrated haptic system for simulating upper gastrointestinal endoscopy. Mechatronics 2018, 56, 115–131. [Google Scholar] [CrossRef]
  4. Huang, D.; Liu, J.; Zhou, S.; Tang, W. Deep unsupervised endoscopic image enhancement based on multi-image fusion. Comput. Methods Programs Biomed. 2022, 221, 106800. [Google Scholar] [CrossRef] [PubMed]
  5. Zhang, J.; Han, M.; Dai, Y. Three-dimensional porous structure reconstruction for low-resolution monocular endoscopic images. Opt. Precis. Eng. 2020, 28, 2085–2095. [Google Scholar] [CrossRef]
  6. Chong, Z.; Liu, Y.; Wang, K.; Tian, J. Specular highlight removal for endoscopic images using partial attention network. Phys. Med. Biol. 2023, 68, 225009. [Google Scholar]
  7. Ahmed, M.; Farup, I.; Pedersen, M.; Hovde, Ø.; Yildirim Yayilgan, S. Stochastic capsule endoscopy image enhancement. J. Imaging 2018, 4, 75. [Google Scholar]
  8. Ezatian, R.; Khaledyan, D.; Jafari, K.; Heidari, M.; Khuzani, A.Z.; Mashhadi, N. Image quality enhancement in wireless capsule endoscopy with adaptive fraction gamma transformation and unsharp masking filter. In Proceedings of the IEEE Global Humanitarian Technology Conference (GHTC), Seattle, WA, USA, 29 October–1 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–7. [Google Scholar]
  9. Long, M.; Li, Z.; Xie, X.; Li, G.; Wang, Z. Adaptive image enhancement based on guide image and fraction-power transformation for wireless capsule endoscopy. IEEE Trans. Biomed. Circuits Syst. 2018, 12, 993–1003. [Google Scholar] [CrossRef]
  10. Choudhary, G.; Sethi, D. Mathematical modeling and simulation of multi-focus image fusion techniques using the effect of image enhancement criteria: A systematic review and performance evaluation. Artif. Intell. Rev. 2023, 56, 13787–13839. [Google Scholar] [CrossRef]
  11. Xu, F.; Liu, J.; Song, Y.; Sun, H.; Wang, X. Multi-exposure image fusion techniques: A comprehensive review. Remote Sens. 2022, 14, 771. [Google Scholar] [CrossRef]
  12. McCann, J.; Rizzi, A. The Art and Science of HDR Imaging; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  13. Xu, K.; Wang, Q.; Xiao, H.; Liu, K. Multi-Exposure Image Fusion Algorithm Based on Improved Weight Function. Front. Neurorobot. 2022, 16, 846580. [Google Scholar] [CrossRef] [PubMed]
  14. Zhang, G.; Lin, J.; Cao, E.; Pang, Y.; Sun, W. A medical endoscope image enhancement method based on improved weighted guided filtering. Mathematics 2022, 10, 1423. [Google Scholar] [CrossRef]
  15. Tan, W.; Xu, C.; Lei, F.; Fang, Q.; An, Z.; Wang, D.; Han, J.; Qian, K.; Feng, B. An endoscope image enhancement algorithm based on image decomposition. Electronics 2022, 11, 1909. [Google Scholar] [CrossRef]
  16. Wang, L.; Wu, B.; Wang, X.; Zhu, Q.; Xu, K. Endoscopic image luminance enhancement based on the inverse square law for illuminance and retinex. Int. J. Med. Robot. Comput. Assist. Surg. 2022, 18, e2396. [Google Scholar] [CrossRef]
  17. Xia, W.; Chen, E.; Peters, T. Endoscopic image enhancement with noise suppression. Healthc. Technol. Lett. 2018, 5, 154–157. [Google Scholar] [CrossRef]
  18. Long, M.; Xie, X.; Li, G.; Wang, Z. Wireless capsule endoscopic image enhancement method based on histogram correction and unsharp masking in wavelet domain. In Proceedings of the 2019 17th IEEE International New Circuits and Systems Conference (NEWCAS), Munich, Germany, 23–26 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4. [Google Scholar]
  19. Li, C.; Tang, S.; Yan, J.; Zhou, T. Low-light image enhancement via pair of complementary gamma functions by fusion. IEEE Access 2020, 8, 169887–169896. [Google Scholar] [CrossRef]
  20. Feng, X.; Li, J.; Hua, Z.; Zhang, F. Low-light image enhancement based on multi-illumination estimation. Appl. Intell. 2021, 51, 5111–5131. [Google Scholar] [CrossRef]
  21. Qu, L.; Liu, S.; Wang, M.; Song, Z. Rethinking multi-exposure image fusion with extreme and diverse exposure levels: A robust framework based on Fourier transform and contrastive learning. Inf. Fusion 2023, 92, 389–403. [Google Scholar] [CrossRef]
  22. Li, F.; Gang, R.; Li, C.; Li, J.; Ma, S.; Liu, C.; Cao, Y. Gamma-enhanced spatial attention network for efficient high dynamic range imaging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 1032–1040. [Google Scholar]
  23. Maurya, L.; Lohchab, V.; Mahapatra, K.; Abonyi, J. Contrast and brightness balance in image enhancement using Cuckoo Search-optimized image fusion. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 7247–7258. [Google Scholar] [CrossRef]
  24. Pizer, S.; Amburn, E.; Austin, J.; Cromartie, R. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
  25. Dong, X.; Pang, Y.; Wen, J. Fast efficient algorithm for enhancement of low lighting video. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Barcelona, Spain, 11–15 July 2011; pp. 1–6. [Google Scholar]
  26. He, K. Single Image Haze Removal Using Dark Channel Prior. Ph.D. Thesis, The Chinese University of Hong Kong, Hong Kong, 2011. [Google Scholar]
  27. Park, D.; Park, H.; Han, D.K.; Ko, H. Single Image Dehazing with Image Entropy and Information Fidelity. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014. [Google Scholar]
  28. Paris, S.; Hasinoff, S.W.; Kautz, J. Local Laplacian filters: Edge-aware image processing with a Laplacian pyramid. ACM Trans. Graph. 2011, 30, 68. [Google Scholar] [CrossRef]
  29. Gao, J.; Wang, B.; Wang, Z.; Wang, Y.; Kong, F. A wavelet transform-based image segmentation method. Optik 2020, 208, 164123. [Google Scholar] [CrossRef]
  30. Aymaz, S.; Köse, C. A novel image decomposition-based hybrid technique with super-resolution method for multi-focus image fusion. Inf. Fusion 2019, 45, 113–127. [Google Scholar] [CrossRef]
  31. Wang, K.; Zheng, M.; Wei, H.; Qi, G.; Li, Y. Multi-modality medical image fusion using convolutional neural network and contrast pyramid. Sensors 2020, 20, 2169. [Google Scholar] [CrossRef] [PubMed]
  32. Liu, H.; Fang, S.; Jianhua, J. An improved weighted fusion algorithm of multi-sensor. J. Phys. Conf. Ser. 2020, 1453, 012009. [Google Scholar] [CrossRef]
  33. Han, J.; Moradi, S.; Faramarzi, I.; Liu, C.; Zhang, H.; Zhao, Q. A local contrast method for infrared small-target detection utilizing a tri-layer window. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1822–1826. [Google Scholar] [CrossRef]
  34. Jiang, X.; Wu, X.; Xiong, Y.; Li, B. Active contours driven by local and global intensity fitting energies based on local entropy. Optik 2015, 126, 5672–5677. [Google Scholar] [CrossRef]
  35. Gonzalez, R.C. Digital Image Processing; Pearson Education India: Noida, India, 2009. [Google Scholar]
  36. Pogorelov, K.; Randel, K.R.; Griwodz, C.; Eskeland, S.L.; de Lange, T.; Johansen, D.; Spampinat, C. Kvasir: A multi-class image dataset for computer aided gastrointestinal disease detection. In Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017; pp. 164–169. [Google Scholar]
  37. Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A fusion-based enhancing method for weakly illuminated images. Signal Process. 2016, 129, 82–96. [Google Scholar] [CrossRef]
  38. Wang, W.; Chen, Z.; Yuan, X.; Wu, X. Adaptive image enhancement method for correcting low-illumination images. Inf. Sci. 2019, 496, 25–41. [Google Scholar] [CrossRef]
  39. Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993. [Google Scholar] [CrossRef]
  40. Varga, D. No-reference image quality assessment with global statistical features. J. Imaging 2021, 7, 29. [Google Scholar] [CrossRef] [PubMed]
  41. Golestaneh, S.A.; Dadsetan, S.; Kitani, K.M. No-reference image quality assessment via transformers, relative ranking, and self-consistency. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 1220–1230. [Google Scholar]
  42. Venkatanath, N.; Praneeth, D.; Bh, M.C.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception-based features. In Proceedings of the National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–6. [Google Scholar]
  43. Ma, K.; Zeng, K.; Wang, Z. Perceptual quality assessment for multi-exposure image fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356. [Google Scholar] [CrossRef] [PubMed]
  44. Winkler, S.; Mohandas, P. The evolution of video quality measurement: From PSNR to hybrid metrics. IEEE Trans. Broadcast. 2008, 54, 660–668. [Google Scholar] [CrossRef]
  45. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  46. Yue, G.; Han, W.; Jiang, B.; Zhou, T.; Cong, R.; Wang, T. Boundary constraint network with cross layer feature integration for polyp segmentation. IEEE J. Biomed. Health Inform. 2022, 26, 4090–4099. [Google Scholar] [CrossRef]
  47. Yeung, M.; Sala, E.; Schönlieb, C.B.; Rundo, L. Focus U-Net: A novel dual attention-gated CNN for polyp segmentation during colonoscopy. Comput. Biol. Med. 2021, 137, 104815. [Google Scholar] [CrossRef]
Figure 1. Framework of the proposed image enhancing model.
Figure 2. Three generated sub images by gamma correction.
Figure 3. (a) The intensity layer of three generated sub images by CLAHE [24], brightened image [25,26,27] and detail enhanced image [28]. (b) Their corresponding histogram.
Figure 4. Block diagram of the proposed decomposition scheme.
Figure 5. (a) Intensity layer as the input image. (b) Horizontal detail coefficient as the guidance image. (c) Guided filter’s output.
Figure 6. Comparison of enhanced images from Kvasir dataset. (a) The input image; (b) [37]; (c) [16]; (d) [38]; (e) [39]; (f) proposed.
Figure 7. Comparison of enhanced images from Kvasir dataset: (a) the input image; (b) [37]; (c) [16]; (d) [38]; (e) [39]; (f) proposed.
Figure 8. Comparison of enhanced images from Kvasir dataset: (a) the input image; (b) [37]; (c) [16]; (d) [38]; (e) [39]; (f) proposed.
Figure 9. Average comparison chart. Each bar corresponds to the average value of a specific metric over 10 images: (a) [37], (b) [16], (c) [38], (d) [39], (e) Proposed.
Table 1. The average ratings given by observers.

Input Image | [37] | [16] | [38] | [39] | Proposed
Image 1 | 2 | 3 | 3.5 | 4.5 | 4.5
Image 2 | 2.5 | 3 | 4.5 | 4.5 | 5
Image 3 | 2 | 3 | 3.5 | 5 | 3.5
Image 4 | 2.5 | 3 | 4 | 4 | 4.5
Image 5 | 2 | 3.5 | 4 | 4 | 4.5
Image 6 | 2.5 | 3 | 3 | 3.5 | 4.5
Image 7 | 2 | 2.5 | 3 | 4.5 | 4.5
Image 8 | 2.5 | 3 | 3 | 4 | 4.5
Image 9 | 2 | 2 | 3 | 4 | 4.5
Image 10 | 2.5 | 2.5 | 2.5 | 4 | 4.5
Table 2. The entropy outcomes from various methods.

Input Image | [37] | [16] | [38] | [39] | Proposed
Image 1 | 7.1833 | 7.4410 | 7.4825 | 7.7052 | 7.8653
Image 2 | 7.3407 | 7.6590 | 7.5401 | 7.6233 | 7.7428
Image 3 | 7.1878 | 7.6534 | 7.4041 | 7.4818 | 7.7089
Image 4 | 6.9265 | 7.1816 | 6.9851 | 7.2838 | 7.7392
Image 5 | 7.0741 | 7.6059 | 7.4964 | 7.6235 | 7.7031
Image 6 | 7.6242 | 7.5985 | 7.7606 | 7.7377 | 7.7544
Image 7 | 7.2864 | 7.0792 | 7.2658 | 7.5430 | 7.5234
Image 8 | 7.2788 | 6.8253 | 7.4715 | 7.7203 | 7.7822
Image 9 | 6.9022 | 6.9431 | 7.1045 | 7.3771 | 7.6132
Image 10 | 7.1645 | 7.1092 | 7.3451 | 7.6629 | 7.6463
Table 3. The CII outcomes from various methods.

Input Image | [37] | [16] | [38] | [39] | Proposed
Image 1 | 0.9878 | 2.0399 | 5.4268 | 4.2233 | 6.9864
Image 2 | 0.8584 | 3.0497 | 4.3574 | 5.6060 | 4.0511
Image 3 | 0.6856 | 5.3032 | 10.6064 | 8.6028 | 93.2836
Image 4 | 0.9326 | 5.1778 | 5.0201 | 8.0599 | 10.0784
Image 5 | 0.5990 | 2.5144 | 4.7873 | 4.9990 | 5.1079
Image 6 | 0.8387 | 2.4099 | 1.4605 | 1.9431 | 5.0698
Image 7 | 0.9727 | 1.4867 | 3.9971 | 2.7027 | 7.9269
Image 8 | 0.9443 | 2.7898 | 2.5004 | 2.7900 | 5.7693
Image 9 | 0.7664 | 32.4535 | 10.857 | 8.5560 | 4.3330
Image 10 | 0.8912 | 2.7751 | 3.7644 | 4.6609 | 11.8316
Table 4. The PIQE outcomes from various methods.

Input Image | [37] | [16] | [38] | [39] | Proposed
Image 1 | 42.1150 | 25.2324 | 40.8786 | 51.2644 | 37.4767
Image 2 | 19.4745 | 17.2627 | 18.9662 | 24.2900 | 29.8537
Image 3 | 14.9021 | 11.7482 | 13.7531 | 17.5451 | 16.3434
Image 4 | 14.2059 | 24.7821 | 15.3968 | 14.6998 | 31.4279
Image 5 | 13.6932 | 26.7682 | 15.9563 | 15.5255 | 18.3837
Image 6 | 19.8028 | 29.0356 | 23.9177 | 24.0370 | 18.5096
Image 7 | 30.2362 | 39.1689 | 29.7034 | 34.5068 | 25.7705
Image 8 | 20.1907 | 34.4065 | 24.9211 | 25.7896 | 24.0150
Image 9 | 6.9389 | 20.1736 | 8.1861 | 18.1098 | 19.7488
Image 10 | 17.9276 | 25.3987 | 16.7112 | 25.8942 | 24.7609
Table 5. The PCQI outcomes from various methods.

Input Image | [37] | [16] | [38] | [39] | Proposed
Image 1 | 0.9945 | 0.9992 | 0.9997 | 0.9994 | 0.9991
Image 2 | 0.9943 | 0.9988 | 0.9996 | 0.9989 | 0.9985
Image 3 | 0.9948 | 0.9990 | 0.9991 | 0.9996 | 0.9990
Image 4 | 0.9953 | 0.9993 | 0.9988 | 0.9994 | 0.9996
Image 5 | 0.9954 | 0.9992 | 0.9994 | 0.9997 | 0.9986
Image 6 | 0.9936 | 0.9981 | 0.9988 | 0.9986 | 0.9987
Image 7 | 0.9950 | 0.9983 | 0.9986 | 0.9992 | 0.9989
Image 8 | 0.9946 | 0.9989 | 0.9994 | 0.9993 | 0.9993
Image 9 | 0.9953 | 0.9991 | 0.9994 | 0.9998 | 0.9989
Image 10 | 0.9958 | 0.9987 | 0.9948 | 0.9973 | 0.9994
Table 6. The PSNR outcomes from various methods.

Input Image | [37] | [16] | [38] | [39] | Proposed
Image 1 | 22.27 | 18.97 | 19.94 | 24.06 | 22.78
Image 2 | 20.19 | 17.47 | 19.75 | 19.22 | 23.72
Image 3 | 19.51 | 17.73 | 20.03 | 23.72 | 24.00
Image 4 | 26.84 | 18.09 | 17.68 | 26.06 | 22.15
Image 5 | 19.17 | 19.63 | 22.49 | 23.05 | 25.44
Image 6 | 17.41 | 14.09 | 18.22 | 17.75 | 22.59
Image 7 | 19.29 | 14.27 | 16.44 | 21.75 | 19.87
Image 8 | 21.11 | 17.44 | 21.54 | 21.86 | 23.12
Image 9 | 22.82 | 19.56 | 23.04 | 27.60 | 25.19
Image 10 | 20.98 | 21.36 | 19.21 | 25.09 | 21.79
Table 7. The SSIM outcomes from various methods.

Input Image | [37] | [16] | [38] | [39] | Proposed
Image 1 | 0.9809 | 0.9541 | 0.9479 | 0.9271 | 0.9848
Image 2 | 0.9258 | 0.9327 | 0.9219 | 0.8803 | 0.9083
Image 3 | 0.9304 | 0.9431 | 0.9329 | 0.9281 | 0.9136
Image 4 | 0.9848 | 0.9511 | 0.9398 | 0.9402 | 0.9072
Image 5 | 0.9348 | 0.9663 | 0.9561 | 0.9036 | 0.9391
Image 6 | 0.8403 | 0.7902 | 0.8359 | 0.8010 | 0.8967
Image 7 | 0.9071 | 0.8648 | 0.8552 | 0.8709 | 0.9032
Image 8 | 0.9522 | 0.9155 | 0.9358 | 0.8730 | 0.9042
Image 9 | 0.9508 | 0.9605 | 0.9540 | 0.9603 | 0.9525
Image 10 | 0.9489 | 0.9732 | 0.9418 | 0.9319 | 0.9273
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
