A Medical Endoscope Image Enhancement Method Based on Improved Weighted Guided Filtering

Abstract: In clinical surgery, the quality of endoscopic images is degraded by noise. Blood, illumination changes, specular reflection, smoke, and other factors contribute to noise, which reduces image quality in occluded areas, affects doctors' judgment, prolongs the operation duration, and increases operational risk. In this study, we propose an improved weighted guided filtering algorithm to enhance tissue in endoscopic images. An unsharp mask algorithm and an improved weighted guided filter were used to enhance vessel details and contours in endoscopic images. The scheme of the entire endoscopic image processing pipeline, which includes detail enhancement, contrast enhancement, brightness enhancement, and highlight area removal, is presented. Compared with other algorithms, the proposed algorithm maintains edges and reduces halos efficiently, and its effectiveness was demonstrated experimentally. The peak signal-to-noise ratio and structural similarity of endoscopic images obtained using the proposed algorithm were the highest, and the detail variance-background variance also improved. The proposed algorithm has a strong ability to suppress noise and maintains the structure of the original endoscopic images while improving the details of tissue blood vessels. The findings of this study can provide guidelines for developing endoscopy devices.


Introduction
With the development of science and technology, endoscopy has become a widely employed medical procedure [1]. Through endoscopy, doctors not only directly observe the tissue morphology and lesions of human internal organs but also further process endoscopic images to achieve excellent visual and diagnostic effects. However, owing to the particularity of human tissues and the limitation of imaging conditions, images obtained directly via endoscopy often have low contrast between blood vessels and surrounding tissues; thus, some vascular features are missing. Therefore, it is necessary to enhance endoscopic images [2][3][4].
Numerous image enhancement (IE) algorithms have been proposed to improve the quality of endoscope images. Retinal vascular enhancement in medical images [5] is achieved using image processing methods, such as the adaptive histogram equalization method, the unsharp mask algorithm [6], morphological methods, and Hessian matrix-based methods.


Guided Filtering
Guided image filtering (GIF) is a filtering algorithm proposed by He Kai-ming [22] and can be used for stereo matching, IE, image fusion, defogging, etc. To the best of our knowledge, GIF is the fastest edge-preserving algorithm, with complexity O(N). The following linear relationship exists between the guide image G and the output image Q:

Q_i = a_k G_i + b_k, ∀ i ∈ w_k, (1)

where w_k denotes the filter window and a_k and b_k are constants in the window. As the texture gradient of the output image matches that of the guide image, ∇Q = a∇G is satisfied; therefore, GIF can preserve edges to a certain extent. The energy function of GIF is defined as follows:

E(a_k, b_k) = Σ_{i∈w_k} [ (a_k G_i + b_k − P_i)² + λ a_k² ], (3)

where λ denotes the regularization parameter, whose function is to prevent a_k from becoming too large, and P represents the input image. Equation (3) is solved using the least-squares method as follows:

a_k = ( (1/|w|) Σ_{i∈w_k} G_i P_i − µ_k P̄_k ) / (σ_k² + λ), (4)

b_k = P̄_k − a_k µ_k, (5)

where µ_k and σ_k² denote the mean and variance of G in w_k, P̄_k denotes the mean of P in w_k, and |w| denotes the number of pixels in the window.

Weighted Guided Filtering
However, the regularization coefficient is fixed in the energy function of GIF, so the effect of sharpening prominent edges while denoising is limited. The λ in GIF is set manually, and the differences in image texture between various windows are ignored. Therefore, some researchers proposed a weighted GIF (WGIF) [23]. An edge weight factor is introduced in WGIF to adaptively adjust the regularization parameter as follows:

Γ_G(i′) = (1/N) Σ_{i=1}^{N} ( σ²_{G,1}(i′) + ε ) / ( σ²_{G,1}(i) + ε ), (6)

where i′ denotes the center pixel of the current window and σ²_{G,1}(i) denotes the variance of the guide image G in the 3 × 3 window centered at pixel i. Here, the window size is 3 × 3, and N denotes the number of pixels in the image. In this study, ε is (0.001 × L)², where L is the dynamic range of the input image; for 8-bit images, L is 256 and ε is the fixed value 0.065536. With the weight of Formula (6), the energy function can be expressed as follows:

E(a_k, b_k) = Σ_{i∈w_k} [ (a_k G_i + b_k − P_i)² + ( λ / Γ_G(k) ) a_k² ], (7)

WGIF outperforms GIF in terms of image sharpening and edge highlighting because of the adjustment by the edge weight factor. Furthermore, the algorithm complexity of WGIF does not increase compared with GIF.
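The edge weight factor of Formula (6) can be sketched numerically as follows. This is an illustrative NumPy sketch, not the authors' implementation; the 3 × 3 local variance and the default L = 256 follow the description above.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_variance_3x3(G):
    """Variance of G over the 3x3 window centered at each pixel (edge-padded)."""
    p = np.pad(G.astype(float), 1, mode="edge")
    w = sliding_window_view(p, (3, 3))
    return w.var(axis=(-2, -1))

def edge_weight(G, L=256):
    """Edge weight Gamma_G(i') = (1/N) * sum_i (var(i') + eps) / (var(i) + eps)."""
    eps = (0.001 * L) ** 2          # 0.065536 for 8-bit images
    v = local_variance_3x3(G) + eps
    return v * np.mean(1.0 / v)     # factor var(i') + eps out of the sum
```

Flat regions receive weights at or below 1, whereas pixels near edges receive weights above 1, which shrinks the effective regularization λ/Γ_G there and preserves sharp edges.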

Detail Enhancement of Endoscopic Images
(1) Determine parameter α In this study, α is defined as the sharpening factor that controls the degree of image sharpening. However, this factor is sensitive to noise, so noise must be suppressed when computing α. Considering this, the following formula is used to calculate α: where n denotes the total number of pixels of the grayscale image, I_C denotes the gray image, and C represents the R, G, and B channels of the image. Endoscope images contain fine details, such as blood vessel information; therefore, the selected window radius should not be too large, and the influence of noise should be considered. Accordingly, the window radius in the proposed endoscope IE algorithm was set to 16 and λ to 0.12.
(2) Quadratic improved WGF The original endoscope image typically has some noise, whereas the noise of the WGF image is partially reduced. However, some residual noise exists in various frequency bands. A quadratic improved WGF algorithm was introduced to suppress the noise to overcome this defect.
First, the original endoscope image was filtered using the improved WGF algorithm to obtain the filtered image P1, with a window radius of 16 and λ = 0.12. Next, the original image was used as the input image and P1 as the guide image, and the improved WGF was applied again.
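The two-pass scheme can be illustrated with a plain guided filter. This is a sketch, not the exact algorithm: the edge weighting of the improved WGF is omitted for brevity. The first pass filters the image with itself as guide to obtain P1; the second pass filters the original image again using P1 as the guide.

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window via integral images, O(N)."""
    k = 2 * r + 1
    p = np.pad(img.astype(float), r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))             # zero row/col so window sums subtract cleanly
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(G, P, r, lam):
    """He et al.'s guided filter: Q = mean(a) * G + mean(b)."""
    mu_G, mu_P = box_filter(G, r), box_filter(P, r)
    var_G = box_filter(G * G, r) - mu_G ** 2
    cov_GP = box_filter(G * P, r) - mu_G * mu_P
    a = cov_GP / (var_G + lam)                  # per-window slope
    b = mu_P - a * mu_G                         # per-window offset
    return box_filter(a, r) * G + box_filter(b, r)

def quadratic_filter(I, r=16, lam=0.12):
    P1 = guided_filter(I, I, r, lam)            # first pass: self-guided
    return guided_filter(P1, I, r, lam)         # second pass: P1 guides the original image
```

The second pass suppresses residual noise that survives the first pass, because the already-smoothed P1 carries less noise into the slope estimation.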

Endoscopic Image Contrast Enhancement
In this study, an endoscopic blood vessel contrast enhancement algorithm based on spectral transformation was modified. To make the endoscopic image visually distinct, the maximization target of the tonal distance was defined as follows: where B_ori, V_ori represent the background and blood vessel areas of the original endoscopic image, respectively, and B_en, V_en represent the background and blood vessel areas of the endoscopic image after the detail enhancement algorithm, respectively. For the R channel component, the model linearly reduces its information using an attenuation factor α. For the G and B channel components, the model acts to improve the overall information; therefore, it uses a nonlinear function to process the endoscopic image and improve its contrast. The formula is as follows: where m controls the degree of translation of the function, E controls the slope of the function, r represents the input image, and s represents the output image. The model parameters were trained as follows: m = 20 endoscopic images and n = 84 parameter sets (each set consisting of α_j, L_j, m_j, and E_j) were selected for training. The steps of training the parameters are as follows: (1) Input the i-th image, where the image size is 316 × 258. If i is greater than m, end the training and skip to S8; otherwise, execute S2. (2) Select the vessel and background regions of the endoscopic image.
(3) Enter the j-th group of parameters. If j is greater than n, skip to S7; otherwise, go to S4. (4) A stretch is performed on each of the three channels using the following formula: (5) Both the original training image and the processed image are in the RGB space; they are now converted to CIE space. The conversion formula is as follows: (6) Calculate the tonal distance D(B_en, V_en) between the background and blood vessels of the processed image and the tonal distance D(B_ori, V_ori) between the background and blood vessels of the original image; save the ratio D(B_en, V_en)/D(B_ori, V_ori) in the array VecDis and go back to S3. (7) According to the maximization objective of Formula (12), a set of optimal parameters α_i, L_i, m_i, and E_i of the original image can be obtained; these optimal parameters are saved in VecA, VecL, VecM, and VecE, respectively. Finally, go back to S2. (8) Take the averages of VecA, VecL, VecM, and VecE to obtain a set of optimal parameters α_best, L_best, m_best, and E_best; finally, end the training.
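The per-channel stretch in step (4) can be sketched as follows. The exact nonlinear function is not reproduced above, so a standard sigmoid stretch s = 1/(1 + (m/r)^E), whose translation and slope match the roles described for m and E, is assumed here; α linearly attenuates the R channel. The default values are the trained optima reported in the Discussion (α = 0.75, m = 0.525, E = 2.15).

```python
import numpy as np

def stretch_channels(img, alpha=0.75, m=0.525, E=2.15):
    """Contrast stretch on a normalized RGB image in (0, 1].
    R is linearly attenuated by alpha; G and B pass through an assumed
    sigmoid stretch s = 1 / (1 + (m / r)**E) (m: translation, E: slope)."""
    out = np.empty_like(img, dtype=float)
    out[..., 0] = alpha * img[..., 0]                # attenuate the R channel
    for c in (1, 2):                                 # stretch the G and B channels
        r = np.clip(img[..., c], 1e-6, 1.0)          # avoid division by zero
        out[..., c] = 1.0 / (1.0 + (m / r) ** E)
    return out
```

The stretch is monotonically increasing in the input, so channel ordering is preserved while mid-tones around m are expanded.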

Brightness Enhancement of Endoscopic Images
During shooting with a CMOS camera, the brightness of an endoscope image may be insufficient because of a light source problem or insufficient exposure. To solve this problem, it is imperative to enhance the brightness of the endoscope image. Tao et al. [24] proposed the AINDANE algorithm, which can improve the brightness of an endoscope image that appears dark because of the shooting equipment or surrounding environment. This function is realized mainly by the adaptive luminance enhancement module.
First, convert RGB images to grayscale images. The calculation process is expressed as follows: where I R (x, y), I G (x, y), and I B (x, y) represent the values of the R, G, and B channels (x, y), respectively, each of which is 8 bits. Then, normalize I(x, y) using the following formula: Mathematics 2022, 10, 1423 6 of 17 The calculation process of the nonlinear transfer function is expressed as follows: This process is called dynamic range compression, where the parameter z is related to the histogram of the image as follows: When the cumulative distribution function value is 0.1, L represents its corresponding gray value.
L is used as an indicator of the brightness of an image. If the image is dark (L < 50), brightness enhancement is required; if the image is not so dark (L ≈ 100), there is less need for brightness enhancement; if the image has sufficient brightness (L > 150), no enhancement is required. As such, the algorithm is adaptive. Figure 2 shows the cumulative distribution function of gray values at the gray level L. The transfer function is a combination of three simple functions. In Figure 3, curve 6 (z = 0) is compared with the dotted line (line labeled 1), which represents the identity transformation. The first two terms are plotted as curve 2 and line 3, respectively, and their sum is curve 4. By adding the normalized curves 4 and 5 and dividing by 2, the transfer function shown in curve 6 was generated. The results showed that this transformation significantly increased the brightness of the darker regions and reduced the intensity of the brighter regions. Figure 4 shows the contrast of the brightness enhancement effect. Three original endoscope images with low brightness were enhanced using the proposed algorithm, and their brightness was significantly improved.
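The adaptive luminance step can be sketched as follows. The luminance weights and the transfer function are taken from Tao and Asari's AINDANE formulation (I′_n = (I_n^(0.75z+0.25) + 0.4(1 − z)(1 − I_n) + I_n^(2−z))/2, with z selected from L as described above); they are assumptions insofar as the exact formulas are not reproduced in the text.

```python
import numpy as np

def aindane_luminance(rgb):
    """Adaptive brightness enhancement of the normalized luminance
    (after Tao and Asari's AINDANE; rgb is an (H, W, 3) uint8 image)."""
    # NTSC luminance (assumed conversion weights)
    I = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    In = I / 255.0
    # L: gray value at which the cumulative distribution reaches 0.1
    L = np.percentile(I, 10)
    if L <= 50:          # dark image: full enhancement
        z = 0.0
    elif L <= 150:       # medium brightness: partial enhancement
        z = (L - 50) / 100.0
    else:                # bright image: identity transform
        z = 1.0
    # Nonlinear transfer: sum of three simple curves, halved (range compression)
    return (In ** (0.75 * z + 0.25) + 0.4 * (1 - z) * (1 - In) + In ** (2 - z)) / 2.0
```

For a bright image (L > 150), z = 1 reduces the transfer to the identity, matching the "no enhancement required" case above.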


Removal of Highlights from Endoscopic Images
(1) Highlight spot detection First, the original image is converted from the RGB model to the HSV color model. To obtain the absolutely bright areas, two thresholds, T_s and T_v, regarding saturation and lightness, respectively, are employed to decide whether an area is highlighted, as follows: However, the disadvantage of this method is that rapid movement of the endoscope lens causes misalignment of the color channels; therefore, the detected highlights can appear white or as highly saturated R, G, or B. The G and B channels, namely, G_c and B_c, can be normalized as follows: Then, comparing the 95th percentile of the intensity of G_c and B_c with the 95th percentile of the intensity of E_c, we obtain the following: If a pixel x_0 belongs to a highlighted area, it satisfies the following formula: Then, the algorithm detects the parts of the endoscopic image where the highlights are not too intense. Its purpose is to compare each given pixel with a smooth, nonspecular surface color at the pixel location, which is estimated from local image statistics in the endoscopic image. A slightly lower threshold T_rel2 is set for the contrast intensity using a method similar to that for detecting brighter highlighted areas. Owing to the median filter's robustness to outliers and its edge-preserving property, median filtering is performed on the R, G, and B channels of the original image. Each detected highlighted region is then filled with the centroid of the pixel colors within a fixed distance from the region outline. This region of interest is then isolated by completely separating the masks obtained by performing two dilation operations on the masks of possible highlight locations. For the dilation operations, disk-shaped structuring elements with radii of 2 and 4 pixels are used.
By comparing pixel values in the input image and the median-filtered image, highlights are found as color outliers. For this comparison, several distance measures and ratios are possible; examples of such metrics are the Euclidean distance and the infinity norm of the difference in RGB space. During the evaluation, it was found that the maximum ratio of the three color channel intensities in the original image to those in the median-filtered image yielded the best results. For each pixel position x, the maximum intensity ratio is calculated as follows: where c*_R(x), c*_G(x), and c*_B(x) denote the intensities of the median-filtered R, G, and B components, respectively.
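The intensity-ratio test can be sketched as follows; the median window size and the decision threshold t_rel are illustrative assumptions, and the contrast compensation by τ_i described next is omitted.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median_filter_rgb(img, r=1):
    """Per-channel median over a (2r+1)x(2r+1) window, edge-padded."""
    p = np.pad(img.astype(float), ((r, r), (r, r), (0, 0)), mode="edge")
    w = sliding_window_view(p, (2 * r + 1, 2 * r + 1), axis=(0, 1))
    return np.median(w, axis=(-2, -1))

def highlight_mask(img, t_rel=1.5):
    """Flag pixels whose maximum channel ratio to the median-filtered
    image exceeds t_rel (an assumed threshold)."""
    med = median_filter_rgb(img) + 1e-6        # avoid division by zero
    ratio = (img.astype(float) / med).max(axis=-1)
    return ratio > t_rel
```

A single saturated pixel barely shifts the local median, so its ratio spikes and it is flagged, while its neighbors remain below the threshold.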
Different color balances and contrasts can cause this characteristic to vary significantly between images. These changes are compensated for using a contrast factor τ_i, which is calculated for the three color channels of each given image: where c̄_i denotes the sample mean of all pixel intensities in the color channel and s(c_i) denotes the sample standard deviation. Using these coefficients, Equation (22) is modified to obtain a contrast-compensated intensity ratio ε_max(x) as follows: If this pixel is a highlighted pixel, we have the following: The outputs of the first and second modules are combined using logical disjunction to generate the mask. The two modules complement each other well: the first module uses a global threshold, so only prominent and bright specular highlights can be detected, whereas the second module detects less obvious highlighted regions by comparing the relative characteristics of surface colors. Owing to the high dynamic range of the image sensor, the second module alone can also achieve good results. However, when the sensor saturates, the relative highlights become less intense, such that no area of the image appears particularly bright; the first module still allows detection in this case. Figure 5 shows the detection of the highlighted area of an endoscope. (2) Repair of highlighted areas Fill each detected highlight region with the pixel colors within the range of the distance profile. Then, a Gaussian function is used to filter the image, similar to the median filtering performed after the image is filled, and, finally, a smooth image is obtained.
The binary mask of the highlighted area in the labeled image is converted to a smooth weighted mask. Smoothing is achieved by adding nonlinear attenuation to the contour of the mirror area. The weight b of the pixels around the highlight in the weighted template is calculated according to the Euclidean distance d from the pixel to the contour of the highlighted area: where the logistic attenuation function ranges from l_min to l_max in the window, the distance range of the mapping is from 0 to d_max, and c = 0.7. The weighted sum of the original image c(x) and the Gaussian-smoothed image c_sm(x) under the weighted mask m(x) yields the repaired image c_inp(x). The calculation process is expressed as follows:

c_inp(x) = (1 − m(x)) c(x) + m(x) c_sm(x).

Figure 6 shows the comparison before and after the restoration of the highlighted area of the endoscope image. As the figure shows, the highlighted area was well repaired using the proposed algorithm.
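The repair step reduces to a weighted blend between the original and smoothed images. The logistic decay below is an assumed form matching the description (range l_min to l_max, distances 0 to d_max, steepness c = 0.7), not the paper's exact function.

```python
import numpy as np

def logistic_weight(d, d_max, l_min=0.0, l_max=1.0, c=0.7):
    """Assumed logistic attenuation: near l_max at the highlight contour
    (d = 0), decaying toward l_min at distance d_max."""
    return l_min + (l_max - l_min) / (1.0 + np.exp(-c * (d_max / 2.0 - d)))

def blend_inpaint(orig, smooth, m):
    """c_inp(x) = (1 - m(x)) * c(x) + m(x) * c_sm(x), with mask m in [0, 1]."""
    m3 = np.asarray(m)[..., None]       # broadcast the mask over color channels
    return (1.0 - m3) * orig + m3 * smooth
```

Where m = 1 the smoothed fill replaces the highlight entirely; where m = 0 the original pixels pass through untouched, so the transition is seamless.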


Specific Steps of the Endoscope IE Algorithm
The steps of the IE algorithm are as follows: (1) Categorize the original endoscope image into R, G, and B channels.
(2) Obtain the base layer image of each channel using the quadratic improved WGF algorithm for the three channels. (3) Subtract the corresponding base layer images of R, G, and B of the three channels to obtain the images of the detailed layer of the three channels. (4) Multiply the detailed layer images of the three channels by the coefficient α to obtain the enhanced detailed layer images. (5) Add the detailed layer images and the corresponding base layer images of the three channels. Finally, merge the three channels to obtain the enhanced endoscope image.
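The five steps above can be sketched generically; `smooth` stands in for the quadratic improved WGF (any edge-preserving filter can be plugged in for illustration).

```python
import numpy as np

def detail_enhance(img, alpha, smooth):
    """Steps (1)-(5): split into R, G, B; smooth each channel into a base
    layer; amplify the detail layer by alpha; recombine the channels."""
    out = []
    for ch in range(img.shape[-1]):
        base = smooth(img[..., ch])         # (2) base layer per channel
        detail = img[..., ch] - base        # (3) detail layer
        out.append(base + alpha * detail)   # (4) amplify and add back
    return np.stack(out, axis=-1)           # (5) merge the three channels
```

With α = 1 the image is returned unchanged regardless of the filter, while α > 1 boosts exactly the high-frequency content the filter removed.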

Evaluation Method
In the endoscopic image evaluation, doctors' subjective evaluation is still the main technique that is used to verify image quality. For laparoscopic endoscopic images, the establishment of a quantitative assessment is challenging because there are no available gold standards. In this study, to verify the performance of tissue vascular, brightness, and color enhancement, we defined the evaluation indexes for verifying IE: (1) peak signal-to-noise ratio (PSNR), (2) structural similarity index (SSIM), and (3) detail variance-background variance (DV-BV).
We used PSNR and SSIM to evaluate the image quality. In the experimental results, the higher the PSNR value, the better the image quality. PSNR is a measure of image reconstruction quality, defined as follows:

PSNR = 10 · log10( MAX_I² / MSE ), (28)

where MSE denotes the mean square error between the original image I(i, j) and the noise image K(i, j),

MSE = (1/(mn)) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} [ I(i, j) − K(i, j) ]²,

and MAX_I² denotes the square of the maximum possible pixel value of the picture.
SSIM is used to measure the similarity of two images, where the larger the value, the more similar the two images are.
SSIM(x, y) = l(x, y) · c(x, y) · s(x, y), (29)

l(x, y) = (2µ_x µ_y + c_1) / (µ_x² + µ_y² + c_1),
c(x, y) = (2σ_x σ_y + c_2) / (σ_x² + σ_y² + c_2),
s(x, y) = (σ_xy + c_3) / (σ_x σ_y + c_3),

where µ denotes the mean, σ² denotes the variance, and σ_xy represents the covariance of x and y; c_1 = (k_1 L)² and c_2 = (k_2 L)² are two constants, with k_1 = 0.01 and k_2 = 0.03 (c_3 is commonly set to c_2/2), and L is the range of image pixel values. The image to be evaluated can be divided into two areas: detail and background. The local variance of the detail area is computed and then averaged to obtain the detail variance (DV); the local variance of the background area is computed and then averaged to obtain the background variance (BV). The calculation steps of the DV-BV value [25] are as follows: (1) Compute the histogram of local variance values in the enhanced image, where each pixel's variance is computed over a 5 × 5 neighborhood; the given threshold T_v is 5. Pixels whose local variance exceeds this threshold belong to the detail areas; the rest belong to the background areas. (2) The DV-BV value is estimated as DV/BV; its value is proportional to the degree of image detail enhancement.
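PSNR and the DV-BV steps above can be sketched as follows; the 5 × 5 local variance and threshold T_v = 5 follow the steps above, and treating high-variance pixels as the detail region is the conventional reading of the split.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def psnr(ref, test, max_val=255.0):
    """PSNR = 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def dv_bv(img, tv=5.0, r=2):
    """DV/BV: mean 5x5 local variance of the detail region over that of
    the background, split at threshold tv."""
    p = np.pad(img.astype(float), r, mode="edge")
    v = sliding_window_view(p, (2 * r + 1, 2 * r + 1)).var(axis=(-2, -1))
    detail = v > tv                         # high local variance = detail area
    dv = v[detail].mean() if detail.any() else 0.0
    bv = v[~detail].mean() if (~detail).any() else 1.0
    return dv / bv
```

An enhancement that amplifies texture raises DV faster than BV, so a larger DV/BV ratio indicates stronger detail enhancement.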

Results
The experimental operating system was Windows 10, 64-bit, and Matlab R2018b was used. The experimental dataset was provided by the Hamlyn Centre Laparoscopic/Endoscopy Video Dataset (http://hamlyn.doc.ic.ac.uk/vision/ (accessed on 2 September 2021)), which contains real laparoscopic images. The dataset contained 32,400 pairs of binocular endoscopic images, each of size 384 × 192.
We filtered by inputting a grayscale image with three large difference grayscale values and then drew the corresponding histogram. The histogram distribution can measure the weakening of the halo. As shown in Figure 7, GIF had an obvious halo; EGIF had a small amount of halo; WGIF and GDGIF significantly improved the degree of halo reduction. The proposed algorithm was almost free of halos. Therefore, it was better than the other algorithms regarding the degree of halo reduction.
As shown in Figure 8, GIF, WGIF [26], GDGIF [27], EGIF [28], and the proposed algorithm were used to process the original endoscopic images; six endoscopic images were enhanced by the endoscopic IE algorithm. The results in Figure 8 revealed that although GIF and the other methods enhanced the details of blood vessels, image distortion and noise amplification also occurred, which was attributable to their low edge maintenance ability: the noise retained in an image was amplified by the sharpening factor α because the image detail layer was obtained while retaining the noise. The edge retention capability of the proposed algorithm was superior to those of GIF, WGIF, GDGIF, and EGIF. Further, the proposed algorithm was less noisy and enhanced the details of the endoscopic tissue and blood vessels.
PSNR was used to measure the proportion of noise; image quality is proportional to PSNR. To validate the proposed algorithm, we selected six images from the Hamlyn Centre laparoscopy dataset. As shown in Table 1, the PSNR values of the five algorithms were compared. Compared with GIF, WGIF, GDGIF, and EGIF, the proposed quadratic WGF algorithm had the highest PSNR value, indicating that our technique made the tissue blood vessels clearer and had a better ability to suppress noise. The similarity of two images can be measured using SSIM; image similarity is proportional to SSIM. As shown in Table 2, the proposed algorithm had the highest SSIM value compared with GIF, WGIF, GDGIF, and EGIF, indicating that the proposed algorithm could better preserve the tissue structure of endoscopic images and had a higher structural similarity. As shown in Table 3, the DV-BV values of the proposed algorithm were compared with those of the original endoscopic images. The DV-BV values of the images processed by the proposed algorithm were much higher than those of the raw endoscopic images, indicating that the proposed algorithm, by enhancing image details and highlighting local information, considerably improved the contrast; the enhancement effect was obvious. As shown in Tables 1-3, the average PSNR and SSIM values obtained using the proposed improved quadratic WGF algorithm were 32.6063 dB and 0.9391, respectively. The average DV-BV was 32.2309, which was 88% higher than that of the original images.
We compared recent endoscopic IE algorithms, such as Sato et al.'s [29] endoscopic IE algorithm based on texture and color enhancement imaging (average SSIM = 0.9256), Wang et al.'s [30] vessel enhancement algorithm of nonlinear contrast stretching in multicolor space (DV-BV = 4.64, which was 54% higher than that of the original image), Qayyum

Discussion
Because there are no corresponding reference materials for clinical endoscopic images, a more professional evaluation was needed to verify the effectiveness of the endoscopic IE methods. Therefore, we invited 10 chief physicians from the Affiliated Hospital of Southwest Medical University, each with more than 5 years of experience in laparoscopic surgery, to score the enhanced images. The subjective evaluation scoring standard is shown in Table 4; the criteria included edge sharpening, sharpness, invariance, and acceptability. The subjective quality evaluation results of the clinicians for the different algorithms are shown in Table 5; the quality score adopted the form mean ± variance. In Table 4, edge sharpening, image sharpness, and acceptability were rated on a scale of 1 (worst) to 5 (best); a score of 1 indicated a nondiagnostic image, and a score of 5 indicated excellent diagnostic image quality. Meanwhile, pathological invariance was scored as 0 (change) or 1 (no change).
The evaluations of the 10 clinicians are shown in Table 5; the proposed quadratic WGF algorithm obtained the best subjective quality evaluation, which was attributable to the improved unsharp masking algorithm based on WGF. Images enhanced using WGF had better texture details. Compared with other filtering methods, WGF had advantages in clarity, invariance, and acceptability on the Hamlyn Centre laparoscopy dataset. Figure 9 shows that the proposed algorithm can better preserve the basic structure of endoscopic images, as well as enhance tissue contours and vessel details; the brightness and contrast were considerably improved, and the de-highlighting effect was better, validating the feasibility of the proposed algorithm. According to the subjective evaluation, objective evaluation, and comparison results, the proposed improved quadratic WGF algorithm made the endoscope image clearer, and the vascular contrast was significantly enhanced; its performance was superior to that of GIF, WGIF, GDGIF, and EGIF. A contrast-enhancing algorithm was used to train the endoscopic images after detail enhancement. By enhancing the G and B components of an image and reducing the R component, the tissue background and the tones of the blood vessels exhibited a distinct contrast. Guided filtering based on the original image separated the luminance and detail layers of each channel, and the detail layer containing vascular features was enhanced. The contrast of the image was further enhanced, and a set of optimal parameters α = 0.75, m = 0.525, E = 2.15, and L = 1.241 was obtained.
This ensured that the rendering of the tissue background and blood vessels met the requirements of human visual perception.
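The base/detail decomposition underlying this channel-wise enhancement can be illustrated with the standard (unweighted) guided filter of He et al.; the paper's improved weighted variant adds an edge-aware weighting that is not reproduced here, and treating α as the detail-layer gain is an assumption about the parameter's role.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, clipped at the image borders."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            y0, y1 = max(0, i - r), min(h, i + r + 1)
            x0, x1 = max(0, j - r), min(w, j + r + 1)
            out[i, j] = img[y0:y1, x0:x1].mean()
    return out

def guided_filter(I, p, r, eps):
    """Standard guided filter (He et al.): edge-preserving smoothing of p guided by I."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mean_I ** 2
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)   # per-window linear coefficient; eps controls smoothing
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)

# Self-guided decomposition of one channel into a base (luminance) layer and a
# detail layer, then a weighted recombination; alpha = 0.75 follows the reported
# optimal parameters, assuming it acts as the detail gain.
rng = np.random.default_rng(0)
channel = rng.random((24, 24))        # stand-in for one endoscopic color channel
base = guided_filter(channel, channel, r=4, eps=1e-2)
detail = channel - base               # detail layer carrying vascular features
enhanced = base + 0.75 * detail
```

The naive looped `box_mean` keeps the sketch dependency-free; a production version would use integral images or `cv2.ximgproc.guidedFilter` for O(1)-per-pixel filtering.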

Conclusions
An image enhancement algorithm based on secondary (quadratic) weighted guided filtering was proposed for endoscopy, a widely employed procedure in minimally invasive surgery. In the proposed algorithm, four modules, namely detail enhancement, contrast enhancement, brightness enhancement, and highlighted-area removal, were used together to enhance the contrast and sharpness of image vessels and contours; the modules can also be operated individually. The effectiveness of the proposed algorithm was verified experimentally. Both subjective evaluations and objective results showed that the proposed algorithm could preserve edges, reduce halo effects, and detect and remove highlights, and the endoscopic image quality improved considerably compared with other algorithms. These results demonstrate that the proposed algorithm is applicable to endoscopic image processing and has clinical application prospects. Compared with the other algorithms, the proposed algorithm produced the least noise, although some noise remained. In future work, we will continue to optimize the filtering algorithm. To meet the real-time and accuracy requirements of clinical operations, we will conduct research on smoke removal, super-resolution reconstruction, and 3D reconstruction of endoscopic images based on deep learning methods.
Author Contributions: Conceptualization, Y.P. and W.S.; methodology and guidance of the project, W.S. and G.Z.; validation, formal analysis, and data analysis, W.S. and G.Z.; writing, G.Z., J.L., E.C., Y.P., and W.S. All authors have read and agreed to the published version of the manuscript.