Smart Image Enhancement Using CLAHE Based on an F-Shift Transformation during Decompression

Abstract: As technologies for image processing, image enhancement can provide more effective information for later data mining, and image compression can reduce storage space. In this paper, a smart enhancement scheme applied during decompression, which combines a novel two-dimensional F-shift (TDFS) transformation and a non-standard two-dimensional wavelet transform (NSTW), is proposed. During the decompression, the first coefficient s 00 of the wavelet synopsis is used to adaptively adjust the global gray level of the reconstructed image. Next, contrast-limited adaptive histogram equalization (CLAHE) is used to achieve the enhancement effect. To avoid a blocking effect, CLAHE is applied when the synopsis has been decompressed to the second-to-last level. At this point, we enhance only the low-frequency component and do not change the high-frequency component. Lastly, we use CLAHE again after the image reconstruction. Through experiments, the effectiveness of our scheme was verified. Compared with the existing methods, the compression properties were preserved while the image details and contrast were also enhanced. The experimental results showed that the image contrast, information entropy, and average gradient were greatly improved compared with the existing methods.


Introduction
For image data, enhancement processing can provide higher-level image features for later mining. In image decompression, existing methods normally obtain valuable information only after decompression is complete. However, for images with poor quality, the details and contrast of the decompressed images often do not meet the requirements. Therefore, it is important to enhance the image during decompression.
At present, most image compression methods are based on a transformation domain such that the low-frequency and high-frequency components of the image can be separated. The low-frequency component often contains the main information about the image, forming its basic gray level, which reflects the original image. The high-frequency component constitutes the edges and details of the image. Noise, when present, is also reflected mainly in the high-frequency component. The existing methods [1,2] have proven the benefits of processing the high-frequency and low-frequency components separately.


Related Work
In this section, the F-shift transformation, TDFS, and CLAHE will be introduced systematically.

F-Shift Transformation
The F-shift transformation [3] can construct an error-bound wavelet synopsis, which guarantees that the absolute error of each reconstructed value d̂ is within a given error bound ∆, that is, |d̂ − d| ≤ ∆, where d is the original data. The F-shift transformation can be regarded as an extension of the Haar wavelet transformation. A wavelet transformation uses a series of wavelets of different scales to decompose the original data. After the transformation, low-frequency and high-frequency components at different wavelet scales are obtained. Unlike the Haar wavelet transformation, the F-shift transformation uses data ranges rather than the data itself to determine which inner node coefficients should be retained and what values they should take. In particular, the low-frequency component also consists of the data ranges at each scale. Figure 1 is an example of an F-shift error tree T for the original data set [7,6,1,8,5,4,2,10] and an error bound ∆ = 2. For an eight-resolution image, a wavelet synopsis with an approximation coefficient (low-frequency component) of s 0 = 5.5 and detail coefficients (high-frequency component) of s 5 = −3.5 and s 7 = −4 (excluding zero detail coefficients) can be obtained after three levels of transformation.
The F-shift transformation can be explained by the F-shift error tree T. For the error tree shown in Figure 1, we calculate the shift coefficient values of each node from the bottom up. First, each leaf node d i is replaced by its admissible data range [d i − ∆, d i + ∆], which absorbs the error bound and guarantees that any reconstructed value taken from this range stays within ∆ of the original data. Next, we determine which internal node coefficients need to be retained. Let innode j be an inner node, and let d L and d R be the left and right child nodes of innode j, with data ranges [a L, b L] and [a R, b R], respectively.
The shift coefficient s j of innode j is obtained as follows. If the two ranges do not overlap, the values of the two child nodes are quite different, and the shift coefficient s j of their parent node innode j needs to be retained:
s j = ((a L + b L)/2 − (a R + b R)/2)/2,
i.e., half the difference of the two range midpoints. Then, the updated data range [a j, b j] of innode j is stored temporarily:
[a j, b j] = [max(a L − s j, a R + s j), min(b L − s j, b R + s j)].
If the two ranges overlap, the difference between the two child nodes is small and within the given error bound. Therefore, the shift coefficient s j of the parent node innode j does not need to be retained and can be directly expressed by 0, and the updated range is simply the intersection:
[a j, b j] = [max(a L, a R), min(b L, b R)].
Thus, within a given error bound, the number of retained coefficients can be far smaller than in the Haar wavelet transformation. We call this procedure the one-step F-shift transformation. According to the above steps, the shift coefficient (high-frequency component) and the data range (low-frequency component) of each parent node can be calculated. Then, the parent nodes of each layer of the error tree T can be regarded as leaf nodes, so the shift coefficients and the data ranges of each layer can be calculated iteratively using the one-step F-shift transformation. Lastly, the root node can be given any value from its child's data range [a, b]; usually, we take the average:
s 0 = (a + b)/2.
As shown in Figure 1, a one-step F-shift transformation is performed on each pair of leaf nodes to obtain a low-frequency component and a high-frequency component: {[5,8], [2.5,6.5], [3,6], [4,8], 0, −3.5, 0, −4}. Next, the one-step F-shift transformation is repeated on the low-frequency component until a single data range is left. Lastly, we select the average of this data range as the final approximation coefficient. Table 1 shows the shift coefficients of each level of the F-shift transformation. After calculating the coefficient value of each node, a shift error tree is formed. The reconstructed value of each leaf node is computed as:
d̂ i (S) = s 0 + Σ_{s j ∈ path(d i) ∩ S} δ ij s j,
where d̂ i (S) is the reconstructed data from the synopsis S.
We set δ ij = +1 when d i is located on the left subtree of s j and set δ ij = −1 when d i is located on the right subtree of s j . S is the set of shift coefficients. path(d i ) is the node set that is located on the path from the root node to d i (not including d i ).
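As a concrete sketch, the one-step F-shift transformation above can be written in a few lines of Python. This is a minimal illustration, not the authors' implementation; it assumes the widened-range formulation in which each leaf value d is first replaced by the range [d − ∆, d + ∆]:

```python
def one_step_fshift(ranges):
    """One level of the F-shift transformation over a list of data ranges.

    Overlapping sibling ranges merge into their intersection (coefficient 0);
    disjoint siblings retain a shift coefficient equal to half the difference
    of their range midpoints, then merge after shifting toward each other.
    """
    lows, coeffs = [], []
    for i in range(0, len(ranges), 2):
        (al, bl), (ar, br) = ranges[i], ranges[i + 1]
        if al <= br and ar <= bl:              # ranges overlap: coefficient is 0
            coeffs.append(0)
            lows.append((max(al, ar), min(bl, br)))
        else:                                  # disjoint: retain the coefficient
            s = ((al + bl) / 2 - (ar + br) / 2) / 2
            coeffs.append(s)
            lows.append((max(al - s, ar + s), min(bl - s, br + s)))
    return lows, coeffs

data, delta = [7, 6, 1, 8, 5, 4, 2, 10], 2
ranges = [(d - delta, d + delta) for d in data]   # each leaf absorbs the bound
lows, coeffs = one_step_fshift(ranges)
```

Running this on the example data [7,6,1,8,5,4,2,10] with ∆ = 2 reproduces the first-level ranges {[5,8], [2.5,6.5], [3,6], [4,8]} and the shift coefficients {0, −3.5, 0, −4} from Figure 1.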

Two Dimensional F-Shift Transformation (TDFS)
The core of the TDFS [27] is to alternately perform one-step F-shift transformations in each dimension; each one-step transformation yields a low-frequency and a high-frequency component. Specifically, the TDFS first performs a one-step F-shift transformation on each row. After that, we get the low-frequency component, which is composed of the updated data ranges. Similarly, the same transformation is then performed on each column of the updated low-frequency component. The above steps of performing a one-step F-shift transformation on the rows and columns, respectively, are called the first-level TDFS. Iteratively, the second-level TDFS, third-level TDFS, and so on, can be obtained until only one data range remains. Finally, we usually take the average of the last data range as the approximate value. Figure 2 is an example of the TDFS. For the resolution of a 4 × 4 image, we need to perform 2 levels of transformation to complete the TDFS. Figure 2a is the original data array, and ∆ = 2. Figure 2b,c are the results of the first-level TDFS and the second-level TDFS, respectively. The shaded parts of the figures show the updated low-frequency component. Figure 2d is the final compression result. The shaded part is the approximate value.
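The first-level TDFS described above (rows first, then columns of the updated low-frequency component) can be sketched as follows. The 4 × 4 data grid here is hypothetical, not the one in Figure 2, and the range-merging rule assumes the same widened-range formulation as before:

```python
def merge(a, b):
    """Merge two sibling ranges with the one-step F-shift rule."""
    (al, bl), (ar, br) = a, b
    if al <= br and ar <= bl:                  # overlap: coefficient is 0
        return (max(al, ar), min(bl, br)), 0
    s = ((al + bl) - (ar + br)) / 4            # half the midpoint difference
    return (max(al - s, ar + s), min(bl - s, br + s)), s

def tdfs_level(grid):
    """One TDFS level on a grid of ranges: rows first, then columns."""
    coeffs, rows = [], []
    for row in grid:                           # one-step F-shift on each row
        new = []
        for c in range(0, len(row), 2):
            rng, s = merge(row[c], row[c + 1])
            new.append(rng)
            coeffs.append(s)
        rows.append(new)
    low = []
    for r in range(0, len(rows), 2):           # then on each column
        new = []
        for c in range(len(rows[0])):
            rng, s = merge(rows[r][c], rows[r + 1][c])
            new.append(rng)
            coeffs.append(s)
        low.append(new)
    return low, coeffs

delta = 2
grid = [[(v - delta, v + delta) for v in row]
        for row in [[7, 6, 5, 4], [6, 7, 4, 5], [1, 2, 9, 10], [2, 1, 10, 9]]]
low, coeffs = tdfs_level(grid)
```

For this particular grid every sibling pair overlaps after widening, so all first-level shift coefficients are zero and only the 2 × 2 low-frequency grid of ranges remains.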


Contrast-Limited Adaptive Histogram Equalization (CLAHE)
The image histogram can show the pixel distribution of an image. By redistributing the distribution of the image histogram, the image contrast can be changed. Histogram equalization is actually a mapping transformation of the gray level of the original image, which can enlarge the dynamic range of the pixel gray value. Thus, the image contrast can be enhanced.
Suppose an image has n pixels and m gray levels. Let p r (r) be the gray-level probability density function (PDF). Then, the probability of the kth gray level is [28]:
p r (r k) = n k / n,
where k = 0, 1, 2, . . . , m − 1, r k stands for the kth grayscale, and n k is the number of pixels with gray level r k. Then, the cumulative distribution function (CDF) T(r k) can be expressed as:
T(r k) = Σ_{j=0..k} p r (r j),
where 0 ≤ T(r k) ≤ 1.
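A minimal sketch of global histogram equalization built from these two formulas (the PDF p r (r k) = n k / n and its CDF), with hypothetical pixel values:

```python
from collections import Counter

def equalize(pixels, m=256):
    """Global histogram equalization: map each gray level through the CDF."""
    n = len(pixels)
    hist = Counter(pixels)                 # n_k for each gray level r_k
    cdf, acc = {}, 0.0
    for k in range(m):
        acc += hist[k] / n                 # accumulate p(r_k) = n_k / n
        cdf[k] = acc                       # 0 <= T(r_k) <= 1
    return [round((m - 1) * cdf[p]) for p in pixels]

pixels = [52, 55, 61, 59, 79, 61, 76, 61]  # hypothetical gray values
out = equalize(pixels)
```

The mapped values spread across the full [0, 255] range, which is exactly the contrast stretch that histogram equalization provides.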
Operating on the entire image as an image block is called global histogram equalization. Since the grayscale changes in different areas of a picture are different, if the global histogram is used, the local changes of the image will be ignored. The equalization of image regions is an adaptive histogram equalization (AHE) algorithm, which combines the advantages of global histogram equalization and considers the local contrast. This algorithm divides the image into several small regions first, then the CDF and image histogram are calculated from each small region, and finally, HE is performed on the pixels of each small region. Unfortunately, AHE tends to over-amplify noise in the relatively uniform regions of the image. To solve this problem, the CLAHE method was proposed. It has two main characteristics: on one hand, it is a method to limit the distribution of histograms to prevent excessive enhancement of noise points, while on the other hand, it uses interpolation to accelerate the histogram equalization. The steps of CLAHE [13] are: (1) Split the input image into continuous and non-overlapping regions. The region size is generally set to 8 × 8.
(2) Get the histogram of each region and use the threshold to clip the histogram. The CLAHE algorithm achieves the goal of limiting the magnification by clipping the histogram with a pre-defined threshold before calculating the CDF. This also limits the slope of the transformation function.
(3) Reallocate the pixel values, and distribute the clipped pixel values evenly below the histogram. Figure 3a,b are the histogram distributions before and after clipping, respectively.
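Steps (2) and (3), clipping each histogram bin at a threshold and spreading the excess evenly, can be sketched as follows. The bin counts and clip limit are hypothetical; note that, as in simple CLAHE implementations, the even redistribution may push some bins slightly above the limit again:

```python
def clip_histogram(hist, clip_limit):
    """Clip each bin at clip_limit and redistribute the excess evenly.

    The total pixel count is preserved, which keeps the clipped histogram
    a valid (unnormalized) distribution for the subsequent CDF.
    """
    excess = sum(max(h - clip_limit, 0) for h in hist)
    clipped = [min(h, clip_limit) for h in hist]
    bonus = excess / len(hist)             # even share of the clipped mass
    return [h + bonus for h in clipped]

hist = [0, 10, 50, 3, 1]                   # hypothetical bin counts
out = clip_histogram(hist, 20)
```

Clipping the histogram before computing the CDF bounds the slope of the gray-level mapping, which is precisely how CLAHE limits noise amplification.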

(4) Perform a local histogram equalization on all regions.
(5) Use a linear interpolation for pixel value reconstruction. Suppose the gray value of the sample point P is s, and the new gray value after a linear interpolation is s′. The sample points of its surrounding regions are P 1, P 2, P 3, and P 4, and the gray-level mappings of s are g P1 (s), g P2 (s), g P3 (s), and g P4 (s), respectively. For the pixels in the corners, the new gray value is equal to the gray-level mapping of s of this region, for example:
s′ = g P1 (s).
For the pixels on the edges, the new gray value is the interpolation of the gray-level mappings of s of the two samples of the surrounding regions, for example:
s′ = (1 − α) g P1 (s) + α g P2 (s).
For the pixels in the center of the image, the new gray value is the interpolation of the gray-level mappings of s of the four samples of the surrounding regions, for example:
s′ = (1 − β)[(1 − α) g P1 (s) + α g P2 (s)] + β[(1 − α) g P3 (s) + α g P4 (s)],
where α and β are the normalized distances with respect to the point P 1.
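For the interior-pixel case, the bilinear blend of the four tile mappings can be sketched as below. The orientation convention (P 1 top-left, α horizontal, β vertical) is our assumption for illustration:

```python
def interp_center(s, g_p1, g_p2, g_p3, g_p4, alpha, beta):
    """New gray value for an interior pixel: bilinear blend of the four
    surrounding tile mappings, with (alpha, beta) the normalized distances
    from sample point P1."""
    top = (1 - alpha) * g_p1(s) + alpha * g_p2(s)
    bottom = (1 - alpha) * g_p3(s) + alpha * g_p4(s)
    return (1 - beta) * top + beta * bottom

# Hypothetical mappings: left tiles map to 0, right tiles map to 100.
val = interp_center(10, lambda s: 0, lambda s: 100,
                    lambda s: 0, lambda s: 100, 0.25, 0.5)
```

When all four mappings agree, the blend reduces to that common mapping, so the interpolation only smooths the transitions between tiles rather than altering uniform regions.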

Proposed Method
Our smart image enhancement is performed in the process of image decompression. Therefore, image compression is a pre-required phase. The image compression method uses the TDFS [27], which is the two-dimensional F-shift transformation. In order to obtain a compromise between the compression effect and the image enhancement effect, we only performed the first-level TDFS on the image. The later compression uses a non-standard two-dimensional wavelet transform (NSTW) [4]. Figure 4 shows the image compression process we performed before the image enhancement. In this example, the error bound ∆ was still set to 2. Figure 4a shows the original image pixels with a resolution of 4 × 4. Figure 4b is the result obtained after performing the first-level TDFS on the original data. Figure 4c shows the approximation values of the low-frequency component shown in Figure 4b. Here, the approximate value of each point was the average of the corresponding data range, which is shown in the shaded part of Figure 4c. Figure 4d is the result after performing NSTW on the approximate part of Figure 4c. It should be noted that when we performed the NSTW, the detail coefficients were obtained by dividing by 2, instead of dividing by √2. This was because we wanted to get coefficient values of the same order as the TDFS coefficients.
After obtaining the compressed data, the steps of our enhancement method were as follows:
Step 1: Adjust the first coefficient s 00 of the synopsis S using the adaptive coefficient adjustment formula. Note that, as shown in Figure 4d, the wavelet synopsis S was the set of non-zero values and the first coefficient s 00 was 5.375.
Step 2: Incompletely decompress the synopsis S and enhance the low-frequency component. Note that here we decompressed the synopsis to the second-to-last level rather than the final level.
Step 3: Complete the decompression and further enhancement. In this step, the synopsis was decompressed to the final level. Figure 5 shows the flowchart of the proposed method.


Adaptive Coefficient Adjustment
During the decompression, the first step was to adjust the image brightness to a suitable brightness such that the overall brightness of the image was not too bright or too dark. In our previous work [29], as well as in our current work, we carried out experiments on medical images and ordinary images.
We first evaluated 120 images subjectively and divided them into three types: underexposed images, moderately exposed images, and overexposed images. Through the experiment, we found that with cut-off values of 90 and 150, the classification accuracy was higher than with cut-off values of 85 and 170 (that is, dividing 0 to 255 equally into three bins). Therefore, based on this experimental experience, we chose 90 and 150 as the cut-off points for an underexposed image and an overexposed image.
According to the properties of the previous compression scheme, the first coefficient of the wavelet synopsis essentially represents the mean of the gray value of the original image. Therefore, it is a representation of the image brightness. In this way, the image brightness can be adjusted through the first coefficient s 00. The adaptive coefficient adjustment equation is:
s′ 00 = λ · s 00,
where λ is the adjustment factor and s′ 00 is the updated approximation coefficient.
Remark 1: If the value of s 00 is changed, the image gray value will be affected when decompressed, e.g., given an increment µ to s 00, then each pixel will be increased by µ.
From the data reconstruction Equation (6), we can see that s 00 has a global role in data reconstruction. The reconstruction of each pixel needs to add the value of s 00 . Therefore, if we increase s 00 by µ, then each pixel will be increased by µ.
Remark 2: The transformation of the image gray value caused by the change of s 00 does not change the high-frequency components.
The change of s 00 only affects the reconstructed value of the original data, and the coefficients of the internal nodes of the error tree will not be affected. Therefore, the transformation of the image gray value caused by the change of s 00 does not change the high-frequency components.
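Both remarks can be checked on the one-dimensional example of Figure 1. The sketch below reconstructs each leaf as s 0 plus the signed shift coefficients on its root-to-leaf path, then increases s 0 by µ = 3; this is an illustrative re-implementation, not the paper's code:

```python
def reconstruct(s0, coeffs, n):
    """Reconstruct n leaves from an F-shift error tree.

    coeffs maps a node's heap index j (1-based) to its shift coefficient;
    delta_ij is +1 when leaf i lies in the left subtree of node j, -1 otherwise.
    """
    out = []
    for i in range(n):
        j, value, lo, hi = 1, s0, 0, n
        while hi - lo > 1:                 # walk from the top node down to leaf i
            mid = (lo + hi) // 2
            if i < mid:
                value += coeffs.get(j, 0)  # left subtree: delta_ij = +1
                j, hi = 2 * j, mid
            else:
                value -= coeffs.get(j, 0)  # right subtree: delta_ij = -1
                j, lo = 2 * j + 1, mid
        out.append(value)
    return out

data = [7, 6, 1, 8, 5, 4, 2, 10]
synopsis = {5: -3.5, 7: -4}                # non-zero shift coefficients s5, s7
base = reconstruct(5.5, synopsis, 8)       # s00 = 5.5, as in Figure 1
shifted = reconstruct(5.5 + 3, synopsis, 8)  # increase s00 by mu = 3
```

Every reconstructed value stays within ∆ = 2 of the original data, every pixel rises by exactly µ = 3, and the shift coefficients themselves are untouched, confirming Remarks 1 and 2.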

Incomplete Decompression and Enhancing the Low-Frequency Component
Suppose the original image data size is n × n, where n = 2^m. Therefore, a complete compression of this image requires m levels of transformation. After adjusting the first coefficient s 00, we need to decompress the image. To alleviate the blocking effect, we chose to enhance the decompressed synopsis when decompressing to level m − 1 (the second-to-last level).
When the synopsis is decompressed to this level, the whole frame and main information of the image are contained in the low-frequency component. Then, CLAHE can be used to enhance the low-frequency component. This is the first time to use CLAHE to achieve the enhancement effect. Usually, the noise information exists in the high-frequency component; therefore, we kept the high-frequency component unchanged in this step to suppress the noise signal.

Complete Decompression and Further Enhancement
After the above steps, we will get the updated low-frequency component and the unchanged high-frequency component. At this point, only the last level of decompression is needed to get the decompressed image. After getting the reconstructed image, CLAHE is used again to enhance the image and make its details more abundant.

Impact of the Error Bound on the Enhancement and Compression Results
Figures 6-8 show the enhanced images under different error bounds with 150 < s 00 ≤ 255, 90 ≤ s 00 ≤ 150, and 0 ≤ s 00 < 90, respectively. Theoretically, as the error bound increases, the high-frequency component loses more and more detailed information. From the experimental results, we see that as the error bound increased, the brightness, contrast, and details of the image became weaker. For the images of Figures 6 and 7, different error bounds had little effect on the image quality, while for the image of Figure 8, the image quality varied considerably. It can be seen from the results that for images with lower brightness, the enhancement effect was greatly affected by the error bound. In order to obtain a higher quality image, the error bound needed to be smaller.
Figure 9 shows the trend of the compression effect and image quality under different error bounds for the images in Figures 6a, 7a, and 8a. The step of the error bound was 1. Here, the peak signal-to-noise ratio (PSNR) was used to evaluate the enhancement effect, and the data reduction rate was used to measure the compression effect. The reduction rate r is defined as:
r = (1 − synopses_size / original_data_size) × 100%,
where synopses_size represents the size of the synopsis S obtained using our method and original_data_size represents the size of the original image. We chose the enhanced image of ∆ = 0 as the reference image for calculating the PSNR. This parametric setting was selected because the loss of details is smallest when ∆ = 0. From Figure 9, we found that as the error bound increased, the PSNR of the image decreased and the reduction rate increased. We also found that images with a low brightness were more sensitive to the error bound. We found that a high compression effect and high image quality could not be achieved at the same time. Therefore, an appropriate error bound should be chosen to reach a compromise between the compression effect and image quality. In practical applications, this error bound is obtained through experience.
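The reduction rate can be computed directly from the two sizes; the numbers passed in below are hypothetical:

```python
def reduction_rate(synopses_size, original_data_size):
    """Data reduction rate r, in percent: the fraction of the original
    image that the synopsis eliminates."""
    return (1 - synopses_size / original_data_size) * 100

r = reduction_rate(25, 100)   # a synopsis a quarter the original size
```

A higher r means stronger compression, which, as Figure 9 shows, trades off against PSNR.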


Comparison of the Enhancement Effect of Different Methods
In this section, we compared our method with CLAHE [13] and CLAHE_DWT [25]. Figures 10-15 show the images enhanced with different methods at different brightness levels. More concretely, Figures 10 and 11 show the enhanced images with 150 < s 00 ≤ 255, Figures 12 and 13 show the enhanced images with 90 ≤ s 00 ≤ 150, and Figures 14 and 15 show the enhanced images with 0 ≤ s 00 < 90.

The error bound was set to 5, 5, 5, 5, 3, and 3 for Figures 10-15, respectively. According to Figures 10-15, we observed that the overall brightness of these images was greatly improved. It can be seen from the results that the image visual effects obtained using our method were better than those of CLAHE [13] and CLAHE_DWT [25], and the better visual effect was reflected not only in the image contrast but also in the image details.
Generally, there are two types of image quality evaluation methods: one is the full-reference image quality evaluation (e.g., PSNR, structural similarity index measurement (SSIM)), and the other is the non-reference image quality evaluation (e.g., mean, standard deviation (SD), entropy, average gradient (AG)). Because there was no ideal reference image, the non-reference image quality evaluation was more suitable for the comparison of image enhancement effects. Table 2 shows the mean [30,31], SD [30,32,33], entropy [31,32,34], and AG [31,32] of the images obtained using the above methods, which represent the average brightness, contrast, detail richness, and clarity of the image, respectively. The above evaluation parameters can be obtained using the following formula: where Mean is the mean of the image. m is the number of rows and n is the number of columns. d ij is the gray value of each pixel. SD is the standard deviation of the image. Entropy is the entropy of the image. p(k) stands for the kth gray level probability. ∇ i and ∇ j stand for the gradient of the horizontal and vertical direction, where ∇ i = d ij − d (i−1) j and ∇ j = d ij − d i( j−1) , and i and j are the row and the column numbers.
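As a sketch, the four non-reference metrics defined in the text can be computed with NumPy as follows (8-bit gray levels are assumed, and the gradient terms are taken only where both first differences exist):

```python
import numpy as np

def image_metrics(img):
    """Return (Mean, SD, Entropy, AG) for a grayscale image, per the text's definitions."""
    d = np.asarray(img, dtype=float)
    mean = d.mean()
    sd = d.std()
    # Entropy over the 8-bit gray-level probability distribution p(k)
    hist = np.bincount(np.clip(d, 0, 255).astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    # Average gradient from the first differences defined in the text
    gi = d[1:, 1:] - d[:-1, 1:]   # grad_i = d_ij - d_(i-1)j
    gj = d[1:, 1:] - d[1:, :-1]   # grad_j = d_ij - d_i(j-1)
    ag = np.sqrt((gi ** 2 + gj ** 2) / 2).mean()
    return mean, sd, entropy, ag
```

A uniform image gives SD, Entropy, and AG of zero, while a two-level checkerboard-like image has an entropy of exactly one bit, which is a quick sanity check of the implementation.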
In Table 2, it can be seen that the image brightness was adjusted to a suitable range by our method. We show the best results in bold, where the best value of Mean is the maximum value within the range [90, 150]. In most cases, the SD, entropy, and AG of our algorithm were greater than those of the other two algorithms and the original image, which was consistent with our subjective impressions. Therefore, both the subjective and objective results show that our algorithm can obtain better image contrast and more image details. In addition, our algorithm can also achieve data compression: for the images in Figures 10-15, the calculated reduction rates were 76.47%, 43.19%, 55.88%, 59.73%, 71.95%, and 75.03%, respectively.
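For reference, the reduction rates quoted above are consistent with the usual storage-saving definition, sketched below under the assumption (not stated in this section) that the rate is one minus the ratio of compressed size to original size:

```python
def reduction_rate(original_size, compressed_size):
    """Percentage of storage saved by compression (assumed definition)."""
    return (1.0 - compressed_size / original_size) * 100.0
```

For example, a synopsis keeping a quarter of the original data would yield a reduction rate of 75%.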

Method Validation
To further prove the effectiveness of our method, we propose several comparison schemes to verify it.

Scheme 1: This scheme can be simply described as s00 adjustment plus CLAHE for the low-frequency component. Specifically, during the decompression process, s00 is adjusted adaptively; then, the synopsis is decompressed to the second-to-last level. After performing CLAHE on the decompressed low-frequency component, the synopsis is decompressed completely and the final enhanced image is obtained.

Scheme 2: This scheme can be simply described as s00 adjustment plus CLAHE for the completely decompressed image. Specifically, during the decompression process, s00 is adjusted adaptively; then, the synopsis is decompressed completely and the final enhanced image is obtained by applying CLAHE to the decompressed image.

Scheme 3: This scheme can be simply described as two-stage CLAHE for the original image. Specifically, one CLAHE enhancement is applied after another CLAHE enhancement of the original image.

Scheme 4: This scheme can be simply described as s00 adjustment plus two-stage CLAHE for the low-frequency component. Specifically, during the decompression process, s00 is adjusted adaptively; then, the synopsis is decompressed to the second-to-last level. After that, a two-stage CLAHE enhancement is applied to the low-frequency component. Lastly, the enhanced image is obtained through the complete decompression.

Scheme 5: This scheme can be simply described as s00 adjustment plus two-stage CLAHE for the completely decompressed image. Specifically, during the decompression process, s00 is adjusted adaptively; then, the synopsis is decompressed completely. After that, a two-stage CLAHE enhancement is applied to the completely decompressed image. Finally, the enhanced image is obtained.

Our method: Our method can be simply described as s00 adjustment plus CLAHE for the low-frequency component plus CLAHE for the completely decompressed image.
That is, during the decompression process, s00 is adjusted adaptively; then, the synopsis is decompressed to the second-to-last level. After the CLAHE enhancement of the decompressed low-frequency component, the synopsis is decompressed completely. Finally, the CLAHE enhancement is applied again to the completely decompressed image to obtain the enhanced image. Figures 16-19 compare the above schemes with our method. From the experimental results of schemes 1 and 2, we found that, compared with applying CLAHE directly to the original image, the s00 adjustment and the CLAHE applied during decompression played positive roles in the image enhancement. From the results of schemes 3-5 and our method, we found that using CLAHE twice achieved a better enhancement effect than schemes 1 and 2 and plain CLAHE. Among the schemes using CLAHE twice, the image enhancement effect obtained using our method was the best. For scheme 4, using CLAHE twice to enhance the low-frequency component may weaken the effect of the high-frequency component, which makes the reconstructed image not clear enough. For scheme 5, using CLAHE twice on the decompressed image could not effectively exploit the advantages of separately processing the low-frequency and high-frequency components. This is because the low-frequency component usually has a very large information entropy, while the high-frequency component usually has a smaller information entropy; enhancing the low-frequency component either too much or too little reduces the overall enhancement effect. Table 3 shows the objective evaluation indicators of the comparison schemes, with the best results shown in bold. It can be seen from Table 3 that our method achieved the best results on more indicators than the other schemes. These objective indicators also conformed to our subjective impressions. Therefore, our method demonstrated its advantages both subjectively and objectively.
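The flow of our method can be sketched end to end. This is only an illustrative sketch, not the paper's implementation: the TDFS/NSTW transforms are replaced by a one-level 2D Haar transform, CLAHE is replaced by global histogram equalization, the mean of the low-frequency band stands in for the single coefficient s00, and the target mean of 120 (inside the [90, 150] range quoted earlier) is an assumed parameter:

```python
import numpy as np

def haar2d_forward(img):
    """One-level 2D Haar (average/half-difference) transform; stand-in for NSTW."""
    a = np.asarray(img, dtype=float)
    lo = (a[:, ::2] + a[:, 1::2]) / 2   # row-wise averages
    hi = (a[:, ::2] - a[:, 1::2]) / 2   # row-wise details
    ll = (lo[::2] + lo[1::2]) / 2       # low-frequency component
    lh = (lo[::2] - lo[1::2]) / 2
    hl = (hi[::2] + hi[1::2]) / 2
    hh = (hi[::2] - hi[1::2]) / 2
    return ll, (lh, hl, hh)

def haar2d_inverse(ll, details):
    """Exact inverse of haar2d_forward (the complete-decompression step)."""
    lh, hl, hh = details
    lo = np.empty((2 * ll.shape[0], ll.shape[1]))
    hi = np.empty_like(lo)
    lo[::2], lo[1::2] = ll + lh, ll - lh
    hi[::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((lo.shape[0], 2 * lo.shape[1]))
    out[:, ::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def equalize(img):
    """Global histogram equalization; a stand-in for CLAHE."""
    flat = np.clip(img, 0, 255).astype(np.uint8).ravel()
    cdf = np.bincount(flat, minlength=256).cumsum()
    return (255.0 * cdf / cdf[-1])[flat].reshape(np.asarray(img).shape)

def enhance_during_decompression(ll, details, target_mean=120.0):
    """s00 adjustment -> enhance low frequency only -> reconstruct -> enhance again."""
    s00 = ll.mean()                      # proxy for the first synopsis coefficient
    if s00 > 0:
        ll = ll * (target_mean / s00)    # adaptive global brightness adjustment
    ll = equalize(ll)                    # first enhancement: low-frequency band only
    img = haar2d_inverse(ll, details)    # complete the decompression
    return equalize(img)                 # second enhancement: reconstructed image
```

Round-tripping haar2d_forward and haar2d_inverse without enhancement reproduces the input exactly, which is why the high-frequency details remain untouched until the final reconstruction stage, mirroring the two-stage strategy described above.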

Conclusions
In this study, we developed a smart image enhancement method that operates during decompression and designed a variety of schemes to verify its effectiveness. We obtained the compressed data by applying the first-level TDFS and NSTW to the original image. During decompression, the overall brightness of the image was adaptively adjusted using the first coefficient s00. To improve the image contrast and enrich the image details, we applied the CLAHE enhancement to the low-frequency component and to the reconstructed image during decompression. The results showed that this enhancement strategy could effectively exploit the advantages of separately processing the low-frequency and high-frequency components and balance the enhancement effect. The experimental results showed that, compared with CLAHE [13] and CLAHE_DWT [25], our scheme could further improve the image contrast while making the image details more abundant. The method proposed in this paper further extends our previous work [26] and suggests a comprehensive model for smart image enhancement. This smart image enhancement method can be further applied to intelligent image processing applications, such as face recognition and smart system inspection. Our future work will focus on applying the above-mentioned smart image enhancement method to smart city surveillance and rapid responses to potential pandemic outbreaks.