Article

Smart Image Enhancement Using CLAHE Based on an F-Shift Transformation during Decompression

1 Department of Mathematics and Physics, Shijiazhuang Tiedao University, Shijiazhuang 050043, China
2 Institute of Applied Mathematics, Hebei Academy of Sciences, Shijiazhuang 050081, China
3 Hebei Authentication Technology Engineering Research Center, Shijiazhuang 050081, China
4 Department of Electrical and Electronic Engineering, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
5 The Center for Social Computing and Data Management (SCDM), Ningbo Institute of Technology (NIT), Zhejiang University, Ningbo 315100, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2020, 9(9), 1374; https://doi.org/10.3390/electronics9091374
Submission received: 22 July 2020 / Revised: 13 August 2020 / Accepted: 18 August 2020 / Published: 25 August 2020
(This article belongs to the Special Issue Smart Processing for Systems under Uncertainty or Perturbation)

Abstract

As image-processing technologies, image enhancement can provide more effective information for later data mining, while image compression can reduce storage space. In this paper, a smart enhancement scheme applied during decompression is proposed, which combines a novel two-dimensional F-shift (TDFS) transformation and a non-standard two-dimensional wavelet transform (NSTW). During decompression, the first coefficient $s_{00}$ of the wavelet synopsis is used to adaptively adjust the global gray level of the reconstructed image. Next, contrast-limited adaptive histogram equalization (CLAHE) is applied to achieve the enhancement effect. To avoid a blocking effect, CLAHE is applied when the synopsis has been decompressed to the second-to-last level; at this point, only the low-frequency component is enhanced and the high-frequency component is left unchanged. Lastly, CLAHE is applied again after image reconstruction. The effectiveness of the scheme was verified through experiments. Compared with existing methods, the compression properties were preserved while the image details and contrast were enhanced; the image contrast, information entropy, and average gradient were all greatly improved.

1. Introduction

For image data, enhancement can provide higher-level image features for later mining. In existing pipelines, valuable information is normally extracted only after decompression. However, for images of poor quality, the details and contrast of the decompressed images often do not meet the requirements. Therefore, it is important to enhance the image during decompression.
At present, most image compression methods are based on a transformation domain in which the low-frequency and high-frequency components of the image can be separated. The low-frequency component often contains the main information about the image, forming its basic gray levels, and thus reflects the overall appearance of the original image. The high-frequency component constitutes the edges and details of the image; noise, when present, is also reflected mainly in the high-frequency component. Existing methods [1,2] have demonstrated the benefits of processing the high-frequency and low-frequency components separately. The compression technology in this paper is based on an F-shift transform [3] and a Haar wavelet transform [4]; therefore, the high-frequency and low-frequency components can be processed separately during decompression while keeping the compression properties unchanged.
Image enhancement is used to highlight the characteristics of target information, minimize noise, improve the clarity of detail, and make an image more conducive to subsequent analysis and application. After many years of development, many image enhancement algorithms have been presented; although each has a certain processing effect, each also has shortcomings. Depending on whether the pixel values are processed directly, image enhancement methods can be roughly divided into two types: enhancement in the spatial domain and enhancement in the transform domain. The former processes pixel values directly and includes grayscale transformation [5], histogram processing [6,7,8], and Retinex methods [9,10]. Histogram transformation is one of the classical spatial methods; for images with an uneven grayscale distribution, it makes the gray distribution more uniform and more distinct. Through histogram equalization of local regions, adaptive histogram equalization (AHE) reduces the loss of detail [11,12]. However, AHE over-amplifies noise. To solve this problem, the contrast-limited adaptive histogram equalization (CLAHE) [13,14] method was proposed; unlike ordinary adaptive histogram equalization, CLAHE clips the histogram before equalizing the image.
A transform-domain enhancement algorithm first maps the image from the spatial domain into another domain, then operates on the coefficients according to the unique attributes of that domain, and finally converts the result back into the spatial domain. Current transform-domain enhancement methods include the discrete Fourier transform [15,16], the discrete cosine transform (DCT) [17], and the discrete wavelet transform [18,19]. The common feature of these transforms is that they separate the high-frequency component of the image signal from the low-frequency component; how to process each component has therefore become a key issue [20,21,22]. Usually, filters are used to process the different components. A low-pass filter attenuates the high-frequency component to denoise and smooth the image, but it also suppresses the image boundaries [23]. In contrast, a high-pass filter sharpens the boundaries but loses useful information contained in the low-frequency component [16]. It is thus difficult to achieve satisfactory results with a single enhancement method, and a spatial-domain method combined with a transform-domain method can complement each other [24,25].
Because the compressed image technology used in this paper is based on an F-shift transformation, it can realize the separation of high- and low-frequency components. On this basis, we present a new method for the decompression by utilizing the F-shift transformation and CLAHE. The proposed method is beneficial for managing high-frequency components and low-frequency components separately. At the same time, using a CLAHE enhancement at the appropriate time can balance the effect of low-frequency enhancement and high-frequency components on the overall image enhancement.
This study extended the conceptual model proposed in Fan et al. [26] by providing a concrete review of existing work and more accurate experimental results. In this study, we further developed the benchmarks for the analysis of the experimental results and extended the theoretical and methodological contents that were conceptualized in the previous work [26]. The rest of this article is organized as follows: Section 2 introduces the F-shift transformation, two-dimensional F-shift (TDFS) method, and CLAHE method related to this article. Section 3 gives the details of our method. Section 4 gives the experimental results and Section 5 summarizes the full paper.

2. Related Work

In this section, the F-shift transformation, TDFS, and CLAHE will be introduced systematically.

2.1. F-Shift Transformation

The F-shift transformation [3] can construct an error-bound wavelet synopsis, which guarantees that the absolute error of each reconstructed datum $\hat{d}$ is less than a given error bound $\Delta$, that is, $|\hat{d} - d| \le \Delta$, where $d$ is the original datum. The F-shift transformation can be regarded as an extension of the Haar wavelet transformation. A wavelet transformation uses a series of wavelets at different scales to decompose the original data; after the transformation, low-frequency and high-frequency components at the different wavelet scales are obtained. Unlike the Haar wavelet transformation, the F-shift transformation usually uses the data range, rather than the data itself, to determine which inner nodes and what values should be retained. In particular, the low-frequency component also consists of the data ranges at each scale.
Figure 1 is an example of an F-shift error tree $T$ for the original data set [7, 6, 1, 8, 5, 4, 2, 10] and an error bound $\Delta = 2$. For this eight-point data set, a wavelet synopsis with an approximation coefficient (low-frequency component) of $s_0 = 5.5$ and detail coefficients (high-frequency component) of $s_5 = -3.5$ and $s_7 = -4$ (excluding zero detail coefficients) is obtained after three levels of transformation.
The F-shift transformation can be explained using the F-shift error tree $T$. For the error tree shown in Figure 1, we need to calculate the shift coefficient of each node from the bottom up. First, for each leaf node $d_i$, we use a data range $[\underline{d}_i, \overline{d}_i]$ to substitute for the original datum. The F-shift transformation must guarantee $|d_i - \hat{d}_i| \le \Delta$ for each $i$, where $\hat{d}_i$ is the reconstructed value of $d_i$; therefore, $[\underline{d}_i, \overline{d}_i]$ can be expressed as $[d_i - \Delta, d_i + \Delta]$. Next, we need to determine which internal node coefficients must be retained. Let $innode_j$ be an inner node, and let $d_L$ and $d_R$ be the left and right leaf nodes of $innode_j$, with corresponding data ranges $[\underline{d}_L, \overline{d}_L]$ and $[\underline{d}_R, \overline{d}_R]$, respectively. The shift coefficient $s_j$ of $innode_j$ is obtained as follows:
Case 1: $[\underline{d}_L, \overline{d}_L] \cap [\underline{d}_R, \overline{d}_R] = \emptyset$. The values of the two leaf nodes differ substantially, and the shift coefficient $s_j$ of their parent node $innode_j$ must be retained:
$$s_j = \frac{(\underline{d}_L + \overline{d}_L) - (\underline{d}_R + \overline{d}_R)}{4}, \quad (0 < j < n),$$
where $n$ is the original data size. Then, the updated data range $[\underline{d}_j, \overline{d}_j]$ of $innode_j$ is stored temporarily. That is:
$$\underline{d}_j = \max\{\underline{d}_L - s_j,\ \underline{d}_R + s_j\},$$
$$\overline{d}_j = \min\{\overline{d}_L - s_j,\ \overline{d}_R + s_j\}.$$
Case 2: $[\underline{d}_L, \overline{d}_L] \cap [\underline{d}_R, \overline{d}_R] \ne \emptyset$. The difference between the two leaf nodes is small and within the given error bound; therefore, the shift coefficient $s_j$ of the parent node $innode_j$ does not need to be retained and can be set directly to 0. Thus, within a given error bound, the number of internal nodes can be reduced to achieve data compression. The updated data range $[\underline{d}_j, \overline{d}_j]$ of $innode_j$ is still stored temporarily. That is:
$$[\underline{d}_j, \overline{d}_j] = [\underline{d}_L, \overline{d}_L] \cap [\underline{d}_R, \overline{d}_R].$$
We call this procedure the one-step F-shift transformation. According to the above steps, the shift coefficient (high-frequency component) and the data range (low-frequency component) of each parent node can be calculated. The parent nodes of each layer of the error tree $T$ can then be regarded as leaf nodes, so the shift coefficients and data ranges of each layer can be calculated iteratively using the one-step F-shift transformation. Lastly, the value of the root node can be given by any value within its child node's data range; usually, we take the average of that range:
$$s_0 = \frac{\underline{d}_1 + \overline{d}_1}{2},$$
where $[\underline{d}_1, \overline{d}_1]$ is the final remaining data range.
As shown in Figure 1, a one-step F-shift transformation is performed on each leaf node to obtain a set of a low-frequency component and a high-frequency component: {[5,8],[2.5,6.5],[3,6],[4,8],0, −3.5,0,−4}. Next, the one-step F-shift transformation is repeated on the low-frequency component until one data range is left in the low-frequency component. Lastly, we usually select the average of the data range as the final approximation coefficient. Table 1 shows the shift coefficients of each level of the F-shift transformation.
After the coefficient value of each node has been calculated, a shift error tree is formed. The reconstructed value of each leaf node can be computed using:
$$\hat{d}_i(S) = \sum_{s_j \in path(d_i)} \delta_{ij}\, s_j,$$
where $\hat{d}_i(S)$ is the value reconstructed from the synopsis $S$, the set of shift coefficients. We set $\delta_{ij} = +1$ when $d_i$ is located in the left subtree of $s_j$ and $\delta_{ij} = -1$ when $d_i$ is located in the right subtree of $s_j$. $path(d_i)$ is the set of nodes located on the path from the root node to $d_i$ (not including $d_i$).
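As a concrete illustration, the two cases above can be sketched in Python (our own minimal sketch, not the authors' implementation); applied to the paper's example data [7, 6, 1, 8, 5, 4, 2, 10] with $\Delta = 2$, it reproduces the first-level ranges and shift coefficients listed below:

```python
def one_step_fshift(ranges):
    """One-step F-shift transformation (sketch).

    `ranges` is a list of (low, high) data ranges; returns the updated
    ranges (low-frequency part) and the shift coefficients (high-frequency part).
    """
    low, shifts = [], []
    for k in range(0, len(ranges), 2):
        (lo_l, hi_l), (lo_r, hi_r) = ranges[k], ranges[k + 1]
        if lo_l > hi_r or lo_r > hi_l:
            # Case 1: the two ranges are disjoint -- keep a non-zero coefficient
            s = ((lo_l + hi_l) - (lo_r + hi_r)) / 4.0
            low.append((max(lo_l - s, lo_r + s), min(hi_l - s, hi_r + s)))
        else:
            # Case 2: the ranges overlap -- zero coefficient, store the intersection
            s = 0.0
            low.append((max(lo_l, lo_r), min(hi_l, hi_r)))
        shifts.append(s)
    return low, shifts

data, delta = [7, 6, 1, 8, 5, 4, 2, 10], 2
low, shifts = one_step_fshift([(d - delta, d + delta) for d in data])
# low matches {[5,8], [2.5,6.5], [3,6], [4,8]} and shifts {0, -3.5, 0, -4}
# from the Figure 1 example
```

Repeating `one_step_fshift` on `low` until a single range remains, and averaging that range, yields the approximation coefficient $s_0$.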

2.2. Two Dimensional F-Shift Transformation (TDFS)

The core of the TDFS [27] is to alternately perform one-step F-shift transformations along each dimension. A one-step F-shift transformation yields the low-frequency and high-frequency components. Specifically, the TDFS first performs a one-step F-shift transformation on each row, which gives a low-frequency component composed of the updated data ranges. The same transformation is then performed on each column of this updated low-frequency component. These two passes, over the rows and then the columns, are together called the first-level TDFS. Iterating, the second-level TDFS, third-level TDFS, and so on, can be obtained until only one data range remains. Finally, we usually take the average of the last data range as the approximate value.
Figure 2 is an example of the TDFS. For the resolution of a 4 × 4 image, we need to perform 2 levels of transformation to complete the TDFS. Figure 2a is the original data array, and Δ = 2 . Figure 2b,c are the results of the first-level TDFS and the second-level TDFS, respectively. The shaded parts of the figures show the updated low-frequency component. Figure 2d is the final compression result. The shaded part is the approximate value.
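The row-then-column alternation of one TDFS level can be sketched as follows (our own illustration; the $4 \times 4$ block below is hypothetical data, not the values of Figure 2):

```python
def one_step_rows(rows):
    """Pairwise one-step F-shift along each row; returns (low, shifts)."""
    low, shifts = [], []
    for row in rows:
        lrow, srow = [], []
        for k in range(0, len(row), 2):
            (ll, hl), (lr, hr) = row[k], row[k + 1]
            if ll > hr or lr > hl:                        # disjoint: keep coefficient
                s = ((ll + hl) - (lr + hr)) / 4.0
                lrow.append((max(ll - s, lr + s), min(hl - s, hr + s)))
            else:                                         # overlap: zero coefficient
                s = 0.0
                lrow.append((max(ll, lr), min(hl, hr)))
            srow.append(s)
        low.append(lrow)
        shifts.append(srow)
    return low, shifts

def tdfs_level(ranges):
    """One TDFS level: one-step F-shift over the rows, then over the columns."""
    low, row_shifts = one_step_rows(ranges)
    low_t = [list(col) for col in zip(*low)]              # transpose to reuse the row pass
    low_t, col_shifts_t = one_step_rows(low_t)
    low = [list(r) for r in zip(*low_t)]                  # transpose back
    return low, row_shifts, [list(r) for r in zip(*col_shifts_t)]

delta = 2
block = [[7, 6, 1, 8], [5, 4, 2, 10], [3, 9, 6, 5], [8, 2, 4, 7]]  # hypothetical data
ranges = [[(v - delta, v + delta) for v in row] for row in block]
low, rshift, cshift = tdfs_level(ranges)   # `low` is the 2 x 2 low-frequency part
```

A $4 \times 4$ input thus shrinks to a $2 \times 2$ low-frequency part per level; repeating `tdfs_level` on `low` completes the transformation.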

2.3. Contrast-Limited Adaptive Histogram Equalization (CLAHE)

The image histogram can show the pixel distribution of an image. By redistributing the distribution of the image histogram, the image contrast can be changed. Histogram equalization is actually a mapping transformation of the gray level of the original image, which can enlarge the dynamic range of the pixel gray value. Thus, the image contrast can be enhanced.
Suppose an image has $m$ gray levels in the range $[0, m-1]$ and $n$ pixels in total. Let $p_r(r)$ be the gray-level probability density function (PDF). Then, the probability of the $k$th gray level is [28]:
$$p_r(r_k) = \frac{n_k}{n},$$
where $k = 0, 1, 2, \ldots, m-1$, $r_k$ stands for the $k$th gray level, and $n_k$ is the number of pixels with gray level $r_k$. The cumulative distribution function (CDF) $T(r_k)$ can then be expressed as:
$$g = T(r_k) = \sum_{j=0}^{k} p_r(r_j) = \sum_{j=0}^{k} \frac{n_j}{n},$$
where $0 \le g \le 1$.
Operating on the entire image as a single block is called global histogram equalization. Since the grayscale varies differently in different areas of a picture, a global histogram ignores local changes in the image. Adaptive histogram equalization (AHE) equalizes image regions instead: it retains the advantages of global histogram equalization while considering local contrast. The algorithm first divides the image into several small regions, then computes the histogram and CDF of each region, and finally performs histogram equalization on the pixels of each region. Unfortunately, AHE tends to over-amplify noise in the relatively uniform regions of the image. To solve this problem, the CLAHE method was proposed. It has two main characteristics: on the one hand, it limits the histogram distribution to prevent excessive enhancement of noise; on the other hand, it uses interpolation to accelerate the histogram equalization. The steps of CLAHE [13] are:
(1) Split the input image into continuous and non-overlapping regions. The region size is generally set to 8 × 8 .
(2) Get the histogram of each region and use the threshold to clip the histogram. The CLAHE algorithm achieves the goal of limiting the magnification by clipping the histogram with a pre-defined threshold before calculating the CDF. This also limits the slope of the transformation function.
(3) Reallocate the pixel values, and distribute the clipped pixel values evenly below the histogram. Figure 3a,b are the histogram distribution before and after clipping, respectively.
(4) Perform a local histogram equalization on all regions.
(5) Use linear interpolation for pixel-value reconstruction. Suppose the gray value of a sample point $P$ is $s$, and the new gray value after linear interpolation is $s'$. The sample points of its surrounding regions are $P_1$, $P_2$, $P_3$, and $P_4$, and the gray-level mappings of $s$ are $g_{P_1}(s)$, $g_{P_2}(s)$, $g_{P_3}(s)$, and $g_{P_4}(s)$, respectively. For the pixels in the corners, the new gray value equals the gray-level mapping of $s$ in that region. For example:
$$s' = g_{P_1}(s).$$
For the pixels on the edges, the new gray value is the interpolation of the gray-level mappings of $s$ from the two surrounding sample regions. For example:
$$s' = (1 - \alpha)\, g_{P_1}(s) + \alpha\, g_{P_2}(s).$$
For the pixels in the center of the image, the new gray value is the interpolation of the gray-level mappings of $s$ from the four surrounding sample regions. For example:
$$s' = (1 - \beta)\big((1 - \alpha)\, g_{P_1}(s) + \alpha\, g_{P_2}(s)\big) + \beta\big((1 - \alpha)\, g_{P_3}(s) + \alpha\, g_{P_4}(s)\big),$$
where $\alpha$ and $\beta$ are the normalized distances with respect to the point $P_1$.
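Steps (2)-(4), clipping the histogram, redistributing the excess, and equalizing, can be sketched for a single tile as follows (a simplified illustration; the `clip_limit` value and tile contents are our own assumptions, and the inter-tile interpolation of step (5) is omitted):

```python
import numpy as np

def clipped_equalize(tile, clip_limit=40, n_bins=256):
    """Histogram-clipped equalization of one image tile (sketch of steps 2-4)."""
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // n_bins   # redistribute clipped mass
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())        # normalize CDF to [0, 1]
    mapping = np.round(cdf * (n_bins - 1)).astype(np.uint8)  # gray-level mapping g(s)
    return mapping[tile]

tile = (np.arange(64).reshape(8, 8) % 48).astype(np.uint8)   # dark, low-contrast tile
out = clipped_equalize(tile)   # gray values stretched over the full [0, 255] range
```

Clipping the per-bin counts before building the CDF bounds the slope of the mapping, which is exactly what limits the contrast amplification.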

3. Proposed Method

Our smart image enhancement is performed during image decompression; image compression is therefore a prerequisite phase. The compression method uses the TDFS [27], the two-dimensional F-shift transformation. To obtain a compromise between the compression effect and the enhancement effect, we performed only the first-level TDFS on the image; the subsequent compression uses a non-standard two-dimensional wavelet transform (NSTW) [4].
Figure 4 shows the image compression process performed before image enhancement. In this example, the error bound $\Delta$ was again set to 2. Figure 4a shows the original image pixels with a resolution of $4 \times 4$. Figure 4b is the result of performing the first-level TDFS on the original data. Figure 4c shows the approximation values of the low-frequency component in Figure 4b; the approximate value of each point is the average of the corresponding data range, shown in the shaded part of Figure 4c. Figure 4d is the result of performing the NSTW on the approximate part of Figure 4c. Note that when performing the NSTW, the detail coefficients were obtained by dividing by 2 instead of by $\sqrt{2}$, so that the coefficient values are of the same order as the TDFS coefficients.
After obtaining the compressed data, the steps of our enhanced method were as follows:
Step 1: Adjust the first coefficient $s_{00}$ of the synopsis $S$ using the adaptive coefficient adjustment formula. Note that, as shown in Figure 4d, the wavelet synopsis $S$ is the set of non-zero values and the first coefficient $s_{00}$ is 5.375.
Step 2: Incompletely decompress the synopsis $S$ and enhance the low-frequency component. Here, the synopsis is decompressed to the second-to-last level rather than the final level.
Step 3: Complete the decompression and enhance further. In this step, the synopsis is decompressed to the final level.
Figure 5 shows the flowchart of the proposed method.

3.1. Adaptive Coefficient Adjustment

During decompression, the first step is to adjust the image brightness to a suitable level such that the overall image is neither too bright nor too dark. In our previous work [29], as well as in the current work, we carried out experiments on medical images and ordinary images. We first evaluated 120 images subjectively and divided them into three types: underexposed, moderately exposed, and overexposed. We found that cut-off values of 90 and 150 classified the images more accurately than cut-off values of 85 and 170 (i.e., dividing 0 to 255 equally into three bins). Therefore, based on this experimental experience, we chose 90 and 150 as the cut-off points for underexposed and overexposed images.
According to the properties of the compression scheme above, the first coefficient of the wavelet synopsis essentially represents the mean gray value of the original image and is therefore a proxy for the image brightness. In this way, the image brightness can be adjusted through the first coefficient $s_{00}$. The adaptive coefficient adjustment equation is:
$$s'_{00} = \lambda\, s_{00},$$
$$\lambda = \begin{cases} 1 + \dfrac{90 - s_{00}}{128}, & 0 \le s_{00} < 90 \\[4pt] 1, & 90 \le s_{00} \le 150 \\[4pt] 1 - \dfrac{s_{00} - 150}{128}, & 150 < s_{00} \le 255 \end{cases}$$
where $\lambda$ is the adjustment factor and $s'_{00}$ is the updated approximation coefficient.
Remark 1: If the value of $s_{00}$ is changed, the gray values of the decompressed image are affected accordingly; e.g., increasing $s_{00}$ by an increment $\mu$ increases each pixel by $\mu$.
From the data reconstruction Equation (6), we can see that $s_{00}$ plays a global role in data reconstruction: the reconstruction of each pixel adds the value of $s_{00}$. Therefore, if we increase $s_{00}$ by $\mu$, each pixel is increased by $\mu$.
Remark 2: The change in the image gray values caused by changing $s_{00}$ does not affect the high-frequency components.
A change in $s_{00}$ only affects the reconstructed values of the original data; the coefficients of the internal nodes of the error tree are not affected. Therefore, the transformation of the image gray values caused by changing $s_{00}$ leaves the high-frequency components unchanged.
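The piecewise adjustment above translates directly into code (a minimal sketch):

```python
def adjust_s00(s00):
    """Adaptively adjust the first coefficient s00 via the piecewise factor lambda."""
    if s00 < 90:            # underexposed: lambda > 1 brightens the image
        lam = 1 + (90 - s00) / 128.0
    elif s00 <= 150:        # moderately exposed: leave unchanged
        lam = 1.0
    else:                   # overexposed: lambda < 1 darkens the image
        lam = 1 - (s00 - 150) / 128.0
    return lam * s00

adjust_s00(60)    # 1.234375 * 60  = 74.0625  (brightened)
adjust_s00(120)   # unchanged: 120
adjust_s00(200)   # 0.609375 * 200 = 121.875  (darkened)
```

Note that the adjustment is continuous at the cut-offs 90 and 150, where $\lambda = 1$, so moderately exposed images pass through unchanged.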

3.2. Incomplete Decompression and Enhancing the Low-Frequency Component

Suppose the original image size is $n \times n$, where $n = 2^m$; a complete compression of this image then requires $m$ levels of transformation. After adjusting the first coefficient $s_{00}$, we need to decompress the image. To alleviate the blocking effect, we chose to enhance the decompressed synopsis when decompressing to level $m - 1$ (the second-to-last level).
When the synopsis has been decompressed to this level, the whole frame and main information of the image are contained in the low-frequency component, and CLAHE can be used to enhance it. This is the first application of CLAHE in our scheme. Since noise usually resides in the high-frequency component, we kept the high-frequency component unchanged in this step to suppress the noise signal.

3.3. Complete Decompression and Further Enhancement

After the above steps, we obtain the updated low-frequency component and the unchanged high-frequency component. At this point, only the last level of decompression is needed to obtain the decompressed image. After the image has been reconstructed, CLAHE is applied again to make its details more abundant.

4. Experimental Results

4.1. Impact of the Error Bound on the Enhancement and Compression Results

Figure 6, Figure 7 and Figure 8 show the enhanced images under different error bounds for $150 < s_{00} \le 255$, $90 \le s_{00} \le 150$, and $0 \le s_{00} < 90$, respectively. Theoretically, as the error bound increases, the high-frequency component loses more and more detail. The experimental results confirm that as the error bound increased, the brightness, contrast, and details of the image became weaker. For the images in Figure 6 and Figure 7, different error bounds had little effect on the image quality, while for the image in Figure 8, the image quality varied considerably with the error bound. Thus, for images with lower brightness, the enhancement effect is strongly affected by the error bound, and a smaller error bound is needed to obtain a higher-quality image.
Figure 9 shows the trend of the compression effect and image quality under different error bounds for the images in Figure 6a, Figure 7a, and Figure 8a. The step of the error bound was 1. Here, the peak signal-to-noise ratio (PSNR) was used to evaluate the enhancement effect, and the data reduction rate was used to measure the compression effect. The reduction rate $r$ is defined as:
$$r = \frac{original\_data\_size - synopsis\_size}{original\_data\_size},$$
where $synopsis\_size$ is the size of the synopsis $S$ obtained using our method and $original\_data\_size$ is the size of the original image. We chose the enhanced image at $\Delta = 0$ as the reference image for calculating the PSNR, since the loss of detail is smallest when $\Delta = 0$. From Figure 9, we see that as the error bound increased, the PSNR of the image decreased and the reduction rate increased, and that images with low brightness were more sensitive to the error bound. A high compression effect and high image quality could not be achieved at the same time; therefore, an appropriate error bound should be chosen as a compromise between the compression effect and the image quality. In practical applications, this error bound is obtained through experience.
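The two measures used in Figure 9 can be computed as follows (a sketch; the peak value of 255 assumes 8-bit grayscale images):

```python
import numpy as np

def reduction_rate(original_size, synopsis_size):
    """Fraction of the original data removed by compression."""
    return (original_size - synopsis_size) / original_size

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) of `test` against `reference`."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

reduction_rate(1024, 256)   # 0.75: three quarters of the data were removed
```

A higher reduction rate means a smaller synopsis, while a higher PSNR means the enhanced image is closer to the $\Delta = 0$ reference; the two pull in opposite directions as $\Delta$ grows.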

4.2. Comparison of the Enhancement Effect of Different Methods

In this section, we compare our method with CLAHE [13] and CLAHE_DWT [25]. Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15 show the images enhanced with the different methods at different brightness levels. More concretely, Figure 10 and Figure 11 show the enhanced images for $150 < s_{00} \le 255$, Figure 12 and Figure 13 for $90 \le s_{00} \le 150$, and Figure 14 and Figure 15 for $0 \le s_{00} < 90$.
The error bound was set to 5, 5, 5, 5, 3, and 3 for Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15, respectively. We observed that the overall brightness of these images was greatly improved. The visual effects obtained using our method were better than those of CLAHE [13] and CLAHE_DWT [25], in terms of both image contrast and image detail.
Generally, there are two types of image quality evaluation methods: full-reference evaluation (e.g., PSNR and the structural similarity index measure (SSIM)) and non-reference evaluation (e.g., mean, standard deviation (SD), entropy, and average gradient (AG)). Because there was no ideal reference image, non-reference evaluation was more suitable for comparing the enhancement effects. Table 2 shows the mean [30,31], SD [30,32,33], entropy [31,32,34], and AG [31,32] of the images obtained using the above methods, which represent the average brightness, contrast, detail richness, and clarity of the image, respectively. These evaluation parameters are computed as follows:
$$\mathrm{Mean} = \frac{1}{m \times n} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} d_{ij},$$
$$\mathrm{SD} = \sqrt{\frac{1}{m \times n - 1} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left( d_{ij} - \mathrm{Mean} \right)^2},$$
$$\mathrm{Entropy} = -\sum_{k=0}^{255} p(k) \log_2 p(k),$$
$$\mathrm{AG} = \frac{1}{m \times n} \sum_{i} \sum_{j} \sqrt{\frac{(\nabla_i)^2 + (\nabla_j)^2}{2}},$$
where $m$ is the number of rows, $n$ is the number of columns, and $d_{ij}$ is the gray value of each pixel. $p(k)$ stands for the probability of the $k$th gray level. $\nabla_i = d_{ij} - d_{(i-1)j}$ and $\nabla_j = d_{ij} - d_{i(j-1)}$ are the gradients in the vertical and horizontal directions, where $i$ and $j$ are the row and column numbers.
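The four indicators can be computed with a short routine (our own sketch; following the gradient definitions above, the AG is averaged over the valid gradient region, a common convention):

```python
import numpy as np

def quality_metrics(img):
    """Mean, SD, entropy, and average gradient (AG) of a grayscale image."""
    img = np.asarray(img, dtype=np.float64)
    mean = img.mean()
    sd = img.std(ddof=1)                       # 1/(m*n - 1) normalization
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                               # skip empty bins (0 * log 0 := 0)
    entropy = -(p * np.log2(p)).sum()
    gi = np.diff(img, axis=0)                  # vertical gradient   d_ij - d_(i-1)j
    gj = np.diff(img, axis=1)                  # horizontal gradient d_ij - d_i(j-1)
    ag = np.mean(np.sqrt((gi[:, 1:] ** 2 + gj[1:, :] ** 2) / 2.0))
    return mean, sd, entropy, ag
```

For a perfectly uniform image, SD, entropy, and AG are all zero; larger values indicate higher contrast, richer detail, and sharper edges, respectively.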
As Table 2 shows, our method adjusts the image brightness into a suitable range. The best results are shown in bold, and the best value of Mean is the maximum value within the range [90, 150]. In most cases, the SD, entropy, and AG of our algorithm are greater than those of the other two algorithms and the original image, which is consistent with our subjective impressions. Therefore, both subjective and objective results show that our algorithm achieves better image contrast and richer image detail. In addition, our algorithm also achieves data compression: for the images in Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15, the reduction rates were 76.47%, 43.19%, 55.88%, 59.73%, 71.95%, and 75.03%, respectively.

4.3. Method Validation

To further demonstrate the effectiveness of our method, we compare it against several alternative schemes.
Scheme 1: $s_{00}$ adjustment plus CLAHE on the low-frequency component.
Specifically, during decompression, $s_{00}$ is adjusted adaptively; then, the synopsis is decompressed to the second-to-last level. After CLAHE is applied to the decompressed low-frequency component, the synopsis is decompressed completely and the final enhanced image is obtained.
Scheme 2: $s_{00}$ adjustment plus CLAHE on the completely decompressed image.
Specifically, during decompression, $s_{00}$ is adjusted adaptively; then, the synopsis is decompressed completely and the final enhanced image is obtained by applying CLAHE to the decompressed image.
Scheme 3: two-stage CLAHE on the original image.
Specifically, one CLAHE enhancement is applied after another CLAHE enhancement of the original image.
Scheme 4: $s_{00}$ adjustment plus two-stage CLAHE on the low-frequency component.
Specifically, during decompression, $s_{00}$ is adjusted adaptively; then, the synopsis is decompressed to the second-to-last level. A two-stage CLAHE enhancement is applied to the low-frequency component, and the enhanced image is obtained after complete decompression.
Scheme 5: $s_{00}$ adjustment plus two-stage CLAHE on the completely decompressed image.
Specifically, during decompression, $s_{00}$ is adjusted adaptively; then, the synopsis is decompressed completely, and a two-stage CLAHE enhancement is applied to the decompressed image to obtain the enhanced image.
Our method: $s_{00}$ adjustment plus CLAHE on the low-frequency component plus CLAHE on the completely decompressed image.
That is, during decompression, $s_{00}$ is adjusted adaptively; then, the synopsis is decompressed to the second-to-last level. After the CLAHE enhancement of the decompressed low-frequency component, the synopsis is decompressed completely. Finally, CLAHE is applied again to the decompressed image to obtain the enhanced image.
Figure 16, Figure 17, Figure 18 and Figure 19 compare the above schemes with our method. From the results of schemes 1 and 2, we find that compared with applying CLAHE directly to the original image, the $s_{00}$ adjustment and CLAHE during decompression play positive roles in image enhancement. From the results of schemes 3-5 and our method, we find that applying CLAHE twice yields a better enhancement effect than schemes 1-2 and plain CLAHE. Among the schemes using CLAHE twice, our method achieved the best enhancement. For scheme 4, applying CLAHE twice to the low-frequency component may weaken the contribution of the high-frequency component, making the reconstructed image insufficiently clear. For scheme 5, applying CLAHE twice to the decompressed image cannot exploit the advantages of processing the low-frequency and high-frequency components separately: the low-frequency component usually has a much larger information entropy than the high-frequency component, and either excessive or insufficient enhancement of the low-frequency component reduces the enhancement effect. Table 3 shows the objective evaluation indicators for the comparison schemes, with the best results in bold. Our method achieves the best results on more indicators than the other schemes, and these objective indicators also conform to our subjective impressions. Therefore, our method demonstrates its advantages both subjectively and objectively.

5. Conclusions

In this study, we developed a smart image enhancement method that operates during decompression and designed a variety of schemes to verify its effectiveness. We obtained the compressed data by applying the first-level TDFS and the NSTW to the original image. During decompression, the overall brightness of the image was adjusted adaptively using the first coefficient s00. To improve the image contrast and enrich the image details, we applied the CLAHE enhancement to the low-frequency component and to the reconstructed image during decompression. The results showed that this strategy effectively exploits the advantages of processing the low-frequency and high-frequency components separately while balancing the enhancement effect. The experiments also showed that, compared with CLAHE [13] and CLAHE_DWT [25], our scheme further improves the image contrast while making the image details more abundant. The proposed method extends our previous work [26] into a comprehensive model for smart image enhancement. This method can be further applied to intelligent image processing applications, such as face recognition and smart system inspection. Our future work will focus on applying it to smart city surveillance and to rapid responses to potential pandemic outbreaks.

Author Contributions

Algorithm design, theoretical analysis, and writing the manuscript, R.F. and X.L.; algorithm analysis and optimization, S.L.; design and analysis of the experiment, T.L.; guidance of the theoretical analysis and algorithm optimization, H.L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 61572022), the Sciences and Technology Project of Hebei Academy of Sciences (grant numbers 19607 and 18607), the Zhejiang Natural Science Fund (grant number LY19F030010), the Zhejiang Philosophy and Social Sciences Fund (grant number 20NDJC216YB), the Ningbo Innovation Team (no. 2016C11024), the Ningbo Natural Science Fund (no. 2019A610083), the Ningbo Covid-19 and Education Special Fund (no. 2020YQZX137), and the Zhejiang Provincial Education and Science Scheme 2020 (post-COVID-19 fund). This research was also supported by the Centre for Smart Grid and Information Convergence (CeSGIC) at Xi'an Jiaotong-Liverpool University.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hsia, C.H.; Yang, J.H.; Chiang, J.S. Complexity reduction method for ultrasound imaging enhancement in tetrolet transform domain. J. Supercomput. 2020, 76, 1438–1449.
2. Xia, K.J.; Wang, J.Q.; Cai, J. A novel medical image enhancement algorithm based on improvement correction strategy in wavelet transform domain. Cluster Comput. 2019, 22, 10969–10977.
3. Pang, C.; Zhang, Q.; Zhou, X.; Hansen, D.; Wang, S.; Maeder, A. Computing unrestricted synopses under maximum error bound. Algorithmica 2013, 65, 1–42.
4. Zhang, Q.; Pang, C.; Hansen, D. On multidimensional wavelet synopses for maximum error bounds. In International Conference on Database Systems for Advanced Applications; Springer: Berlin/Heidelberg, Germany, 2009; pp. 646–661.
5. Rahman, S.; Rahman, M.M.; Abdullah-Al-Wadud, M.; Al-Quaderi, G.D.; Shoyaib, M. An adaptive gamma correction for image enhancement. EURASIP J. Image Video Process. 2016, 1, 1–13.
6. Lim, S.H.; Isa, N.A.M.; Ooi, C.H.; Vin Toh, K.K. A new histogram equalization method for digital image enhancement and brightness preservation. Signal Image Video Process. 2015, 9, 675–689.
7. Zhuang, L.; Guan, Y. Image enhancement via subimage histogram equalization based on mean and variance. Comput. Intell. Neurosci. 2017, 1–12.
8. Arora, S.; Agarwal, M.; Kumar, V.; Gupta, D. Comparative study of image enhancement techniques using histogram equalization on degraded images. Int. J. Eng. Technol. 2018, 7, 468–471.
9. Rahman, Z.U.; Jobson, D.J.; Woodell, G.A. Retinex processing for automatic image enhancement. J. Electron. Imaging 2004, 13, 100–110.
10. Lee, S. An efficient content-based image enhancement in the compressed domain using retinex theory. IEEE Trans. Circuits Syst. Video Technol. 2007, 17, 199–213.
11. Anand, S.; Gayathri, S. Mammogram image enhancement by two-stage adaptive histogram equalization. Optik 2015, 126, 3150–3152.
12. Sargun, S.; Rana, S.B. Performance evaluation of HE, AHE and fuzzy image enhancement. Int. J. Comput. Appl. 2015, 122, 14–19.
13. Zuiderveld, K. Contrast limited adaptive histogram equalization. Graph. Gems 1994, 474–485.
14. Singh, P.; Mukundan, R.; Ryke, R.D. Feature enhancement in medical ultrasound videos using contrast-limited adaptive histogram equalization. J. Digital Imaging 2019, 1–13.
15. Wang, J.W.; Le, N.T.; Lee, J.S.; Wang, C.C. Color face image enhancement using adaptive singular value decomposition in fourier domain for face recognition. Pattern Recognit. 2016, 57, 31–49.
16. Makandar, A.; Halalli, B. Image enhancement techniques using highpass and lowpass filters. Int. J. Comput. Appl. 2015, 109, 21–27.
17. Kuo, C.M.; Yang, N.C.; Liu, C.S.; Tseng, P.Y.; Chang, C.K. An effective and flexible image enhancement algorithm in compressed domain. Multimed. Tools Appl. 2016, 75, 1177–1200.
18. Sharma, A.; Khunteta, A. Satellite image enhancement using discrete wavelet transform, singular value decomposition and its noise performance analysis. In Proceedings of the 2016 International Conference on Micro-Electronics and Telecommunication Engineering (ICMETE), Ghaziabad, India, 22–23 September 2016; pp. 594–599.
19. Hsieh, C.T.; Lai, E.; Wang, Y.C. An effective algorithm for fingerprint image enhancement based on wavelet transform. Pattern Recognit. 2003, 36, 303–312.
20. Kim, S.; Kang, W.; Lee, E.; Paik, J. Wavelet-domain color image enhancement using filtered directional bases and frequency-adaptive shrinkage. IEEE Trans. Consum. Electron. 2010, 56, 1063–1070.
21. Uhring, W.; Jung, M.; Summ, P. Image processing provides low-frequency jitter correction for synchroscan streak camera temporal resolution enhancement. Opt. Metrol. Prod. Eng. 2004, 5457, 245–252.
22. Yang, J.; Wang, Y.; Xu, W.; Dai, Q. Image and video denoising using adaptive dual-tree discrete wavelet packets. IEEE Trans. Circuits Syst. Video Technol. 2009, 19, 642–655.
23. Shahane, P.R.; Mule, S.B.; Ganorkar, S.R. Color image enhancement using discrete wavelet transform. Digital Image Process. 2012, 4, 1–5.
24. Zhang, C.; Ma, L.N.; Jing, L.N. Mixed frequency domain and spatial of enhancement algorithm for infrared image. In Proceedings of the 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery, Sichuan, China, 29–31 May 2012; pp. 2706–2710.
25. Huang, L.; Zhao, W.; Wang, J.; Sun, Z. Combination of contrast limited adaptive histogram equalisation and discrete wavelet transform for image enhancement. IET Image Process. 2015, 9, 908–915.
26. Fan, R.; Li, X.; Zhao, H.; Zhang, H.; Pang, C.; Wang, J. Image enhancement method in decompression based on F-shift transformation. In Communications in Computer and Information Science, Proceedings of the 6th International Conference, ICDS 2019, Ningbo, China, 15–20 May 2019; Springer: Berlin/Heidelberg, Germany, 2020; Volume 1179, pp. 232–241.
27. Li, X.; Fan, R.; Zhang, H.; Li, T.; Pang, C. Two-dimensional wavelet synopses with maximum error bound and its application in parallel compression. J. Intell. Fuzzy Syst. 2019, 37, 3499–3511.
28. Abdullah-Al-Wadud, M.; Kabir, M.H.; Dewan, M.A.A.; Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600.
29. Li, X.; Li, T.; Zhao, H.; Dou, Y.; Pang, C. Medical image enhancement in F-shift transformation domain. Health Inf. Sci. Syst. 2019, 7, 1–8.
30. Lee, H.S.; Moon, S.W.; Eom, I.K. Underwater image enhancement using successive color correction and superpixel dark channel prior. Symmetry 2020, 12, 1220.
31. Li, B.; Xie, W. Image denoising and enhancement based on adaptive fractional calculus of small probability strategy. Neurocomputing 2016, 175, 704–714.
32. Qu, Z.; Xing, Y.; Song, Y. Image enhancement based on pulse coupled neural network in the nonsubsample shearlet transform domain. Math. Prob. Eng. 2019, 2019, 1–11.
33. Chang, D.C.; Wu, W.R. Image contrast enhancement based on a histogram transformation of local standard deviation. IEEE Trans. Med. Imaging 1998, 17, 518–531.
34. Zhuang, L.; Guan, Y. Adaptive image enhancement using entropy-based subhistogram equalization. Comput. Intell. Neurosci. 2018.
Figure 1. F-shift error tree T with error bound Δ = 2.
Figure 2. An example of a two–dimensional F-shift transformation (TDFS): (a) original data array, (b) first-level TDFS, (c) second-level TDFS, and (d) computing the approximation.
Figure 3. Histogram distribution: (a) before clipping and (b) after clipping.
Figure 4. The compression process before the enhancement: (a) original data array, (b) first-level TDFS, (c) compute the approximation, and (d) non-standard two-dimensional wavelet transform (NSTW) for the approximation.
Figure 5. The steps of our enhancement method. CLAHE: Contrast-limited adaptive histogram equalization.
Figure 6. The enhanced image under different error bounds: (a) original image, (b) Δ = 0, (c) Δ = 2, (d) Δ = 5, and (e) Δ = 8.
Figure 7. The enhanced image under different error bounds: (a) original image, (b) Δ = 0, (c) Δ = 2, (d) Δ = 5, and (e) Δ = 8.
Figure 8. The enhanced image under different error bounds: (a) original image, (b) Δ = 0, (c) Δ = 2, (d) Δ = 5, and (e) Δ = 8.
Figure 9. The trend of the image compression effect and image quality under different error bounds. (a) the result of Figure 6a, (b) the result of Figure 7a, and (c) the result of Figure 8a. PSNR: Peak signal-to-noise ratio.
Figure 10. Comparison of the images enhanced using different methods: (a) original image, (b) CLAHE [13], (c) CLAHE_DWT [25], and (d) our method.
Figure 11. Comparison of the images enhanced using different methods: (a) original image, (b) CLAHE [13], (c) CLAHE_DWT [25], and (d) our method.
Figure 12. Comparison of the images enhanced using different methods: (a) original image, (b) CLAHE [13], (c) CLAHE_DWT [25], and (d) our method.
Figure 13. Comparison of the images enhanced using different methods: (a) original image, (b) CLAHE [13], (c) CLAHE_DWT [25], and (d) our method.
Figure 14. Comparison of the images enhanced using different methods: (a) original image, (b) CLAHE [13], (c) CLAHE_DWT [25], and (d) our method.
Figure 15. Comparison of the images enhanced using different methods: (a) original image, (b) CLAHE [13], (c) CLAHE_DWT [25], and (d) our method.
Figure 16. Comparison results of different schemes: (a) original image, (b) CLAHE, (c) scheme 1, (d) scheme 2, (e) scheme 3, (f) scheme 4, (g) scheme 5, and (h) our method.
Figure 17. Comparison results of different schemes: (a) original image, (b) CLAHE, (c) scheme 1, (d) scheme 2, (e) scheme 3, (f) scheme 4, (g) scheme 5, and (h) our method.
Figure 18. Comparison results of different schemes: (a) original image, (b) CLAHE, (c) scheme 1, (d) scheme 2, (e) scheme 3, (f) scheme 4, (g) scheme 5, and (h) our method.
Figure 19. Comparison results of different schemes: (a) original image, (b) CLAHE, (c) scheme 1, (d) scheme 2, (e) scheme 3, (f) scheme 4, (g) scheme 5, and (h) our method.
Table 1. F-shift decomposition of a four-pixel image.

Resolution | Low-Frequency Component | High-Frequency Component
8 | {[5,9], [4,8], [−1,3], [6,10], [3,7], [2,6], [0,4], [8,12]} | —
4 | {[5,8], [2.5,6.5], [3,6], [4,8]} | {0, 3.5, 0, −4}
2 | {[5,6.5], [4,6]} | {0, 0}
1 | {[5,6]} | {0}
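One decomposition level of the interval-based F-shift can be sketched as below. The convention is inferred from Table 1 and is an assumption, not the paper's exact definition: intersecting interval pairs merge to their intersection with a zero detail coefficient, while disjoint pairs are replaced by the averaged midpoint ± Δ plus a half-difference detail coefficient (the sign of that coefficient, in particular, is assumed here).

```python
def fshift_level(intervals, delta):
    """One F-shift decomposition level over a list of (lo, hi) intervals.

    Convention inferred from Table 1 (the detail-coefficient sign is assumed):
    - intersecting pairs merge to their intersection, detail coefficient 0;
    - disjoint pairs average their midpoints m = (m1 + m2) / 2 and keep the
      half-difference (m1 - m2) / 2 as the detail coefficient.
    """
    low, high = [], []
    for (a1, b1), (a2, b2) in zip(intervals[::2], intervals[1::2]):
        lo, hi = max(a1, a2), min(b1, b2)
        if lo <= hi:                       # overlap: one interval covers both
            low.append((lo, hi))
            high.append(0.0)
        else:                              # disjoint: average and keep detail
            m1, m2 = (a1 + b1) / 2, (a2 + b2) / 2
            m = (m1 + m2) / 2
            low.append((m - delta, m + delta))
            high.append((m1 - m2) / 2)
    return low, high
```

With Δ = 2 and the interval midpoints 7, 6, 1, 8, 5, 4, 2, 10 from Table 1, this reproduces the resolution-4 low-frequency intervals, and every original value is recovered to within Δ from the interval midpoint plus or minus the detail coefficient.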
Table 2. Comparison of the evaluation parameters of different methods.

Images | Methods | Mean | SD | Entropy | AG
Figure 10 | Original | 175.33 | 22.12 | 4.00 | 2.32
 | CLAHE [13] | 177.31 | 24.00 | 5.73 | 3.66
 | CLAHE_DWT [25] | 174.04 | 25.08 | 5.74 | 3.36
 | Our method | 145.53 | 38.75 | 7.23 | 10.42
Figure 11 | Original | 172.16 | 55.83 | 6.98 | 15.25
 | CLAHE [13] | 156.15 | 67.02 | 7.71 | 21.90
 | CLAHE_DWT [25] | 161.73 | 69.02 | 7.32 | 18.99
 | Our method | 134.30 | 71.25 | 7.95 | 22.90
Figure 12 | Original | 132.38 | 27.06 | 5.50 | 5.00
 | CLAHE [13] | 145.48 | 39.08 | 7.16 | 10.15
 | CLAHE_DWT [25] | 165.83 | 49.75 | 7.53 | 12.25
 | Our method | 131.32 | 70.14 | 7.98 | 20.10
Figure 13 | Original | 95.39 | 53.78 | 6.12 | 7.01
 | CLAHE [13] | 107.27 | 59.31 | 7.76 | 9.79
 | CLAHE_DWT [25] | 127.63 | 67.71 | 7.95 | 12.38
 | Our method | 125.44 | 71.51 | 7.99 | 14.92
Figure 14 | Original | 32.52 | 25.11 | 5.43 | 2.55
 | CLAHE [13] | 55.16 | 39.74 | 6.52 | 7.12
 | CLAHE_DWT [25] | 94.57 | 53.48 | 7.02 | 8.68
 | Our method | 93.92 | 48.86 | 7.48 | 12.95
Figure 15 | Original | 20.12 | 19.68 | 5.63 | 1.67
 | CLAHE [13] | 51.67 | 42.00 | 6.95 | 4.40
 | CLAHE_DWT [25] | 75.26 | 55.24 | 7.39 | 5.61
 | Our method | 92.60 | 61.72 | 7.72 | 7.90
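The four indicators reported in Table 2 and Table 3 can be computed as in the sketch below (our own illustration): mean, standard deviation, Shannon entropy, and average gradient (AG). The exact AG formula varies across papers, so the root-mean-square-of-forward-differences form used here is an assumption.

```python
import numpy as np

def evaluation_metrics(img):
    """Mean, standard deviation, Shannon entropy (bits), and average gradient
    of an 8-bit grayscale image. The AG variant here is the RMS of forward
    differences; definitions differ slightly across papers."""
    x = img.astype(np.float64)
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    dx = np.diff(x, axis=1)[:-1, :]          # horizontal differences
    dy = np.diff(x, axis=0)[:, :-1]          # vertical differences, same shape
    ag = float(np.sqrt((dx ** 2 + dy ** 2) / 2.0).mean())
    return {"mean": float(x.mean()), "sd": float(x.std()),
            "entropy": entropy, "ag": ag}
```

The four returned values correspond, respectively, to the Mean, SD, Entropy, and AG columns of the tables.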
Table 3. Comparison of evaluation parameters of different methods.

Images | Methods | Mean | SD | Entropy | AG
Figure 16 | Original | 175.33 | 22.12 | 4.00 | 2.32
 | CLAHE [13] | 177.31 | 24.00 | 5.73 | 3.66
 | Scheme 1 | 146.86 | 25.65 | 6.18 | 4.44
 | Scheme 2 | 147.15 | 22.85 | 5.81 | 3.68
 | Scheme 3 | 167.66 | 32.51 | 6.72 | 8.64
 | Scheme 4 | 145.00 | 40.17 | 7.29 | 10.84
 | Scheme 5 | 145.73 | 31.82 | 6.79 | 8.92
 | Our method | 141.50 | 40.67 | 7.34 | 11.60
Figure 17 | Original | 132.38 | 27.06 | 5.50 | 5.00
 | CLAHE [13] | 145.48 | 39.08 | 7.16 | 10.15
 | Scheme 1 | 143.29 | 45.90 | 7.48 | 11.05
 | Scheme 2 | 140.98 | 46.34 | 7.51 | 13.70
 | Scheme 3 | 136.79 | 63.18 | 7.91 | 20.96
 | Scheme 4 | 132.65 | 69.80 | 7.97 | 19.77
 | Scheme 5 | 130.49 | 70.87 | 7.98 | 24.66
 | Our method | 130.35 | 70.88 | 7.99 | 20.52
Figure 18 | Original | 20.12 | 19.68 | 5.63 | 1.65
 | CLAHE [13] | 51.67 | 42.00 | 6.95 | 4.39
 | Scheme 1 | 57.31 | 40.57 | 6.91 | 4.09
 | Scheme 2 | 57.70 | 40.72 | 6.92 | 4.24
 | Scheme 3 | 91.16 | 62.22 | 7.74 | 7.98
 | Scheme 4 | 94.04 | 61.13 | 7.73 | 7.94
 | Scheme 5 | 94.10 | 61.05 | 7.73 | 8.17
 | Our method | 93.67 | 60.88 | 7.74 | 8.32
Figure 19 | Original | 14.07 | 39.75 | 4.13 | 2.28
 | CLAHE [13] | 29.25 | 47.36 | 5.20 | 4.43
 | Scheme 1 | 33.52 | 47.21 | 4.91 | 3.90
 | Scheme 2 | 33.85 | 46.75 | 5.13 | 4.15
 | Scheme 3 | 45.20 | 55.36 | 6.24 | 5.83
 | Scheme 4 | 47.63 | 56.94 | 6.12 | 6.00
 | Scheme 5 | 48.04 | 55.58 | 6.29 | 6.30
 | Our method | 48.24 | 57.09 | 6.35 | 6.10

Share and Cite

Fan, R.; Li, X.; Lee, S.; Li, T.; Zhang, H.L. Smart Image Enhancement Using CLAHE Based on an F-Shift Transformation during Decompression. Electronics 2020, 9, 1374. https://doi.org/10.3390/electronics9091374
