Low-Light Image Enhancement Based on Quasi-Symmetric Correction Functions by Fusion

Abstract: Sometimes it is very difficult to obtain high-quality images because of the limitations of image-capturing devices and the environment. Gamma correction (GC) is widely used for image enhancement. However, traditional GC may fail to preserve image details and can even reduce local contrast within high-illuminance regions. Therefore, we first define two couples of quasi-symmetric correction functions (QCFs) to solve these problems. Moreover, we propose a novel low-light image enhancement method based on the proposed QCFs by fusion, which combines a globally-enhanced image produced by the QCFs and a locally-enhanced image produced by contrast-limited adaptive histogram equalization (CLAHE). Extensive experimental results show that our method significantly enhances detail and improves the contrast of low-light images, and that it performs better than other state-of-the-art methods in both subjective and objective assessments.


Introduction
High-quality images are needed in many multimedia and computer vision applications [1,2]. However, due to the limitations of image acquisition technology, the imaging environment, and other factors, it is sometimes very difficult to obtain high-quality images, and capturing low-illuminance images is often unavoidable. Images taken under extreme weather conditions, underwater, or at night often have low visibility and blurred details, and their quality is greatly degraded. Therefore, it is necessary to enhance low-light images in order to satisfy our demands. Researchers from academia and industry have been working on various image-processing technologies, and many enhancement methods have been proposed for degraded images encountered in daily life, such as underwater images [3], foggy images [4], and nighttime images [5]. In modern industrial production, these techniques have also found more and more applications. In [6], a novel optical defect detection and classification system is proposed for real-time inspection. In [7], a novel defect extraction and classification scheme for mobile phone screens, based on machine vision, is proposed. Another interesting application is the surface quality inspection of transparent parts [8].
Existing approaches generally fall into two major categories: global enhancement methods and local enhancement methods. Global enhancement methods process all image pixels in the same way, regardless of their spatial distribution. Global methods based on the logarithm [9], power-law [10], or gamma correction [11] are commonly used for low-quality images; however, they sometimes cannot produce the desired results. Histogram equalization (HE) [12] improves the contrast of the input image by adjusting pixel values according to the cumulative distribution function, but it may lead to over-enhancement in some regions with low contrast. To overcome this defect, an enhancement method using the layered difference representation (LDR) of 2D histograms is proposed in [13], but the enhanced image looks a bit dark. Traditional gamma correction (GC) [14] maps all pixels by a power-exponent function, but this sometimes causes over-correction in bright regions.
Local enhancement methods take the spatial distribution of pixels into account and usually have a better effect. Adaptive histogram equalization (AHE) achieves better contrast by optimizing local image contrast [15,16]. By limiting contrast in each small local region, contrast-limited adaptive histogram equalization (CLAHE) [17,18] can overcome the excessive noise amplification of AHE. In [19], a scheme for adaptive image-contrast enhancement is proposed based on a generalization of histogram equalization by selecting an alternative cumulative distribution function. In [17], a dynamic histogram equalization method is proposed by partitioning the image histogram based on local minima before equalizing the partitions separately. However, these methods often cause artifacts. Retinex theory [20] decomposes an image into scene reflection and illumination, which provides a physical model for image enhancement. However, early Retinex-based algorithms [21][22][23] generate enhanced images by removing the estimated illumination, so the final images look unnatural. Researchers have found that it is better to compress the estimated illumination rather than remove it [24][25][26][27][28]. The naturalness-preserved enhancement algorithm (NPEA) [24] modifies illumination by the bi-log transformation and then recombines illumination and reflectance. In [26], a new multi-layer model is proposed to extract details based on the NPEA. However, the computational load of these models is very high because of patch-based calculation. In [29], an efficient contrast enhancement method based on adaptive gamma correction with weighting distribution (AGCWD) improves the brightness of dimmed images via traditional GC by incorporating the probability distribution of luminance pixels. However, images enhanced by AGCWD look a little dim. In [30], a new color assignment method is derived which can fit a specified, well-behaved target histogram well.
In [31], the final illumination map is obtained by imposing a structure prior on the initially estimated illumination map, and enhancement results can be achieved accordingly. In [32], a camera response model is built to adjust each pixel to the desired exposure. However, the enhanced image looks very bright, perhaps resulting from the model's deviation. Some fusion-based methods can achieve good enhancement results [33,34]. In [33], two input images for fusion are obtained by improving luminance and enhancing contrast from a decomposed illumination. In [34], a multi-exposure fusion framework is proposed which blends multi-exposure images synthesized from the proposed camera response model, using weight matrices derived from illumination estimation techniques.
In this paper, we also propose a fusion-based method for enhancing low-light images, which is based on proposed quasi-symmetric correction functions (QCFs). Firstly, we present two couples of QCFs to obtain a globally-enhanced image for the original low-light image. Then, we employ CLAHE [17] on the value channel of the enhanced image just derived to obtain a locally-enhanced image. Finally, we merge them by a proposed multi-scale fusion formula. Our main contributions are as follows: (1) We define two couples of QCFs that can simultaneously achieve an impartial correction within dim regions and dazzling regions for low-light images through the proposed weighting-adding formula. (2) We define three weight maps which are effective for contrast improvement and detail preservation by fusion. In particular, we designed a value weight map, so we could achieve an excellent overall brightness for low-light images. (3) We achieved satisfactory enhancement results for low-light images based on defined QCFs by the proposed multi-scale fusion strategy, which aims to blend a globally-enhanced image and a locally-enhanced image.
The rest of the paper is organized as follows: In Section 2, two couples of QCFs are presented after introducing GC and the preliminary comparison experiment is shown. In Section 3, we propose a low-light image enhancement method based on QCFs by a delicately-designed fusion formula. In Section 4, comparison experimental results of different state-of-the-art methods are provided and their performance assessments are made and discussed. Finally, the conclusion is presented in Section 5.

Quasi-Symmetric Correction Functions (QCFs)
In the following, we first give a brief introduction to traditional gamma correction and illustrate its shortcomings. Then we define two couples of QCFs and show their successful results for image enhancement in a preliminary experiment based on a simple weighting-adding formula.

Gamma Correction
GC maps the input low-contrast image by [14]:

y = x^γ, (1)

where x and y are the normalized input and output images, respectively, and γ < 1 is a constant which is usually taken as 0.4545. GC has found many applications in image processing. Figure 1 shows an original low-light image and its GC-enhanced version, from which it can be seen that the enhanced image has a better visual effect on the whole. However, bright areas are also amplified, so much so that the enhancement reduces local contrast within high-illuminance regions.
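As a concrete sketch of Equation (1) (NumPy; the function name is ours, and the image is assumed to be normalized to [0, 1]):

```python
import numpy as np

def gamma_correction(x, gamma=0.4545):
    """Traditional gamma correction y = x ** gamma on a normalized image."""
    return np.clip(x, 0.0, 1.0) ** gamma

# Dark pixels are brightened strongly while bright pixels barely change:
x = np.array([0.05, 0.25, 0.50, 0.90])
y = gamma_correction(x)
```

With γ = 0.4545, a dark pixel value of 0.05 is raised to roughly 0.26, while 0.90 only moves to about 0.95, which illustrates why the bright regions lose local contrast after GC.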

Quasi-Symmetric Correction Functions (QCFs)
To overcome the inherent defects of traditional GC, we designed two couples of quasi-symmetric correction functions (QCFs). The first couple comprises: while the second couple comprises: Exponentiation is performed twice in the latter function of each couple of QCFs (2) and (3). Either couple can be seen as two quasi-symmetric correction functions relative to the traditional gamma correction function (1).

Weighting-Adding Formula for QCFs
For either couple of QCFs, each pixel of the original weak-light image is processed by either function in the same way, and the enhanced image is derived by: where y_i is the pixel value of the image transformed by Equation (2) or (3), and α is a weighting factor determined by: where V_high is the mean of the top 10% of the value component (V) of the original low-light image in HSV color space and V_low is the mean of the bottom 10%. On the one hand, any low-light image has very low contrast in both its dark and bright regions, hence we only take these key minorities into consideration.
On the other hand, the distribution of these pixels is relatively concentrated, so their means change little, as has been confirmed by experiments on a large number of low-light images. Figure 2 shows the images enhanced by different correction methods, including traditional GC, AGCWD [29], and our QCFs (2) and (3), respectively. All correction methods improve the overall brightness; however, GC and AGCWD cause over-enhancement in bright areas. Overall, the method based on the proposed QCFs better preserves image details. In other words, by weighting-adding QCFs (2) or (3), we can achieve an impartial correction result for low-light images. This preliminary comparison experiment shows the effectiveness of QCFs for low-light image enhancement. To further exploit their potential, we improve upon the simple weighting-adding fusion in the following section.
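The quantities V_high and V_low that Equation (5) depends on can be computed as follows (a NumPy sketch; the function name and the use of V = max(R, G, B) as the HSV value component are our illustrative choices):

```python
import numpy as np

def v_extremes(rgb, fraction=0.10):
    """Mean of the top and bottom `fraction` of the HSV value channel.

    `rgb` is a float image in [0, 1]; V is the per-pixel maximum over the
    color channels, which is the standard HSV value component.
    """
    v = rgb.max(axis=-1).ravel()
    v.sort()
    k = max(1, int(round(fraction * v.size)))
    v_low = v[:k].mean()    # mean of the darkest `fraction` of pixels
    v_high = v[-k:].mean()  # mean of the brightest `fraction` of pixels
    return v_high, v_low
```

Because only the concentrated extremes of the V distribution enter the computation, these two means are stable across exposures, which is the property the weighting factor α relies on.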

Low-Light Image Enhancement Based on QCFs by Fusion
In this section, we propose a novel multi-scale fusion method to blend a globally-enhanced image and a locally-enhanced image, both obtained from the original low-light image. In the following, we describe the three main steps of the multi-scale fusion.

Input Images Derived for Fusion
As described in Section 2, QCFs can improve the global contrast without losing detail information. Therefore, we obtain the first input image (I_1) for fusion by mapping the original low-light image according to Equations (4) and (5) based on QCFs (2) or (3). To further improve the local contrast and details, we obtain the second input image (I_2) by transforming I_1 from RGB space into HSV space, applying CLAHE to its value component, and finally converting back to RGB space. Because the first input image (I_1) has high global visibility and the second input image (I_2) has clear local details, the multi-scale fusion-based method can greatly upgrade image quality and hence promises a satisfactory result.

Weight Map Design
The weights for fusion greatly affect the quality of the fused image [4]. In this step, we develop a strategy for weight maps as in our previous work [35]. To make the fused image look better, the weight maps are pixel-level. We first normalize the two input images; then their global contrast, local salient features, and value features are taken into consideration to design the weight maps for fusion.
In the following, for either input image, I i , for fusion, we denote its contrast weight map, saliency weight map, and value weight map as w i,1 , w i,2 , and w i,3 , respectively.

Contrast Weight Map Design
The contrast weight map (w_i,1) is designed to estimate the global contrast. In [20], the contrast weight map is calculated by applying Laplacian sharpening filtering to the L channel of the input image in Lab color space. In our case, we apply Laplacian sharpening filtering to the value component in HSV space because it assigns higher weights to edges and textures. As a result, it can make the edges of the fused image more prominent.
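A minimal NumPy sketch of this idea, assuming the weight is the absolute response of a discrete Laplacian on the value component (the paper specifies Laplacian sharpening filtering on V; the exact 4-neighbour kernel below is our assumption):

```python
import numpy as np

def contrast_weight(v):
    """Contrast weight map: |Laplacian| of the V channel.

    `v` is the float HSV value component of one fusion input; edges and
    textured regions receive larger weights than flat regions.
    """
    p = np.pad(v, 1, mode="edge")  # replicate borders to keep the shape
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * v
    return np.abs(lap)
```

On a perfectly flat region the response is zero, so only structure contributes weight, which is what makes the fused edges more prominent.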

Saliency Weight Map Design
The saliency weight map (w_i,2) is designed to highlight objects within low-illumination regions and weaken the image background. In [36], the saliency map is obtained as the Euclidean distance between each pixel's Lab vector in a Gaussian-filtered image and the average Lab vector of the input image. However, this method tends to highlight regions with high luminance values, so it is not suitable for low-illumination colorful images. For any image with weak illumination, regions with low pixel values may contain the target object. For this reason, we redefine the saliency weight map as follows: the superscript c ∈ {R, G, B} denotes the RGB channel, and mean{·} denotes the mean operator.
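The paper's exact redefined formula is given in its equation and is not reproduced here; the following NumPy sketch shows one plausible reading, in which saliency is the distance of each pixel's Gaussian-smoothed RGB vector from the per-channel image mean (the structure of [36], but computed over c ∈ {R, G, B} with the mean{·} operator as the text describes):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_weight(rgb):
    """Illustrative saliency map: per-pixel Euclidean distance of the
    smoothed RGB vector from the per-channel image mean."""
    blurred = np.stack(
        [gaussian_filter(rgb[..., c], sigma=2.0) for c in range(3)], axis=-1
    )
    mean_vec = rgb.reshape(-1, 3).mean(axis=0)  # mean{I^c} for each channel
    return np.sqrt(((blurred - mean_vec) ** 2).sum(axis=-1))
```

The sigma value and the exact distance form are assumptions for illustration only.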

Value Weight Map Design
The value weight map (w i,3 ) is designed to enable our fusion approach to adapt to the dim illumination environment of any captured low-light image. Considering the fact that images for fusion have different mean values, we normalize by them and obtain the following value weight map:

Final Normalized Weight Map
The normalized weight map (w i ) is finally obtained by normalizing the above three weights by:
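The normalization step can be sketched as follows (NumPy; how the three maps are combined in the paper's equation, e.g. product versus sum, is not reproduced here, so the per-pixel product below is an illustrative assumption — only the cross-input normalization, which makes the two inputs' weights sum to one at every pixel, is the essential part):

```python
import numpy as np

def normalized_weights(maps_1, maps_2, eps=1e-12):
    """Combine each input's three weight maps and normalize across inputs.

    maps_i: (w_contrast, w_saliency, w_value) for fusion input I_i.
    Combining by per-pixel product is an illustrative choice.
    """
    w1 = np.prod(np.stack(maps_1), axis=0)
    w2 = np.prod(np.stack(maps_2), axis=0)
    total = w1 + w2 + eps  # eps avoids division by zero in flat regions
    return w1 / total, w2 / total
```

After this step, blending the two inputs with these maps is a per-pixel convex combination, which is exactly what the multi-scale fusion of the next subsection operates on.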

Multi-Scale Fusion
So far, the desired fused image could be obtained by naively blending the two input images with the normalized weight maps. However, the naive fusing approach often introduces undesirable artifacts and halos [37]. To overcome these shortcomings, many researchers apply traditional multi-scale fusion technology to avoid halos [5,38]. Methods based on multi-scale fusion obtain a high-quality fused image by effectively extracting the features of the input images. These methods employ the Laplace operator to decompose each input image into a Laplacian pyramid [33] to extract image features, and employ the Gaussian operator to decompose each normalized weight map into a Gaussian pyramid to smooth strong transitions. The Gaussian and Laplacian pyramids have the same number of levels. The final enhanced result is then obtained by mixing them as follows: where l represents the number of layers of the pyramids, U_d is the up-sampling operator with d = 2^(l−1), and G{·} and L{·} represent the Gaussian operator and Laplace operator, respectively.
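The mixing rule of Equation (9) can be sketched for grayscale inputs as follows (NumPy-only; the 5-tap binomial kernel and the simple 2x down/up-sampling are standard pyramid choices assumed here, not taken from the paper, and the weight maps are assumed already normalized):

```python
import numpy as np

def _blur(img):
    """Separable 5-tap binomial smoothing."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

def _down(img):
    return _blur(img)[::2, ::2]

def _up(img, shape):
    u = np.kron(img, np.ones((2, 2)))[: shape[0], : shape[1]]
    return _blur(u)

def pyramid_fuse(inputs, weights, levels=4):
    """Blend images with per-pixel weights via Laplacian/Gaussian pyramids.

    Laplacian pyramids of the inputs are mixed level-by-level with Gaussian
    pyramids of the normalized weight maps, then the result is collapsed.
    """
    fused = [0.0] * levels
    for img, w in zip(inputs, weights):
        gi, gw = [img], [w]
        for _ in range(levels - 1):
            gi.append(_down(gi[-1]))
            gw.append(_down(gw[-1]))
        lap = [gi[l] - _up(gi[l + 1], gi[l].shape) for l in range(levels - 1)]
        lap.append(gi[-1])  # coarsest residual level
        for l in range(levels):
            fused[l] = fused[l] + gw[l] * lap[l]
    out = fused[-1]
    for l in range(levels - 2, -1, -1):  # collapse the fused pyramid
        out = _up(out, fused[l].shape) + fused[l]
    return out
```

Because the weight maps are smoothed by the Gaussian pyramid before blending, strong transitions between the two inputs are spread across scales, which is what suppresses the halos of naive blending.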

Method Summary
We summarize the whole procedure of the proposed method in Figure 3. Figure 4 illustrates the results after each intermediate step. Here the first image (I 1 ) for fusion in Figure 4 is obtained according to QCFs (2).

Results and Discussion
In this section, we present our experimental results and evaluate the performance of our method. First, we present our experiment settings. Then, we evaluate the proposed method by comparing it with seven other state-of-the-art methods in both subjective and objective aspects.

Experiment Settings
To fully evaluate the proposed method, we tested hundreds of images captured under different low-illumination conditions in various scenes. All test images came from Wang et al. [17] and Loh et al. [39]. We ran all experiments in MATLAB R2017b on a PC running Windows 10 with a 2.1 GHz Intel Pentium dual-core processor and 32 GB RAM.
In the following, we set l = 4 in (9), which denotes a moderate number of layers, and γ = 0.4545 in (2) and (3), which is the traditional choice. At the end of this section, we analyze the sensitivity of the parameters l in (9) and γ in (2) and (3).

Objective Evaluation
Subjective evaluation alone is not convincing; therefore, we also need to evaluate the performance of our method objectively. There are a large number of objective image quality assessment (IQA) methods, including full-reference IQA [40], reduced-reference IQA [41], and no-reference IQA (NR-IQA) [42,43]. Since it is difficult to obtain ground-truth images under ubiquitous non-uniform illumination conditions, we adopt six NR-IQA indexes for objective evaluation: the natural image quality evaluator (NIQE) [42], integrated local NIQE (IL-NIQE) [43], lightness-order error (LOE) [24], the no-reference image quality metric for contrast distortion (NIQMC) [44], the colorfulness-based patch-based contrast quality index (C-PCQI) [45], and oriented gradients image quality assessment (OG-IQA) [46]. NIQE builds a simple and successful space-domain natural scene statistics model and extracts features from it to construct a "quality aware" collection of statistical features; it measures deviations from statistical regularities. IL-NIQE captures local distortion artifacts by integrating natural image statistics features derived from a local multivariate Gaussian model. LOE measures the lightness-order error between the original image and its enhanced version to objectively evaluate naturalness preservation. Lower values for NIQE, IL-NIQE, and LOE all indicate better image quality. NIQMC is based on the concept of information maximization and can accurately judge which of two images has larger contrast and better quality. C-PCQI yields a measure of visual quality using a regression module of 17 features through analysis of contrast, sharpness, and brightness. OG-IQA provides quality-aware information by using the image's relative gradient orientation. Higher values for NIQMC, C-PCQI, and OG-IQA all indicate better image quality.
The six IQA indexes for the five low-light images shown in Figures 5-9, as produced by the different methods, are given in Tables 1-6, respectively; the bold numbers denote the best results. Moreover, to verify the stability of the proposed method, the average assessment results over 100 images are given in Table 7. One can see that the scores for NIQE and IL-NIQE acquired by our method based on QCFs (2) both rank first, while its scores for NIQMC, OG-IQA, and LOE rank second, third, and fourth out of nine, respectively. The score for C-PCQI acquired by our method based on QCFs (3) ranks first, its scores for IL-NIQE and LOE both rank second out of nine, and its scores for OG-IQA and NIQMC place equal third and equal fourth, respectively. Hence, we conclude that our method based on QCFs (2) achieves the best quality for image enhancement with a medium capability of naturalness preservation, while our method based on QCFs (3) also achieves good scores and hence has good potential for image enhancement. A post hoc multiple comparison test was also conducted using the MATLAB function "multcompare". The significance results from QCFs (2) and (3) for the different indexes are shown in Figures 10 and 11, respectively (in each figure, the horizontal axis denotes the nine methods: 1 for NPEA [15], 2 for FHHS [26], 3 for MF [25], 4 for BIMEF [28], 5 for NPIE [17], 6 for LIME, 7 for LECARM [30]). The numbers of significantly different groups from QCFs (2) and (3) for the different indexes are listed in Table 9. The timing performance of the different methods is given in Table 10. In general, any algorithm based on iterative optimization or machine learning, such as NPEA, FHHS, NPIE, or LIME, is time-consuming. Compared to these methods, our method has moderate computational cost, since it does not need iteration to produce results.

Sensitivity of the Parameters l and γ
The sensitivity of the parameter l in (9) and the parameter γ in (2) and (3) is also analyzed. With respect to the number of pyramid layers in the fusion procedure (9), we set l = 4 after comprehensively weighing performance and computational payload; the above experiments show that this choice is appropriate. We first fix γ in QCFs (2) or (3), enumerate a small range of values around l = 4, and compute the corresponding average performance indexes for QCFs (2) and (3). The results for QCFs (2) and (3) with different l and γ = 0.4545 are reported in Tables 11 and 12, respectively. There is only a slight difference between the indexes for different l. Figure 12 shows three low-light images and the corresponding enhanced ones with l = 3, 4, and 5, respectively; the enhanced images for different l do not differ much in visual effect, and those for l = 4 have high contrast and satisfactory details. Comprehensively considering the indicators and visual effects, one can conclude that l = 4 is a good selection. As for the parameter γ, the experiments were analogous: we set its value around 0.4545 and calculated the corresponding average performance indexes with fixed l for QCFs (2) and (3). The results for QCFs (2) and (3) with different γ and l = 4 are reported in Tables 13 and 14, respectively. One can see that γ = 0.4545 balances payload and performance and hence is a good selection.

Conclusions
In this paper, we first define two couples of quasi-symmetric correction functions (QCFs) to enhance low-light images. Compared with traditional gamma correction, the proposed QCFs not only improve the overall image brightness, but also better preserve image details. Moreover, we propose an effective multi-scale fusion method to combine a globally-enhanced image produced by the QCFs and a locally-enhanced image produced by CLAHE. Experimental results on many images showed that our method can significantly improve the contrast of low-light images and achieve a balanced detail enhancement for dark and bright regions. From the perspectives of both subjective and objective evaluation, our proposed method performs better than other state-of-the-art methods. As future work, we will try to apply the proposed QCFs to image dehazing, underwater image enhancement, image enhancement in poor visibility conditions, and so on. We will also explore applications of QCFs in automatic online detection systems for product defects and in real-time intelligent monitoring of production lines based on machine vision.