Article

A Smart System for Low-Light Image Enhancement with Color Constancy and Detail Manipulation in Complex Light Environments

1 College of Computer Science and Technology, Sichuan University, Chengdu 610065, China
2 Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus 57000, Pakistan
* Author to whom correspondence should be addressed.
Symmetry 2018, 10(12), 718; https://doi.org/10.3390/sym10120718
Submission received: 19 November 2018 / Revised: 30 November 2018 / Accepted: 3 December 2018 / Published: 5 December 2018

Abstract:

Images are an important medium for representing meaningful information. It may be difficult for computer vision techniques and humans alike to extract valuable information from images with low illumination. Currently, the enhancement of low-quality images is a challenging task in the domain of image processing and computer graphics. Although there are many algorithms for image enhancement, the existing techniques often produce defective results in the portions of the image with intense or normal illumination, and such techniques also inevitably degrade certain visual aspects of the image. A model used for image enhancement must perform the following tasks: preserve details, improve contrast, correct color, and suppress noise. In this paper, we propose a framework based on a camera response model and weighted least squares strategies. First, the image exposure is adjusted using a brightness transformation to obtain the correct model for the camera response, and an illumination estimation approach is used to extract a ratio map. Then, the proposed model adjusts every pixel according to the calculated exposure map and Retinex theory. Additionally, a dehazing algorithm is used to remove haze and improve the contrast of the image. The color constancy parameters set the true color for images of low to average quality. Finally, a detail enhancement approach preserves the naturalness and extracts more details to enhance the visual quality of the image. The experimental evidence and a comparison with several recent state-of-the-art algorithms demonstrate that our designed framework is effective and can efficiently enhance low-light images.

1. Introduction

In daily life, people receive information from images, music, videos, etc., and the human brain is capable of effectively processing such visual information. In the modern age of smartphones, in which social media is so popular, many people have become interested in capturing and sharing photos. Photos captured on professional or mobile phone cameras are affected by various weather conditions, which influence image quality. Thus, the important contents of an image are not always clearly visible. The conditions that most often degrade image quality include bad weather, low illumination, and moving objects, among many others. Such conditions can make it difficult for the human eye to clearly identify the contents of an image. Images with clear visibility tend to depict more details, and their useful contents can be more easily identified. Enhancement techniques are often used to make the hidden content of images visible, and they aim to make valuable information usable for humans and computers alike [1]. Thus, image enhancement is one of the fundamental research topics in image processing. Removing darkness and extracting meaningful content are important tasks in applications such as medical imaging [2], object tracking [3], face detection [4], facial attractiveness [5], and object detection [6]. Therefore, such enhancement techniques play an important role in many fields, and many different approaches have been designed for this purpose. These algorithms deal with different aspects of image quality, such as dark areas, noise, light distortion, texture details, and color. Indeed, removing dark areas improves image quality. However, models used to enhance low-light images should not only remove the darkness from images, but also preserve the important contents of the image with good efficiency [7]. Images can be degraded by several different conditions at once, and it is not enough to correct only one of the factors that lowers image quality. For example, a technique used to improve the brightness or contrast of an image may not be well suited to images with heavily saturated portions. Thus, many important factors need to be addressed when applying image enhancement algorithms.
The existing image enhancement algorithms can be divided into two broad categories: local and global. Global enhancement affects all of an image's pixels regardless of their spatial distribution, while local enhancement takes the spatial distribution of the pixels into account. Directly removing dark areas from low-light images is one of the simplest and most intuitive approaches. However, this approach may create problems when a portion of an image is already too bright or too heavily saturated, causing the image to lose meaningful details. To solve this problem, several enhancement techniques have applied intensity transformations, such as logarithms [8], the power law equation [9], and gamma functions [10]. Histogram equalization (HE) is another simple and widely used method of avoiding saturation [11]. Furthermore, several algorithms have built upon the HE method and tuned parameters to preserve the contrast [12] and brightness [13] of an image. Nevertheless, some local details may be lost during global processing, because a global method cannot adapt to every local region. Applying HE locally with a sliding-window strategy that accounts for the spatial distribution of pixels can lead to better results. The most widely used basis for image enhancement is Retinex theory (RT) [14], which decomposes color into two factors (i.e., illumination and reflectance). RT is utilized in many enhancement techniques.
Early Retinex-based techniques (i.e., single-scale Retinex [15] and multi-scale Retinex [16]) treated the reflectance as the final enhanced result, which usually appears under- or over-enhanced. More recently, the technique proposed in [17] achieved good results by inverting images into a photometric negative form and then performing an optimized dehazing technique. The model presented in [18] performs simultaneous illumination and reflectance estimation, enhancing target images by processing the illumination along with the estimated reflectance. Estimating the illumination of a target image and enhancing it is the simplest way to make the hidden contents of an image visible. Preserving naturalness while dealing with other visual artifacts, such as unbalanced light, dark areas, haze, one-sided dark lighting, and nighttime scenes (e.g., Figure 1), remains a challenging task.
In this paper, a smart framework is designed to adjust the exposure based on a camera response model and to remove dark areas based on an estimated illumination map and a Retinex algorithm. A dehazing algorithm converts the low-contrast image into a photometric negative form and obtains an enhanced result, and a gamma transformation controls the intensity and refines the contrast of the image. Then, color constancy sets the true and perceptually uniform color of the image. Finally, a detail manipulation step enhances the details of the image. Moreover, the proposed model minimizes the computational cost so that images can be enhanced more quickly and cost-effectively. This framework differs from past research paradigms because it addresses several important image artifacts at once: dark lighting, haze, noise, color, and texture details.
Contribution: This framework uses a camera response function and Retinex, which remove the dark lighting by estimating an illumination map. In contrast to the traditional methods based on Retinex in which an image is decomposed into illumination and reflectance, our method estimates only the illumination of the image, which reduces the computational cost. To improve the contrast and remove haze, a dehazing and intensity transformation technique is utilized. Moreover, color constancy sets the true color for images based on the idea that the distribution of the color derivatives reflects the variation in the direction of the light source. Thus, the average of the color derivatives is measured to estimate the direction of the lighting. The detail manipulation sharpens the textural contents, and the weighted least squares method, exposure, and boosting factors are used to enhance the details of the image at different levels.
The remaining sections of the paper are organized as follows: Section 2 briefly provides a literature review of low-light image processing. Section 3 shows a step-by-step explanation of the proposed framework. The validation of the proposed model is shown in Section 4, and Section 5 is the conclusion.

2. Literature Review

Image enhancement is an important tool in image and signal processing, and it is a broad category that encompasses individual tasks such as color enhancement [19], contrast enhancement [9], detail enhancement [20], and composition enhancement [21]. To understand the evolution of this field of research, we begin with the famous Retinex theory (RT), which underlies most enhancement algorithms. The fundamental idea behind RT is the decomposition of an image into a reflectance and an illumination map [14], where the reflectance is treated as the final enhanced result; however, this model sometimes alters the color of the image or produces an over-enhanced result. Furthermore, RT has been adopted by other enhancement models [22,23] with slight changes to the traditional parameters to achieve better results. For instance, Guo et al. [1] proposed the LIME model based on RT. This model estimated the illumination for each pixel individually by finding the maximum value across the three channels of the image. Although the model's efficiency and results were good, the gamma correction required a non-linear operation to resize and adjust the enhanced illumination map. The extra post-processing steps reduced the strength of the model, and the enhanced results often suffered from color distortion.
The classical problems of image processing (i.e., image decomposition) also arise in enhancement tasks. In [18], a weighted variational model was used to estimate illumination and reflectance. The computational complexity of this method was slightly high, because it performed operations in two channels simultaneously. Dong et al. [17] also used an interesting method to enhance images: the images were first converted into a photometric negative state, which resembles hazy images, and a dehazing model was then used to achieve the final enhanced results. Later, Song et al. [24] improved upon this model and solved an issue relating to blocking artifacts. This model worked well, but to some extent, it lacked a physical basis. The camera response function has also been used for a variety of tasks [25,26]. The framework in [27] was the first to borrow illumination estimation for image fusion, obtaining a weight matrix and setting the final exposure ratio toward underexposure. This model, however, has a limited ability to improve images in which one area is already over-enhanced: that area becomes even more enhanced, resulting in the loss of important details.
The method in [28] enhanced the contrast in nighttime images using the Lab color space. The model in [29] enhanced images based on histograms and fuzzy logic: the histogram of an image was computed to find the average intensity value, and the RGB image was decomposed into HSV to obtain chromatic details. The model achieved good results at a reduced computational cost. However, important details were lost when part of the image had very high contrast while other parts had low contrast. Hao et al. [30] proposed a simple model to estimate illumination and applied a guided filter to separate textural patterns in a refined illumination map. The technique was related to low-light image enhancement via illumination map estimation (LIME) [1], although the illumination estimation differed slightly between the two strategies; a major limitation of this technique was that the color constancy of the final images still needed improvement. Guo et al. [31] presented a model to enhance the color quality of images and highlight the contrast of dark areas according to the characteristics of human vision and a logarithm transformation. Additionally, a gamma transformation was used to enhance the contrast of an image. As a result, this algorithm achieved improved color restoration.
A wavelet-based algorithm was proposed in [32] for color enhancement, in which the Euler–Lagrange formula worked in conjunction with wavelets to find detail coefficients. The method removed the color cast from both over- and underexposed images; such light distortion is often found in images captured in very complex light environments. In [33], a fusion strategy combined several existing algorithms to enhance images with poor illumination conditions: after estimating the weak illumination, the final enhanced result was obtained by combining multiple intermediate results with a multi-scale pyramid. The algorithm worked well and handled multiple conditions, such as nighttime, backlit, and non-uniformly illuminated scenes. Its limitation was that some visual artifacts, such as color and noise, still needed to be addressed. Thus, existing image enhancement schemes have adopted different ways of achieving their target results. Based on the aforementioned concepts, we designed a smart system for image enhancement that solves multiple complex issues at once.

3. Proposed Framework

Enhancing images that are badly affected by varied illumination conditions remains a challenging task. The proposed framework handles several such conditions while preserving image quality. The steps are as follows:

3.1. Camera Response Model

In our proposed framework, the initial step is to remove the low-light portions from images using a camera response model (CRM) technique, as shown in Figure 2. This model can be divided into two parts: the brightness transform function (BTF) and the camera response function (CRF). The parameters of the CRF are determined by the camera, and those of the BTF are determined by the exposure ratio. In general, a camera uses non-linear functions, such as de-mosaicking and white balancing, to improve the overall visual quality of an image. The non-linear function is shown in Equation (1):
$$P = f(L_e) \tag{1}$$
where P represents the pixel value and L_e is the irradiance of the image. To transform the brightness, a mapping function is applied to two images, P_x and P_y, which differ slightly in their level of exposure, as shown in Equation (2):
$$P_x = b(P_y, e) \tag{2}$$
where b is the BTF, and e represents the exposure ratio. The CRF equation can be easily derived using Equations (1) and (2) as follows:
$$b(f(L_e), e) = f(e \cdot L_e) \tag{3}$$
$$P_i = b(P, E_{val}) \tag{4}$$
To remove the dark areas from an image, Equation (4) specializes the BTF b with the exposure ratio E_val; here, P and P_i are the input and output images, respectively. Furthermore, RT is utilized to obtain the exposure ratio, and the Retinex decomposition enters the non-linear function via Equation (5):
$$L_e = R \cdot T \tag{5}$$
where L_e is the irradiance of the image, R is the reflectance, and T is the illumination map. Thus, from Equation (5), the desired input and output images can be expressed as in Equation (6):
$$P_x = f(L_e), \quad P_y = f(R) \tag{6}$$
The relation between e and T can be derived by combining Equations (2) and (5). In this way, the dark areas of an image receive a large exposure ratio and the bright portions a small one. The exposure ratio E_val is then obtained by estimating T, as shown in Equations (7) and (8):
$$P_y = f(R) \overset{(5)}{=} f\!\left(L_e \cdot \tfrac{1}{T}\right) \overset{(3)}{=} b\!\left(f(L_e), \tfrac{1}{T}\right) = b\!\left(P_x, \tfrac{1}{T}\right) \tag{7}$$
$$E_{val} = 1/T \tag{8}$$
To calculate b for the BTF, two images of the same scene are taken with different exposures, as shown in Equation (2). The two-parameter BTF is modeled as follows:
$$P_x = b(P_y, e) = \beta P_y^{\gamma} \tag{9}$$
The parameters β and γ of the BTF in Equation (9) also depend on the exposure ratio e. It can be observed from Equation (9) that the color channels share almost equal model parameters; the basic reason is that the response curves of the different channels are usually the same for most general cameras. To obtain the CRF model and find the relationship between β and γ, we substitute the BTF form b = βf^γ of Equation (9) into Equation (3), which yields Equation (10):
$$f(e \cdot L_e) = \beta f(L_e)^{\gamma} \tag{10}$$
In Equation (10), the closed form of f, derived in [34], is given below:
$$f(L) = e^{\,b(1 - L^{a})}, \quad \text{if } \gamma \neq 1 \tag{11}$$
$$f(L) = L^{c}, \quad \text{if } \gamma = 1 \tag{12}$$
$$a = \log_{k}\gamma, \quad b = \frac{\ln\beta}{1-\gamma} \tag{13}$$
$$c = \log_{k}\beta \tag{14}$$
When γ ≠ 1, a and b are the model parameters, while for γ = 1, c is the model parameter. These parameters, together with the estimated exposure ratio, are required by the final equation that enhances each pixel of the low-light image. For this purpose, Equation (8) is used, which in turn requires the illumination map. Since the exposure ratio and the illumination map are inversely proportional, we first solve for T and then obtain E_val. The estimation process is the same as in [1], except for a slightly different weight matrix. The initial light component for every individual pixel x and the weight matrix are defined in Equations (15) and (16):
$$L(x) = \max_{\kappa \in \{R,G,B\}} P^{\kappa}(x) \tag{15}$$
$$W_d(x) = \frac{1}{\left|\sum_{y \in w(x)} \nabla_d L(y)\right| + \varepsilon}, \quad d \in \{h, v\} \tag{16}$$
where w(x) is a local window centered at pixel x, and ε prevents a zero denominator. The derivative filter direction d takes the values v (vertical) and h (horizontal). The following optimization solves for the illumination map T:
$$\min_{T} \sum_{x}\left( \left(T(x) - L(x)\right)^{2} + \lambda \sum_{d \in \{h,v\}} \frac{W_d(x)\left(\nabla_d T(x)\right)^{2}}{\left|\nabla_d L(x)\right| + \varepsilon} \right) \tag{17}$$
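To make this step concrete, the following is a minimal NumPy sketch, assuming an sRGB image with values in [0, 1]. The function names are ours, the Gaussian smoothing merely stands in for the full sparse optimization of Equation (17), and the per-pixel relations γ = E^a and β = e^{b(1−E^a)} follow from Equations (9)–(13) using the fixed camera parameters quoted in Section 4.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_illumination(img, sigma=3.0, eps=1e-3):
    L = img.max(axis=2)              # per-pixel max over R, G, B (Eq. 15)
    T = gaussian_filter(L, sigma)    # crude stand-in for the Eq. (17) solver
    return np.clip(T, eps, 1.0)

def crm_enhance(img, a=-0.3293, b=1.1258):
    E = 1.0 / estimate_illumination(img)   # exposure ratio map (Eq. 8)
    gamma = E ** a                         # per-pixel gamma = E^a
    beta = np.exp(b * (1.0 - gamma))       # per-pixel beta = e^{b(1 - E^a)}
    # Apply the beta-gamma BTF of Eq. (9) to every pixel and channel.
    return np.clip(beta[..., None] * img ** gamma[..., None], 0.0, 1.0)
```

In practice, the smoothing step would be replaced by the structure-aware optimization of Equation (17), solved with a preconditioned conjugate gradient method as noted in Section 4, which preserves strong illumination edges that a plain Gaussian blur smears.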

3.2. Dehazing and Intensity Transformation

While discussing the enhancement of dark areas, one cannot ignore the issue of haze or fog. A dehazing algorithm is applied to remove haze and improve the overall contrast. First, the low-contrast images are converted into a photometric negative form and treated as hazy images, as shown in Figure 2. These inverted images are then enhanced by the dehazing algorithm and inverted back to obtain the final results. The images are inverted using Equation (18):
$$R_c(x) = V_r - P_c(x) \tag{18}$$
where c indexes the RGB color channels, P_c(x) is the channel intensity of the input image pixel, R_c(x) is the inverted image intensity, and V_r is the maximum intensity value (e.g., 255 for 8-bit images). We utilized the algorithm in [35] to estimate the transmission t(x) as follows:
$$t(x) = 1 - \omega \min_{c \in \{r,g,b\}}\left( \min_{y \in \Omega(x)} \frac{R_c(y)}{A_c} \right) \tag{19}$$
where R_c is the pixel intensity, A_c is the global atmospheric light, and t(x) represents the transmission of the scene. To estimate A, we first selected 100 candidate pixels according to the per-pixel minimum intensity over the RGB channels (the dark channel). Then, among all these pixels, we picked the single pixel with the highest sum over the three channels (RGB); its RGB values are used for A. Therefore, according to the dehazing model in [35], we can recover the intensity of the original scene, as shown in Equation (20):
$$J(x) = \frac{R(x) - A}{t(x)} + A \tag{20}$$
After dehazing, an intensity transformation is applied in the spatial domain (i.e., gamma transformation). This simple technique darkens or brightens intensities depending on the gamma value. In the case of a heavily saturated image, where the CRM makes the image brighter, the power law equation is utilized to preserve the contrast and brightness of the image.
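A hedged NumPy sketch of this stage follows. It uses the standard dark channel prior of [35]; the patch size, ω, and the final gamma value are illustrative assumptions, and the atmospheric light selection approximates the 100-pixel procedure described above.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_and_gamma(img, omega=0.8, patch=15, t0=0.1, gamma=0.8):
    R = 1.0 - img                                   # photometric negative (Eq. 18)
    dark = minimum_filter(R.min(axis=2), size=patch)
    # Atmospheric light A: among the 100 haziest dark-channel pixels,
    # pick the one with the largest R+G+B sum.
    cand = R.reshape(-1, 3)[dark.ravel().argsort()[-100:]]
    A = cand[cand.sum(axis=1).argmax()]
    # Transmission map via the dark channel prior (Eq. 19).
    t = 1.0 - omega * minimum_filter((R / A).min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)[..., None]
    J = (R - A) / t + A                             # scene recovery (Eq. 20)
    out = np.clip(1.0 - J, 0.0, 1.0)                # invert back
    return out ** gamma                             # gamma (power law) transform
```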

3.3. Color Constancy

The color of an image is a very important factor to handle in enhancement schemes. The algorithm we use for color constancy sets the true color of the input images; the Minkowski norm [36] is utilized for this purpose. The technique is based on the idea that the distribution of the color derivatives shows the largest variation in the direction of the light source, so the average of these derivatives is taken to estimate that direction. The technique is applied to the visible contents to determine the true color of an image. The goal is to achieve the correct color by evaluating the light source chromaticity and then setting the canonical illumination of the image. Color constancy can be modeled mathematically as follows:
$$f_c(x) = \int_{\omega} e(\lambda, x)\, s(\lambda, x)\, c(\lambda)\, d\lambda \tag{21}$$
$$c(\lambda) = \left(R(\lambda),\, G(\lambda),\, B(\lambda)\right)^{T} \tag{22}$$
where f_c(x) is the (R, G, B)^T response of a Lambertian surface, ω is the visible spectrum, λ is the wavelength, e(λ, x) is the light source, s(λ, x) denotes the surface reflectance, and c(λ) is the camera sensitivity function. Color constancy is important in an image enhancement model, because color is a significant visual factor that should not be ignored.
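As an illustration, a compact sketch of a Minkowski-norm, gray-edge-style illuminant estimate is given below. The norm order p and the Sobel derivative filter are our assumptions rather than values specified by the paper.

```python
import numpy as np
from scipy.ndimage import sobel

def color_constancy(img, p=6, eps=1e-6):
    # Illuminant estimate from the Minkowski p-norm of the color
    # derivatives, whose distribution varies most along the light source.
    e = np.empty(3)
    for c in range(3):
        gx = sobel(img[..., c], axis=1)
        gy = sobel(img[..., c], axis=0)
        e[c] = (np.hypot(gx, gy) ** p).mean() ** (1.0 / p)
    e /= np.linalg.norm(e) + eps
    # Von Kries-style correction: divide out the estimated illuminant,
    # scaled so that a neutral (equal-energy) light leaves the image unchanged.
    return np.clip(img / (e * np.sqrt(3.0) + eps), 0.0, 1.0)
```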

3.4. Details Enhancement

The final step of the proposed framework is to enhance the details of the image. The weighted least squares (WLS) method is used to obtain the final enhanced result with more detail. Given an input image q, we seek an image g that is as close as possible to q while being smooth everywhere except at significant edges; the process is expressed mathematically as follows:
$$\sum_{s}\left( (g_s - q_s)^2 + \lambda\left( a_{x,s}(q)\left(\frac{\partial g}{\partial x}\right)_{\!s}^{2} + a_{y,s}(q)\left(\frac{\partial g}{\partial y}\right)_{\!s}^{2} \right) \right) \tag{23}$$
where s denotes the spatial location of a pixel, and the data term (g_s − q_s)² minimizes the distance between the two images g and q. The second term is a regularization term that minimizes the partial derivatives of g, and λ keeps the balance between the two terms. As g is extracted from q, it can be written via a non-linear operator F_λ that depends on q:
$$g = F_{\lambda}(q) = (I + \lambda L_q)^{-1} q \tag{24}$$
Although this is a spatially variant operator whose frequency response is difficult to examine [37], we focus on image regions that do not contain meaningful edges. Moreover, to recover progressively coarser versions of q and the corresponding detail layers, the value of λ is increased from its initial value by a factor of c at each level, as shown in Equation (25):
$$g^{\,i+1} = F_{c^{i}\lambda}(q) \tag{25}$$
A three-level decomposition was constructed to manipulate the details and the contrast of an image at different levels: two detail layers, d1 and d2, and a coarse base layer b of the CIELAB lightness channel. Exposure and boosting factors were also used: η is the exposure of the base level, δ0 is the boosting factor of the base level, and δ1 and δ2 are those of the detail layers. The manipulation of the lightness T at every pixel p is shown in Equation (26):
$$T_p = \mu + S_c\!\left(\delta_0, \eta\,(b_p - \mu)\right) + S_c\!\left(\delta_1, d_p^{1}\right) + S_c\!\left(\delta_2, d_p^{2}\right) \tag{26}$$
where S_c denotes the sigmoid curve (i.e., S_c(a, x) = 1/(1 + exp(−ax))), and μ is the mean of the lightness range. The term S_c(δ0, η(b_p − μ)) in Equation (26) controls the contrast and exposure of the base layer, and the other terms control the details.
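One possible realization of this step is sketched below: a small sparse WLS solver for Equation (24) followed by the three-level recombination of Equation (26). The gradient-based smoothness weights, the zero-centered sigmoid, and the boosting factors are illustrative assumptions; the paper's exact parameterization may differ. The input q is assumed to be the lightness channel scaled to [0, 1].

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_smooth(q, lam=0.3, alpha=1.2, eps=1e-4):
    # Solve (I + lam * L_q) g = q, i.e., g = F_lam(q) from Eq. (24).
    h, w = q.shape
    n = h * w
    lq = np.log(q + eps)
    wx = np.zeros((h, w))
    wy = np.zeros((h, w))
    wx[:, :-1] = -lam / (np.abs(np.diff(lq, axis=1)) ** alpha + eps)
    wy[:-1, :] = -lam / (np.abs(np.diff(lq, axis=0)) ** alpha + eps)
    # Off-diagonal smoothness terms for horizontal and vertical neighbors.
    W = sp.diags(wx.ravel()[:-1], 1, shape=(n, n)) \
      + sp.diags(wy.ravel()[:-w], w, shape=(n, n))
    W = W + W.T
    A = W + sp.diags(1.0 - np.asarray(W.sum(axis=1)).ravel())
    return spsolve(A.tocsr(), q.ravel()).reshape(h, w)

def boost_details(q, deltas=(1.0, 20.0, 30.0), eta=1.0, c=2.0, lam=0.3):
    # Two detail layers d1, d2 and a base layer b via Eq. (25).
    g1 = wls_smooth(q, lam)
    g2 = wls_smooth(q, c * lam)
    d1, d2, b = q - g1, g1 - g2, g2
    mu = q.mean()
    S = lambda a, x: 1.0 / (1.0 + np.exp(-a * x)) - 0.5  # zero-centered sigmoid
    # Sigmoid-boosted recombination (Eq. 26); rescale to [0, 1] afterwards.
    return mu + S(deltas[0], eta * (b - mu)) + S(deltas[1], d1) + S(deltas[2], d2)
```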

4. Evaluation and Results

This section presents the evidence of several experiments conducted to analyze the overall performance of the proposed framework. Publicly available datasets (LIME [1], NUS [38], UEA [39], MEF [40], and VV [41]) were used to check the quality and efficiency of our framework. The implementation was done in MATLAB 2016a, and each experiment was conducted on a PC with the following specifications: Windows 10 OS, 2.5 GHz CPU, and 4 GB RAM. The parameters of the CRM in the proposed framework were fixed such that ε = 0.001, λ = 1, and the local window size w(x) = 5. In the CRM, we assumed that information about the camera was not provided, so a = −0.3293 and b = 1.1258 were taken to fit most camera models. Additionally, the detail manipulation λ was set to 0.3. All the experiments were conducted with the aforementioned parameters. The time-consuming part of the CRM is the optimization of the illumination map; to improve efficiency, a preconditioned conjugate gradient solver (O(N)) was used. The quantitative and qualitative performance of the proposed framework was compared with several well-known, state-of-the-art methods: DONG [17], multi-scale Retinex [42], NPE [43], SRIE [18], LIME [1], BIMEF [44], and LSTWC [45]. The results clearly show that our proposed framework is efficient and preserves the enhanced image quality.
Moreover, the performance and image quality were measured using several no-reference and full-reference quality assessment methods: the peak signal-to-noise ratio (PSNR), mean square error (MSE), visual information fidelity (VIF) [46], visual saliency-induced index (VSI) [47], and natural image quality evaluator (NIQE) [48]. A visual comparison with the other methods is shown in Figure 3, and the corresponding image quality assessment is given in Table 1. The proposed method was also applied to images degraded by several conditions, such as fog, poor texture details, color distortion, and noise, as shown in Figure 4.

4.1. Light Distortion

There is a simple way to check the light distortion in images. To measure the performance of the proposed framework, we selected several images from the available datasets and computed the lightness order error (LOE). Mathematically, it is defined as follows:
$$\mathrm{LOE} = \frac{1}{P} \sum_{n=1}^{P} R_d(n) \tag{27}$$
where P denotes the number of pixels, and R_d is the difference in the relative lightness order between the input image and the enhanced image. The equation for R_d can be written as follows:
$$R_d(n) = \sum_{y=1}^{P} U\!\left(L(n), L(y)\right) \oplus U\!\left(L'(n), L'(y)\right) \tag{28}$$
where ⊕ denotes the exclusive-or operation, and L(n) and L′(n) represent the maximum values among the RGB channels at location n in the input and enhanced images, respectively. The function U(k, j) is 1 if k ≥ j; otherwise, its value is 0.
To reduce the running-time complexity, the images were down-sampled to 100 × 100 pixels before measuring the LOE. The results are shown in Table 2 and Figure 5.
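A sketch of this measurement, under our reading of Equations (27) and (28), is given below; the nearest-neighbor down-sampling is an assumption made to keep the O(P²) pairwise comparison tractable.

```python
import numpy as np

def loe(original, enhanced, size=100):
    # Down-sample (nearest neighbor) and take the per-pixel max over RGB.
    def prep(img):
        h, w = img.shape[:2]
        r = np.linspace(0, h - 1, size).astype(int)
        c = np.linspace(0, w - 1, size).astype(int)
        return img[np.ix_(r, c)].max(axis=2).ravel()
    L, Lp = prep(original), prep(enhanced)
    # U(k, j) = 1 if k >= j; RD(n) XORs the order relations (Eq. 28).
    U = L[:, None] >= L[None, :]
    Up = Lp[:, None] >= Lp[None, :]
    return (U ^ Up).sum(axis=1).mean()   # average over n (Eq. 27)
```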

4.2. Color Distortion

The color checker datasets (NUS [38] and UEA [39]) contain images captured alongside a color checker board. To measure the color distortion between the input and the enhanced image, the color difference ΔE is defined as the Euclidean distance between the two colors in the CIE Lab color space (CIELAB):
$$\Delta E = \sqrt{(L_1 - L_2)^2 + (a_1 - a_2)^2 + (b_1 - b_2)^2} \tag{29}$$
where L* is the lightness, and a* and b* are the green–red and blue–yellow components, respectively. CIELAB is designed to be perceptually uniform with respect to human vision. The average RGB values of the enhanced image are calculated, and each pixel value is then mapped into the Lab space. Finally, the difference ΔE is calculated; using this method, we can determine the color distortion. The results are shown in Table 3 and Figure 6.
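For illustration, a short sketch of this measurement is shown below, assuming scikit-image is available for the sRGB-to-CIELAB conversion and that both inputs are floating-point images in [0, 1].

```python
import numpy as np
from skimage.color import rgb2lab

def mean_delta_e(reference, enhanced):
    # Per-pixel Euclidean distance in CIELAB (Eq. 29), averaged over the image.
    lab_r, lab_e = rgb2lab(reference), rgb2lab(enhanced)
    return np.sqrt(((lab_r - lab_e) ** 2).sum(axis=2)).mean()
```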

4.3. Running Time Comparison

The quality of a well-enhanced image is important, but the model should also be fast enough to obtain the desired results in a timely manner. As shown in Figure 7, the proposed method enhances images more quickly than the other methods. LSTWC, BIMEF, NPE, and SRIE are somewhat slower and tend to produce light distortion; their final enhanced results are quite good, but some image aspects still require improvement.
Furthermore, MSRCR and Dong enhance images at high speed, but they create strong light distortion. LIME achieved visually pleasing results but also suffered from light distortion. As the size of the images increases, LIME and SRIE become more time-consuming. In contrast, the computational cost of our proposed framework is not high, and the model produces quality results while addressing many important visual aspects of an image.

5. Conclusions and Future Work

To the best of our knowledge, most articles on image enhancement in the extant literature have presented traditional approaches, and these studies have provided few analytical results with respect to low-light image enhancement. The fundamental problems addressed by the proposed method arise from various factors left unresolved by current enhancement algorithms. In this paper, we proposed an effective and efficient framework to enhance normal and low-light images. The main purpose of low-light image enhancement is to make the important contents of an image visible and to preserve the overall image quality; therefore, enhancement schemes must enhance images with little distortion and good efficiency. The main contributions of this paper are as follows: First, the camera parameters and exposure were set to estimate an illumination map, and the Retinex algorithm was utilized to remove dark areas from images. To handle factors like dense fog and to keep the balance between contrast and brightness, a dehazing algorithm and an intensity transformation were used. Furthermore, color constancy significantly improved the color appearance, setting the true color of an image. A detail manipulation algorithm based on the weighted least squares method was also added, boosting the details of the image to create an enhanced result with better visual quality. The experimental results revealed that the proposed model performed multiple tasks and was effective compared with other state-of-the-art techniques. The results offer meaningful insights and suggest that low-light image enhancement can be applied in many vision-based applications (e.g., object recognition, edge detection, feature matching, image classification, surveillance, and tracking systems).
Moreover, the authors believe that the present study could be expanded for future research. The limitations are as follows: With respect to images that have been particularly degraded due to extremely bright and dark portions, the enhancement algorithm over-enhanced the bright portion. Thus, important details were lost, and the quality of the image was diminished. Apart from this, the CRM algorithm used fixed parameters for the camera model. Therefore, in the future, researchers could extend this work and add a deep learning strategy to properly set camera parameter values according to the individual camera information and scene exposure.

Author Contributions

Z.R. performed the experiments and wrote the manuscript; Y.P. analyzed the results and performed mentorship and research supervision; M.A. and F.U. guided and revised the manuscript; Q.D. performed the investigation and evidence collection. All authors read and approved the final manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 61571312, the Academic and Technical Leaders Support Foundation of Sichuan Province under Grant (2016)183-5, and the National Key Research and Development Program of China under Grant 2017YFB0802300.

Acknowledgments

The authors are grateful for the comments and reviews from the reviewers and editors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2017, 26, 982–993.
  2. Zhu, H.; Chan, F.H.; Lam, F.K. Image contrast enhancement by constrained local histogram equalization. Comput. Vis. Image Underst. 1999, 73, 281–290.
  3. Bernardin, K.; Stiefelhagen, R. Evaluating multiple object tracking performance: The CLEAR MOT metrics. J. Image Video Process. 2008, 2008, 1.
  4. Jin, L.; Satoh, S.; Sakauchi, M. A novel adaptive image enhancement algorithm for face detection. In Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004, Cambridge, UK, 26 August 2004; pp. 843–848.
  5. Leyvand, T.; Cohen-Or, D.; Dror, G.; Lischinski, D. Data-driven enhancement of facial attractiveness. ACM Trans. Gr. (TOG) 2008, 27, 38.
  6. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
  7. Ghita, O.; Whelan, P.F. A new GVF-based image enhancement formulation for use in the presence of mixed noise. Pattern Recognit. 2010, 43, 2646–2658.
  8. Panetta, K.; Agaian, S.; Zhou, Y.; Wharton, E.J. Parameterized logarithmic framework for image enhancement. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2011, 41, 460–473.
  9. Stark, J.A. Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans. Image Process. 2000, 9, 889–896.
  10. Huang, S.-C.; Cheng, F.-C.; Chiu, Y.-S. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process. 2013, 22, 1032–1041.
  11. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Gr. Image Process. 1987, 39, 355–368.
  12. Reza, A.M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J. VLSI Signal Process. Syst. Signal Image Video Technol. 2004, 38, 35–44.
  13. Chen, S.-D.; Ramli, A.R. Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans. Consum. Electron. 2003, 49, 1310–1319.
  14. Land, E.H. The retinex theory of color vision. Sci. Am. 1977, 237, 108–129.
  15. Liu, J.-P.; Zhao, Y.-M.; Hu, F.-Q. A nonlinear image enhancement algorithm based on single scale retinex. J.-Shanghai Jiaotong Univ.-Chin. Ed. 2007, 41, 685–688.
  16. Jobson, D.J.; Rahman, Z.-U.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
  17. Dong, X.; Pang, Y.A.; Wen, J.G. Fast efficient algorithm for enhancement of low lighting video. In Proceedings of the 2011 IEEE International Conference on Multimedia and Expo, Barcelona, Spain, 11–15 July 2011.
  18. Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790.
  19. Lee, J.Y.; Sunkavalli, K.; Lin, Z.; Shen, X.; So Kweon, I. Automatic content-aware color and tone stylization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2470–2478.
  20. Bhutada, G.G.; Anand, R.S.; Saxena, S.C. Edge preserved image enhancement using adaptive fusion of images denoised by wavelet and curvelet transform. Dig. Signal Process. 2011, 21, 118–130.
  21. Zha, X.-Q.; Luo, J.-P.; Jiang, S.-T.; Wang, J.-H. Enhancement of polysaccharide production in suspension cultures of protocorm-like bodies from Dendrobium huoshanense by optimization of medium compositions and feeding of sucrose. Process Biochem. 2007, 42, 344–351.
  22. Matin, F.; Jung, Y.; Park, K.-H. Multiscale Retinex algorithm with tuned parameter by particle swarm optimization. Korea Inst. Commun. Sci. Proc. Symp. Korean Inst. Commun. Inf. Sci. 2017, 6, 1636.
  23. Lin, H.; Shi, Z. Multi-scale retinex improvement for nighttime image enhancement. Optik-Int. J. Light Electron Opt. 2014, 125, 7143–7148.
  24. Song, J.; Zhang, L.; Shen, P.; Peng, X.; Zhu, G. Single low-light image enhancement using luminance map. In Proceedings of the Chinese Conference on Pattern Recognition, Chengdu, China, 5–7 November 2016; pp. 101–110.
  25. Tai, Y.-W.; Chen, X.; Kim, S.; Kim, S.J.; Li, F.; Yang, J.; Yu, J.; Matsushita, Y.; Brown, M.S. Nonlinear camera response functions and image deblurring: Theoretical analysis and practice. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2498–2512.
  26. Huo, Y.; Zhang, X. Single image-based HDR image generation with camera response function estimation. IET Image Process. 2017, 11, 1317–1324.
  27. Ying, Z.; Li, G.; Ren, Y.; Wang, R.; Wang, W. A new image contrast enhancement algorithm using exposure fusion framework. In Proceedings of the International Conference on Computer Analysis of Images and Patterns, Ystad, Sweden, 22–24 August 2017; pp. 36–46.
  28. Qian, X.; Wang, Y.; Wang, B. Fast color contrast enhancement method for color night vision. Infrared Phys. Technol. 2012, 55, 122–129.
  29. Raju, G.; Nair, M.S. A fast and efficient color image enhancement method based on fuzzy-logic and histogram. AEU-Int. J. Electron. Commun. 2014, 68, 237–243.
  30. Hao, S.; Feng, Z.; Guo, Y. Low-light image enhancement with a refined illumination map. Multimed. Tools Appl. 2017, 77, 29639–29650.
  31. Guo, H.; Zhang, G.; Mei, C.; Zhang, D.; Song, X. Color enhancement algorithm for low-quality image based on gamma mapping. In Proceedings of the Sixth International Conference on Electronics and Information Engineering, Dalian, China, 3 December 2015; p. 97941X.
  32. Provenzi, E.; Caselles, V. A wavelet perspective on variational perceptually-inspired color enhancement. Int. J. Comput. Vis. 2014, 106, 153–171.
  33. Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A fusion-based enhancing method for weakly illuminated images. Signal Process. 2016, 129, 82–96.
  34. Mann, S. Comparametric equations with practical applications in quantigraphic image processing. IEEE Trans. Image Process. 2000, 9, 1389–1406.
  35. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353.
  36. Buchsbaum, G. A spatial processor model for object colour perception. J. Frankl. Inst. 1980, 310, 1–26.
  37. Fattal, R.; Agrawala, M.; Rusinkiewicz, S. Multiscale shape and detail enhancement from multi-light image collections. ACM Trans. Gr. (TOG) 2007, 26, 51.
  38. Cheng, D.; Prasad, D.K.; Brown, M.S. Illuminant estimation for color constancy: Why spatial-domain methods work and the role of the color distribution. JOSA A 2014, 31, 1049–1058.
  39. Lynch, S.; Drew, M.; Finlayson, G. Colour constancy from both sides of the shadow edge. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Sydney, NSW, Australia, 2–8 December 2013; pp. 899–906.
  40. Ma, K.; Zeng, K.; Wang, Z. Perceptual quality assessment for multi-exposure image fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356.
  41. Vonikakis, V.; Andreadis, I.; Gasteratos, A. Fast centre–surround contrast modification. IET Image Process. 2008, 2, 19–34.
  42. Petro, A.B.; Sbert, C.; Morel, J.-M. Multiscale retinex. Image Process. On Line 2014, 4, 71–88.
  43. Wang, S.; Zheng, J.; Hu, H.-M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
  44. Ying, Z.; Li, G.; Gao, W. A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv 2017, arXiv:1711.00591.
  45. Łoza, A.; Bull, D.R.; Hill, P.R.; Achim, A.M. Automatic contrast enhancement of low-light images based on local statistics of wavelet coefficients. Dig. Signal Process. 2013, 23, 1856–1866.
  46. Han, Y.; Cai, Y.; Cao, Y.; Xu, X. A new image fusion performance metric based on visual information fidelity. Inf. Fusion 2013, 14, 127–135.
  47. Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281.
  48. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a completely blind image quality analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
Figure 1. Various conditions that negatively affect image quality, such as (a) bad weather; (b) dark light; (c) one side with darkness and one side with normal contrast; and (d) haze or fog.
Figure 2. The general flowchart of the proposed framework. CRM = camera response model.
Figure 3. Visual representation and comparison of the various methods. The odd rows show the results of various enhancement schemes, and the even rows show the extracted regions of interest (ROI).
Figure 4. A comparison of the results of handling several visual artifacts. For example, in the first row, the sky color, clouds, wall texture, and dark regions were addressed. The haze and color were treated well in the images in the second row. The texture details, trees, and leaf colors were enhanced well in the images in the third, fourth, and fifth rows, respectively.
Figure 5. Lightness distortion visual representation and comparison of the various algorithms. The lightness error (LOE) value range is 0–5000. Lower LOE values indicate that the images were only slightly degraded by light distortion, while higher LOE values indicate heavy light distortion.
Figure 6. Results of the color distortion; the enhanced result can be zoomed in to clearly see the difference.
Figure 7. Running time comparison of the various methods.
Table 1. Result of the quality assessment along with the running time.

Methods | PSNR | MSE | VIF | VSI | NIQE | Time (s)
BIMEF | 11.29358 | 0.074241 | 1.07210 | 0.998304 | 3.0815 | 1.179365
NPE | 9.132491 | 0.12211 | 0.891493 | 0.96469 | 2.9666 | 1.717044
SRIE | 12.6508 | 0.054315 | 1.20474 | 0.985432 | 3.1348 | 2.765191
LSTWC | 11.14675 | 0.076794 | 0.91805 | 0.985332 | 3.2217 | 1.14674
LIME | 10.21465 | 0.095178 | 1.630941 | 0.979886 | 2.8744 | 1.269965
Ours | 16.47228 | 0.028364 | 2.166639 | 0.99922 | 2.6730 | 1.128813
Table 2. Result comparison of the light distortion (LOE).

Methods | NUS | UEA | VV | NPE | LIME | MEF
MSRCR | 3034 | 1578 | 2728 | 1881 | 1829 | 1678
NPE | 411 | 689 | 817 | 639 | 1468 | 1146
Dong | 711 | 1337 | 848 | 1021 | 1239 | 1057
LSTWC | 991 | 881 | 1011 | 994 | 1189 | 979
BIMEF | 789 | 754 | 864 | 912 | 790 | 1021
SRIE | 414 | 657 | 560 | 530 | 819 | 747
LIME | 1428 | 960 | 1168 | 1090 | 1317 | 1063
Ours | 420 | 488 | 423 | 501 | 492 | 337
Table 3. Comparison of the color distortion (ΔE) on the color checker datasets.

Dataset | MSRCR | NPE | Dong | SRIE | LIME | LSTWC | BIMEF | Ours
UEA | 26.93 | 19.59 | 21.60 | 23.70 | 26.18 | 20.99 | 21.23 | 17.99
NUS | 22.04 | 19.89 | 24.51 | 18.55 | 27.79 | 21.55 | 19.66 | 16.89
