
Enhancement and Noise Suppression of Single Low-Light Grayscale Images

Ting Nie, Xiaofeng Wang, Hongxing Liu, Mingxuan Li, Shenkai Nong, Hangfei Yuan, Yuchen Zhao and Liang Huang
1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 Mechanical and Electronic Engineering Department, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(14), 3398; https://doi.org/10.3390/rs14143398
Submission received: 4 May 2022 / Revised: 22 June 2022 / Accepted: 23 June 2022 / Published: 15 July 2022

Abstract:
Low-light images have low contrast and high noise, making them difficult to read. Most existing image-enhancement methods focus on color images. In the present study, an enhancement and denoising algorithm for single low-light grayscale images is proposed. The algorithm is based on the multi-exposure fusion framework. First, on the basis of low-light tone-mapping operators, the optimal virtual exposure image is constructed according to an information entropy criterion. Then, the latent low-rank representation is applied to the two images to generate low-rank parts and saliency parts, which reduces noise after fusion. Next, the initial weight map is constructed from the information contained in the decomposed images, and an adaptive weight refinement algorithm is proposed to restore as much structural information as possible and preserve details while avoiding halo artifacts. When solving the weight maps, the decomposition and optimization of the nonlinear problem is converted into a total variation model, and an iterative method is used to reduce the computational complexity. Last, the normalized weight maps are used for image fusion to obtain the enhanced image. The experimental results show that the proposed method performs well in both subjective and objective evaluations compared with state-of-the-art enhancement methods for low-light grayscale images.


1. Introduction

In contemporary battlefields and target reconnaissance, there is an urgent need for detection technologies under low-illumination, low-visible-light conditions. With the development of low-light complementary metal-oxide-semiconductor sensors, the quality of low-light images has greatly improved. However, low-light images still have a number of problems, such as lack of details, dark colors, high noise, and low brightness, which restrict the application of low-light images.
Some efforts have been made to improve the quality of low-light images. According to the image-enhancement principle, existing methods can be divided into histogram equalization (HE)-based methods, Retinex-based methods, and dark channel prior (DCP)-based methods. HE algorithms improve image contrast and brightness by adjusting the image histogram with nonlinear stretching. These algorithms are simple, have low time complexity, and can effectively improve the brightness and contrast of low-light images [1]. However, HE algorithms enhance the whole image without fine-tuning image details, so the enhanced image tends to have amplified noise and may be over-enhanced. Recently, several improved HE algorithms have been proposed, such as local histogram equalization, bi-histogram equalization [2], minimum mean brightness error bi-histogram equalization [3], and background brightness–preserving histogram equalization [4]. A meta-heuristic algorithm called the barnacles mating optimizer was used for image contrast enhancement by casting the image as the solution of an optimization problem, achieving better results than traditional HE methods [5]. A global and adaptive contrast enhancement algorithm for low-illumination gray images based on the bilateral gamma adjustment function and particle swarm optimization (PSO) was proposed; it improves the overall visual effect of low-illumination gray images and avoids over-enhancement in local areas [6]. A method for contrast enhancement using shadowed sets was also presented, achieving acceptable performance with little loss of information [7]. In general, these algorithms still suffer from over-enhancement in practical applications and cannot reduce noise.
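For concreteness, the following short NumPy sketch shows plain global histogram equalization, the baseline operation that the HE family above builds on. It is illustrative only and is not any of the cited methods.

```python
import numpy as np

def histogram_equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()                       # cumulative distribution function
    cdf_min = cdf[cdf > 0].min()              # first non-zero bin of the CDF
    # Map each gray level through the normalized CDF (nonlinear stretching).
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Example: equalize a synthetic dark image.
dark = (np.random.rand(128, 128) * 60).astype(np.uint8)   # low-brightness test image
bright = histogram_equalize(dark)
```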
Methods based on the Retinex theory have been proposed to improve the quality of low-light images. These methods assume that the observed image can be decomposed into two components: reflectance and an illumination map. If the reflectance and illumination map can be accurately separated, the brightness of the original image can be improved by adjusting the intensity of the illumination map. Guo et al. initialized the illumination map by selecting the maximum value among the red, green, and blue channels at each pixel position [8]. In [9], an image-enhancement method that optimizes a regularized illumination map was proposed, and a deep-learning-based blind denoising framework was then introduced to improve the visual quality of the enhanced images. A weighted variational model for estimating both the reflectance and the illumination map from an observed image was also proposed; unlike conventional variational models, it preserves the estimated reflectance with more details [10]. In addition, a novel generative strategy for Retinex decomposition was proposed, by which the decomposition is cast as a generative problem [11]. The Retinex-based methods take dynamic range compression and edge enhancement into account at the same time, yet they have difficulty accurately reflecting the brightness of the scene from a single image. Moreover, decomposing a single image into two images is an ill-conditioned problem, which limits the estimation of the nonlinear illuminance component. Halo artifacts are also prone to appear, and the noise is not reduced.
Several methods [12,13] have been proposed based on the dark channel prior (DCP). A method based on the weighted fusion of a robust Retinex model and the dark channel prior has also been proposed [14]. Although such dehazing-based methods improve the quality of low-light images to a certain extent, they lack a physical mechanism for image enhancement and easily cause halo artifacts.
With the rapid development of deep learning, convolutional neural networks (CNNs) have been widely applied in the field of low-light image enhancement. Zhu et al. proposed a two-stage low-light image signal processing network with a two-branch structure to reconstruct low-light images and enhance textural details [15]. An auto-encoder and a CNN were combined to train a low-light enhancer that first improves the illumination and then the details of the low-light image in a unified framework, avoiding issues such as over-enhancement and color distortion [16]. By treating low-light enhancement as a residual learning problem, that is, estimating the residual between low- and normal-light images, Wang et al. proposed a deep lightening network that benefits from recent advances in CNNs [17]. An enhancement method based on a generative adversarial network (GAN) was proposed to enhance low-light images and image details simultaneously [18]. A multi-exposure fusion (MEF) algorithm for gray images based on a decomposition CNN and weighted sparse representation was also proposed [19]. In addition, an image enhancement algorithm based on the BP neural network was proposed, in which the BP neural network predicts and reconstructs the processing coefficients of the image model and obtains a good visual effect [20]. In general, machine learning methods can achieve good image quality, yet most algorithms must be trained on many expert-retouched images, and the enhancement performance depends heavily on the size and diversity of the training sets. Thus, the generalization of such methods to other images still needs to be improved.
Inspired by the fusion of high-dynamic-range (HDR) images, a new image-enhancement method was proposed [21]. Specifically, the illumination map of the input low-light image was used to generate illumination maps of different virtual exposure levels, and then the illumination maps were fused with the hue and saturation information of the input image to obtain the enhanced result [21]. Wang et al. and Fu et al. adopted the same fusion method: based on the initial illumination map, they used different nonlinear functions to enhance the brightness of the illumination map, then extracted the Laplacian image pyramid and the Gaussian pyramid that were used as the weights. Next, they fused multiple illumination maps with different exposure levels to obtain the illumination map with enhanced brightness, thereby producing the enhanced image [22,23]. The only difference between their two studies lies in the calculation of the initial illumination map. Wang et al. used the luminance channel of the hue–saturation–value color space as the initial illumination map, while Fu et al. used an image decomposition method based on a guided filter to extract the initial illumination map. Ying et al. analyzed the difference between low-exposure images and normal-exposure images and used a statistical simulation function to simulate the exposure process of the image. By changing the parameters of the function, images with different exposure levels were obtained. Then, pixel-level fusion of multiple virtual exposure images was carried out to obtain the final enhanced image. Their experimental results showed that the method significantly improved the image brightness and that the enhanced image retained the vivid colors and had a high fidelity [24,25]. Weighted sparse representation and a guided filter in the gradient domain were proposed to retain image edges more adequately in gray images [26].
Through research and analytical comparison of various algorithms, we found that there are many methods for grayscale image enhancement; however, few algorithms are designed specifically for low-light grayscale images. Low-illumination images contain areas that are too bright or too dark, and if these imaging characteristics are not taken into account, directly applying general grayscale enhancement algorithms to low-light images causes over-enhancement and loss of detail. Most existing algorithms are developed for enhancing low-light color images; these methods can also be applied to low-light grayscale images and achieve a certain enhancement effect. To apply them to grayscale images, the grayscale image must first be converted into a pseudo-color image (red = green = blue = original grayscale image). Compared with color images, grayscale images have only one channel and therefore less information available for enhancement, which leads to loss of details or halo artifacts. In addition, few algorithms account for the noise amplified during the enhancement of low-light images. In practical applications, it is also challenging to simultaneously capture multiple images of the same scene at different exposures, which limits the application of deep learning methods.
To address the above problems, we propose a new single-grayscale-image-enhancement method based on multi-exposure fusion. First, a virtual image construction method based on the inverse tone-mapping operator is proposed; this makes it possible to use a multi-exposure fusion approach when the input is a single low-light image. The global structure map and local structure map are then obtained through the latent low-rank representation (LatLRR), which also achieves the denoising effect. Next, adaptive weight maps are designed for the decomposed images to preserve image details, and an adaptive optimization model of the low-rank weight map is proposed to avoid halo artifacts and obtain better visual effects. Last, the enhanced image is obtained through image fusion. The proposed method not only preserves detailed information and enhances the visual effect but also achieves denoising. The contributions of this study are as follows:
  • A virtual image construction method based on the inverse tone-mapping operator is proposed. Image information entropy is applied to construct the virtual image with the optimal exposure ratio, so that the algorithm can adopt the multi-exposure fusion framework with a single low-light image as the input.
  • Image decomposition based on LatLRR, with separate fusion of the low-rank and saliency parts after weight normalization, is proposed. This process avoids amplifying noise during fusion and achieves noise reduction.
  • According to the characteristics of low-light grayscale images, adaptive weighting factors are constructed for the decomposed global and local structures to avoid over-enhancement and improve the visual effect.
  • An adaptive optimization model of the low-rank weight map is proposed to retain image details and avoid halo artifacts. The total variation method is applied to the establishment and solution of the model: the nonlinear problem is converted into a total variation model to reduce the computational complexity.
The remainder of this paper is organized as follows: in Section 2, the proposed single low-light grayscale image enhancement algorithm is presented. The experimental results and analysis are shown in Section 3. The conclusions are presented in Section 4.

2. Proposed Low-Light Image Enhancement Method

Our framework consists of four main components. (1) Virtual image construction: the optimal virtual image is generated from the original low-light image based on the inverse mapping function. (2) Image decomposition and noise suppression: LatLRR is used to decompose the source and virtual images into low-rank and saliency structures, and the noise is removed from the images in this process; the low-rank and saliency parts are processed separately in the subsequent steps. (3) Weight generation: the weight maps of the low-rank and saliency parts are determined separately, an adaptive optimization model of the low-rank weight map is proposed to retain image details and avoid halo artifacts, and the total variation method is applied to establish and solve the weight maps. (4) Multi-exposure fusion: the two decomposed low-rank parts are reconstructed into a new low-rank image, the two decomposed saliency parts are reconstructed into a new saliency image, and the two new images are fused to obtain the final enhanced image. The flowchart is shown in Figure 1. The details of each component are described in the following sections.

2.1. Virtual Image Construction

In this step, the linear expansion method based on the global model proposed by Akyüz was used. Specifically, low-light mapping and tone-mapping operators were used on an HDR image to facilitate screen display [27]. The mathematical model is as follows:
X_d(x,y) = \frac{X_m(x,y)\left(1 + X_m(x,y)/X_{white}^2\right)}{1 + X_m(x,y)}    (1)
In the original equation, X_d(x, y) represents the brightness value of pixel (x, y) in the low-dynamic-range image, X_w(x, y) represents the brightness value of the HDR image, X_white represents the minimum brightness value that is mapped to white light, and X_m(x, y) is the brightness of the HDR image scaled by the ratio \alpha / X_{w,H}. Here, \alpha is a quantification parameter: the larger \alpha is, the brighter the quantized image. X_{w,H} is the harmonic mean of the brightness of the HDR image.
In this study, the above equation was introduced into the virtual image construction process, and X_m(x, y) was calculated as follows:
X_m(x,y) = \frac{\alpha X_w(x,y)}{X_{w,H}}    (2)
Substituting Equation (2) in Equation (1):
\frac{\alpha^2}{X_{white}^2 X_{w,H}^2} X_w^2(x,y) + \frac{\alpha}{X_{w,H}}\left(1 - X_d(x,y)\right) X_w(x,y) - X_d(x,y) = 0    (3)
Equation (3) is a quadratic equation in X_w(x, y), so X_w(x, y) can be obtained by solving it. Since the brightness of the HDR image obtained by inverse tone mapping cannot be negative, the positive root is taken in this study:
X_w(x,y) = \frac{X_{white}^2 X_{w,H}}{2\alpha}\left( X_d(x,y) - 1 + \sqrt{\left(1 - X_d(x,y)\right)^2 + \frac{4 X_d(x,y)}{X_{white}^2}} \right)    (4)
For simplicity, the variable \beta = X_{w,H} / \alpha is introduced. Since the maximum brightness X_{w,max} of the image after inverse tone mapping must correspond to the maximum brightness X_{d,max} of the low-light image, the following equation is obtained:
\beta = \frac{2 X_{w,\max}}{X_{white}^2 \left( X_{d,\max} - 1 + \sqrt{\left(1 - X_{d,\max}\right)^2 + \frac{4 X_{d,\max}}{X_{white}^2}} \right)}    (5)
Assuming that after normalization the maximum brightness value of the image is 1, that is, X_{d,max} = 1, we have
\beta = \frac{X_{w,\max}}{X_{white}}    (6)
The virtual image construction operator is obtained:
X_w(x,y) = f(X_d, \gamma) = \frac{X_{w,\max}\, X_{white}}{2}\left( X_d(x,y) - 1 + \sqrt{\left(1 - X_d(x,y)\right)^2 + \frac{4 X_d(x,y)}{X_{white}^2}} \right)    (7)
where \gamma = X_{white} and X_{w,max} = 1. In Equations (1)–(7), the original images are normalized to [0, 1], so the intermediate image f(X_d, \gamma) is a normalized image. To construct an image with an exposure ratio different from that of X_d, the main parameter is X_{white}. If X_{white} is too high, the low brightness values of the original image are mapped to very low values in X_w, and the high brightness values of the original image are mapped close to the maximum brightness value.
Because a well-exposed image provides rich information to the human eye, the image information entropy was adopted to automatically calculate the value of \gamma according to the input image and thus obtain the optimally exposed image. The one-dimensional image entropy is a statistic that reflects the average amount of information in an image and is calculated as follows:
H(X) = -\sum_{i=0}^{N_1} p_i \log p_i    (8)
p_i = \frac{f_i}{MN}    (9)
where X is an image and H(X) is its information entropy; i is the grayscale value in the image X, i = 0–255; N_1 is the maximum grayscale value of the image, N_1 = 255; p_i is the probability of grayscale value i; f_i is the number of times that grayscale value i appears in the image X; M is the number of image rows; and N is the number of image columns.
As the value of γ increases, the image information entropy first increases and then decreases. Thus, the information entropy can be used to determine the optimal γ :
\gamma = \arg\max_{\gamma} H\left( f(X_d, \gamma) \right)    (10)
When solving for the optimal \gamma, f(X_d, \gamma) is mapped to [0, 255] to calculate the information entropy, and the image is down-sampled to reduce the amount of computation. In Figure 2, image (a) is an original low-light image. Based on Equation (10), \gamma is varied from 20 to 60 (the corresponding normalized values are 20/255–60/255), and the corresponding intermediate images are calculated, as shown in Figure 2b–f. Their information entropies are 3.79, 4.17, 4.85, 4.08, and 3.82, respectively. Image (d) has the highest information entropy, so it is selected as the optimal image. In the following, the source image is denoted as X_src; X_src is used as the input X_d in Equation (7), and the virtual image generated by Equation (7) is denoted as X_vir.
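The construction of the virtual exposure image and the entropy-driven choice of \gamma can be summarized in a few lines. The following Python sketch assumes the reconstruction of Equations (7)–(10) given above, a [0, 1] normalized input, a linear search over \gamma in the range 20/255–60/255, and 4x down-sampling; the published implementation may differ in these details.

```python
import numpy as np

def virtual_image(x_d, gamma, x_w_max=1.0):
    """Inverse tone-mapping operator of Eq. (7); x_d is a grayscale image in [0, 1]."""
    root = np.sqrt((1.0 - x_d) ** 2 + 4.0 * x_d / gamma ** 2)
    return np.clip(0.5 * x_w_max * gamma * (x_d - 1.0 + root), 0.0, 1.0)

def image_entropy(img01):
    """One-dimensional information entropy of Eqs. (8)-(9) on a [0, 1] image."""
    levels = np.round(img01 * 255).astype(np.uint8)
    p = np.bincount(levels.ravel(), minlength=256) / levels.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def optimal_virtual_image(x_src, gammas=np.arange(20, 61) / 255.0):
    """Pick the gamma that maximizes the entropy (Eq. (10)) and build X_vir."""
    small = x_src[::4, ::4]                      # down-sample to speed up the search
    scores = [image_entropy(virtual_image(small, g)) for g in gammas]
    best = float(gammas[int(np.argmax(scores))])
    return virtual_image(x_src, best), best
```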

2.2. Image Decomposition and Noise Suppression

Low-light images typically contain significant noise. To improve the signal-to-noise ratio (SNR) of the enhanced image, LatLRR [28,29] was utilized to decompose the source image X_src and the intermediate image X_vir. LatLRR is efficient and robust to noise and outliers. It decomposes the image into a global (low-rank) structure, a local (saliency) structure, and sparse noise. The LatLRR decomposition model can be defined as follows:
[Z, L, E] = \min_{Z, L, E} \|Z\|_* + \|L\|_* + \lambda \|E\|_1, \quad \text{s.t.} \; X = XZ + LX + E    (11)
where \lambda is the equilibrium factor, \|\cdot\|_* denotes the nuclear norm, \|\cdot\|_1 is the \ell_1 norm, X is an image of size M \times N, Z is the low-rank coefficient, L is the saliency coefficient, and E is the sparse noise. The low-rank component XZ (denoted X_L), the saliency component LX (denoted X_S), and the sparse noise component E can then be derived. The noise is removed, and only the low-rank and saliency components are used as inputs for the fusion process.
An example of LatLRR decomposition using Equation (11) is shown in Figure 3. Figure 3a is the source image. Figure 3b depicts the sparse noise in a 3D display, which is separated and removed by the decomposition in Equation (11). Figure 3c shows the low-rank component of the image, and Figure 3d depicts the saliency features.
Equation (11) was used to decompose the source image X_src and the virtual image X_vir. After low-rank decomposition, the source image X_src was decomposed into X_L^src and X_S^src, and the virtual image X_vir was decomposed into X_L^vir and X_S^vir. In this paper, X_L denotes the low-rank images, X_L = {X_L^src, X_L^vir}, and X_S denotes the saliency images, X_S = {X_S^src, X_S^vir}. In the following process, they are treated separately and fused at the end, with the sparse noise discarded.
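For reference, the sketch below solves the LatLRR model of Equation (11) with a standard inexact augmented Lagrange multiplier (ALM) scheme, as is commonly used for LatLRR [28,29]. The parameter values (lam, mu, rho, the iteration count) are illustrative assumptions rather than the exact settings of this paper, and in practice the image is usually down-sampled or processed in tiles because Z and L are N x N and M x M matrices.

```python
import numpy as np

def _svt(m, tau):
    """Singular value thresholding: proximal operator of tau * (nuclear norm)."""
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt

def _shrink(m, tau):
    """Soft thresholding: proximal operator of tau * (l1 norm)."""
    return np.sign(m) * np.maximum(np.abs(m) - tau, 0.0)

def latlrr(x, lam=0.8, iters=200, tol=1e-6):
    """Solve min ||Z||_* + ||L||_* + lam*||E||_1  s.t.  X = XZ + LX + E  (Eq. (11))
    with an inexact ALM scheme. Returns (XZ, LX, E)."""
    m, n = x.shape
    Z, J = np.zeros((n, n)), np.zeros((n, n))
    L, S = np.zeros((m, m)), np.zeros((m, m))
    E = np.zeros((m, n))
    Y1, Y2, Y3 = np.zeros((m, n)), np.zeros((n, n)), np.zeros((m, m))
    mu, mu_max, rho = 1e-3, 1e6, 1.5
    inv_n = np.linalg.inv(np.eye(n) + x.T @ x)   # reused in every Z update
    inv_m = np.linalg.inv(np.eye(m) + x @ x.T)   # reused in every L update
    for _ in range(iters):
        J = _svt(Z + Y2 / mu, 1.0 / mu)
        S = _svt(L + Y3 / mu, 1.0 / mu)
        Z = inv_n @ (x.T @ (x - L @ x - E) + J + (x.T @ Y1 - Y2) / mu)
        L = ((x - x @ Z - E) @ x.T + S + (Y1 @ x.T - Y3) / mu) @ inv_m
        E = _shrink(x - x @ Z - L @ x + Y1 / mu, lam / mu)
        r = x - x @ Z - L @ x - E                 # constraint residual
        Y1, Y2, Y3 = Y1 + mu * r, Y2 + mu * (Z - J), Y3 + mu * (L - S)
        mu = min(rho * mu, mu_max)
        if np.abs(r).max() < tol:
            break
    return x @ Z, L @ x, E   # low-rank part, saliency part, sparse noise
```

In the proposed pipeline, only the first two outputs (the low-rank and saliency parts) of the source and virtual images are carried forward; the sparse noise term E is discarded, which is what yields the noise suppression.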

2.3. Weight Generation

For the image enhancement algorithm based on multi-exposure fusion, the weights of the images directly affect the fusion result. In this study, different weight map construction methods were used after decomposition to achieve the best visual effect, avoid the halo artifacts, and preserve as many image details as possible.

2.3.1. Low-Rank Component

The low-rank component contains global information, energy information, and brightness and contrast information about the image. First, a contrast factor is constructed. The initial contrast weight is constructed by Equation (12):
D_0 = \psi_2\left( \left| X_L \otimes H_{laplacian} \right| \right)    (12)
H_{laplacian} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}    (13)
where X_L is the low-rank image, H_{laplacian} is the Laplacian operator, \otimes denotes convolution, and |\cdot| is the absolute value. \psi_2 represents a two-dimensional Gaussian filter with a standard deviation of 0.5 and a kernel size of 7. D_0 is the initial contrast weight.
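A minimal SciPy sketch of Equations (12)–(13) is given below. Interpreting 0.5 as the Gaussian standard deviation, and choosing truncate so that the effective window is about 7 x 7, are assumptions based on the description above.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def initial_contrast_weight(x_l):
    """Initial contrast weight D0 of Eqs. (12)-(13): Gaussian-smoothed magnitude
    of the Laplacian response of the low-rank image x_l."""
    h_lap = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])
    response = np.abs(convolve(x_l, h_lap, mode='nearest'))
    # sigma = 0.5; truncate chosen so the effective window is about 7 x 7
    return gaussian_filter(response, sigma=0.5, truncate=6.0)
```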
To avoid the halo artifacts and retain details and texture information, a weight map optimization operator D L is proposed. The solution equation of D L is
D_L = \min_{D_L} \left\| D_L(x) - D_0(x) \right\|_2^2 + \lambda_1 \left( \left\| \nabla_h D_L \right\|_1 + \left\| \nabla_v D_L \right\|_1 \right) + \lambda_2 \, G(D_L)    (14)
G(D_L) = \frac{\sum_{y \in \Omega(x)} \left| \nabla_v D_L(y) \right|}{\left| \sum_{y \in \Omega(x)} \nabla_v D_L(y) \right|} + \frac{\sum_{y \in \Omega(x)} \left| \nabla_h D_L(y) \right|}{\left| \sum_{y \in \Omega(x)} \nabla_h D_L(y) \right|}    (15)
where x denotes a pixel in the image; \|\cdot\|_2^2 and \|\cdot\|_1 are the squared \ell_2 norm and the \ell_1 norm, respectively; \lambda_1 and \lambda_2 are weight factors; \nabla_v and \nabla_h are the first-order gradients in the vertical and horizontal directions, respectively; \Omega(x) is a w \times w neighborhood centered at x; and y denotes the pixels within \Omega(x). In this study, w = 11. The first term in Equation (14) minimizes the difference from the initial weight factor, the second term ensures the continuity of the weight factor, and the third term preserves image detail and avoids halo artifacts. For convenience, the pixel x is omitted in the expressions below.
To solve Equation (14), two intermediate variables D_v and D_h are introduced, where \nabla_v D_L and \nabla_h D_L are the first-order gradients of D_L in the vertical and horizontal directions. The unconstrained Equation (14) is then rewritten as
D_L = \min_{D_L} \left\| D_L - D_0 \right\|_2^2 + \lambda_1 \left( \left\| D_h \right\|_1 + \left\| D_v \right\|_1 \right) + \lambda_2 \, G(D_L)    (16)
The constrained equivalent of Equation (16) is
D_L = \min_{D_L} \left\| D_L - D_0 \right\|_2^2 + \lambda_1 \left( \left\| D_h \right\|_1 + \left\| D_v \right\|_1 \right) + \lambda_2 \, G(D_L) + \frac{\beta_1}{2} \left\| D_h - \nabla_h D_L \right\|_2^2 + \frac{\beta_2}{2} \left\| D_v - \nabla_v D_L \right\|_2^2, \quad \text{s.t.} \; D_v = \nabla_v D_L, \; D_h = \nabla_h D_L    (17)
where β 1 and β 2 are positive constants. Equation (17) can be solved by a two-step iterative method. The first step is to calculate D h and D v :
[D_h, D_v] = \min_{D_h, D_v} \lambda_1 \left( \left\| D_h \right\|_1 + \left\| D_v \right\|_1 \right) + \frac{\beta_1}{2} \left\| D_h - \nabla_h D_L \right\|_2^2 + \frac{\beta_2}{2} \left\| D_v - \nabla_v D_L \right\|_2^2    (18)
The second step is to calculate D L by substituting D h and D v into the following equation:
D_L = \min_{D_L} \left\| D_L - D_0 \right\|_2^2 + \frac{\beta_1}{2} \left\| D_h - \nabla_h D_L \right\|_2^2 + \frac{\beta_2}{2} \left\| D_v - \nabla_v D_L \right\|_2^2 + \lambda_2 \, G(D_L)    (19)
According to reference [30], the L1-norm optimization problem in Equation (18) can be directly solved:
D_h = \max\left( \nabla_h D_L - \frac{\lambda_1}{\beta_1}, \; 0 \right)    (20)
D_v = \max\left( \nabla_v D_L - \frac{\lambda_1}{\beta_2}, \; 0 \right)    (21)
To solve Equation (19), two intermediate variables are introduced, namely \tilde{D} and F(D) = \left\| D_L - D_0 \right\|_2^2 + \frac{\beta_1}{2} \left\| D_h - \nabla_h D_L \right\|_2^2 + \frac{\beta_2}{2} \left\| D_v - \nabla_v D_L \right\|_2^2. Equation (19) can then be decomposed into a forward-splitting component, Equation (22), and a backward-splitting component, Equation (23), using the proximal forward–backward splitting framework [30]:
\tilde{D} = D_L - t \, \nabla F(D)    (22)
D_L = \min_{D_L} \left\| D_L - \tilde{D} \right\|_2^2 + \lambda_2 t \, G(D_L)    (23)
where \nabla F(D) is the derivative of F(D) and t is the iteration step coefficient. Equation (23) can be solved with the relative total variation model described in [31].
Up to this point, the decomposition and optimization of the nonlinear problem have been converted into a variational model, and the numerical value of the corrected weighting factor can then be solved through iterations, as shown in Algorithm 1.
Algorithm 1. Solution Process of the Weight Factor D_L
1: Initialize D_0
2: For k = 0 to K (K is the number of iterations)
3: Update D_h and D_v according to Equations (20) and (21)
4: Update \tilde{D} according to Equation (22)
5: Update D_L according to Equation (23)
6: End for
7: Output the optimal weight factor D_L
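The following Python sketch mirrors the structure of Algorithm 1, with defaults set to the parameter values reported in Section 3.1. It is a simplified illustration: circular forward differences stand in for \nabla_h and \nabla_v, and the proximal step of Equation (23), which the paper solves with the relative total variation model of [31], is replaced by a mild Gaussian smoothing so that the sketch stays self-contained.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def _grad_h(d): return np.roll(d, -1, axis=1) - d   # forward horizontal difference (circular)
def _grad_v(d): return np.roll(d, -1, axis=0) - d   # forward vertical difference (circular)
def _grad_h_t(p): return np.roll(p, 1, axis=1) - p  # adjoint of _grad_h
def _grad_v_t(p): return np.roll(p, 1, axis=0) - p  # adjoint of _grad_v

def refine_weight(d0, lam1=2.0, lam2=1.0, beta1=1.0, beta2=1.0, t=0.5, iters=20):
    """Sketch of Algorithm 1: iterative refinement of the low-rank weight map."""
    d_l = d0.copy()
    for _ in range(iters):
        # Eqs. (20)-(21): updates of the auxiliary gradient variables
        d_h = np.maximum(_grad_h(d_l) - lam1 / beta1, 0.0)
        d_v = np.maximum(_grad_v(d_l) - lam1 / beta2, 0.0)
        # Eq. (22): forward (gradient) step on F(D)
        grad_f = (2.0 * (d_l - d0)
                  + beta1 * _grad_h_t(_grad_h(d_l) - d_h)
                  + beta2 * _grad_v_t(_grad_v(d_l) - d_v))
        d_tilde = d_l - t * grad_f
        # Eq. (23): backward (proximal) step; the relative total variation
        # solver of [31] is approximated here by a mild Gaussian smoothing
        d_l = gaussian_filter(d_tilde, sigma=lam2 * t)
    return d_l
```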

2.3.2. Saliency Component

The saliency part contains prominent local features and particular brightness distributions. For the saliency part, a texture-based weight factor is designed as in Equation (24):
D_S(x) = \left\| X_S(x) - \mu_S \right\|^a    (24)
where X_S is the saliency image, x denotes a pixel in the image, and D_S is the saliency weight map of X_S. \|\cdot\| denotes the norm, \mu_S is the mean value of the saliency image, and a is a gain parameter set to 3.
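A one-function sketch of Equation (24) is shown below, interpreting the norm per pixel as an absolute difference from the mean saliency value.

```python
import numpy as np

def saliency_weight(x_s, a=3.0):
    """Saliency weight of Eq. (24): per-pixel deviation from the mean saliency
    value, raised to the gain a (a = 3 in this paper)."""
    return np.abs(x_s - x_s.mean()) ** a
```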

2.3.3. Weight Normalization

After low-rank decomposition, the low-rank components X_L = {X_L^src, X_L^vir} and saliency components X_S = {X_S^src, X_S^vir} of the source image X_src and the intermediate image X_vir were obtained, and the weight maps of the four decomposed images were constructed according to Section 2.3.1 and Section 2.3.2. The weights of the low-rank components are D_L^src and D_L^vir, and the weights of the saliency components are D_S^src and D_S^vir.
Finally, the weights of the four images need to be normalized:
\tilde{D}_L^{vir} = D_L^{vir} / \left( D_L^{src} + D_L^{vir} \right)    (25)
\tilde{D}_L^{src} = D_L^{src} / \left( D_L^{src} + D_L^{vir} \right)    (26)
\tilde{D}_S^{vir} = D_S^{vir} / \left( D_S^{src} + D_S^{vir} \right)    (27)
\tilde{D}_S^{src} = D_S^{src} / \left( D_S^{src} + D_S^{vir} \right)    (28)
To achieve good enhancement performance, Gaussian pyramids of the weight maps \tilde{D}_L^{src}, \tilde{D}_L^{vir}, \tilde{D}_S^{src}, and \tilde{D}_S^{vir} were generated, and Laplacian pyramid fusion was used to obtain the fusion results [32].
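The normalization of Equations (25)–(28) and the Gaussian/Laplacian pyramid fusion can be sketched as follows. The sketch assumes OpenCV (cv2); the number of pyramid levels and the small epsilon added to the denominator are illustrative choices, not values from the paper.

```python
import cv2
import numpy as np

def normalize_weights(d_src, d_vir, eps=1e-6):
    """Eqs. (25)-(28): per-pixel normalization so the two weights sum to one."""
    total = d_src + d_vir + eps
    return d_src / total, d_vir / total

def pyramid_fuse(img_a, img_b, w_a, w_b, levels=4):
    """Minimal Gaussian/Laplacian pyramid fusion in the spirit of [32]."""
    def gauss_pyr(x):
        pyr = [x]
        for _ in range(levels):
            pyr.append(cv2.pyrDown(pyr[-1]))
        return pyr
    def lap_pyr(x):
        g = gauss_pyr(x)
        lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=(g[i].shape[1], g[i].shape[0]))
               for i in range(levels)]
        return lap + [g[-1]]                       # coarsest level kept as-is
    la, lb = lap_pyr(img_a.astype(np.float32)), lap_pyr(img_b.astype(np.float32))
    ga, gb = gauss_pyr(w_a.astype(np.float32)), gauss_pyr(w_b.astype(np.float32))
    fused = [ga[i] * la[i] + gb[i] * lb[i] for i in range(levels + 1)]
    out = fused[-1]
    for i in range(levels - 1, -1, -1):            # collapse the fused pyramid
        out = cv2.pyrUp(out, dstsize=(fused[i].shape[1], fused[i].shape[0])) + fused[i]
    return out
```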

2.4. Multi-Exposure Fusion

To reduce the computational complexity, we only used the source image and the generated intermediate virtual image for fusion. First, the decomposed low-rank images X_L = {X_L^src, X_L^vir} were fused to obtain \tilde{X}_L, and the saliency images X_S = {X_S^src, X_S^vir} were fused to obtain \tilde{X}_S:
\tilde{X}_L = \tilde{D}_L^{vir} X_L^{vir} + \tilde{D}_L^{src} X_L^{src}    (29)
\tilde{X}_S = \tilde{D}_S^{vir} X_S^{vir} + \tilde{D}_S^{src} X_S^{src}    (30)
Last, \tilde{X}_S and \tilde{X}_L were fused to obtain the final enhanced image:
\tilde{X} = \tilde{X}_L + \tilde{X}_S    (31)
The steps of the low-light image-enhancement method are listed in Algorithm 2.
Algorithm 2. Low-light grayscale image-enhancement method proposed in this study
Input: Low-light grayscale image X_src;
Output: Enhanced image \tilde{X};
1. Generation of the intermediate virtual image X_vir by Equations (1)–(10).
2. LatLRR decomposition of the source and intermediate images by Equation (11).
3. Weight map construction of the low-rank component by Equations (12)–(23).
4. Weight map construction of the saliency component by Equation (24).
5. Weight map normalization by Equations (25)–(28).
6. Image fusion by Equations (29)–(31).
7. Output the enhanced image \tilde{X}.

3. Experimental Results and Analysis

In this section, the parameters of the proposed method are first analyzed, and then the proposed method is compared with eight state-of-the-art algorithms [6,8,9,11,12,16,18,24] in terms of visual effect and objective evaluation indices. On the basis of image dehazing, reference [12] proposed a model that directly applies the DCP-based method to the inverted image. In reference [24], the camera response function (CRF) model was used to construct virtual images, and the multi-exposure fusion framework was then used to obtain the enhanced image. Reference [8] is based on Retinex: the illumination of each pixel is first estimated individually by finding the maximum value among the red, green, and blue channels, and the initial illumination map is then refined by imposing a structure prior on it. Reference [11] is also based on Retinex; it estimates the latent components and performs low-light image enhancement within a deep learning framework. Reference [6] is based on the bilateral gamma adjustment function combined with particle swarm optimization (PSO), and the algorithm significantly enhances the visual effect of low-illumination gray images. Reference [16] proposed a fast and lightweight deep learning-based algorithm for low-light image enhancement using the light channel of hue saturation lightness (HSL); it uses the single lightness channel 'L' of the HSL color space instead of the traditional RGB channels to reduce time consumption. Reference [9] enhances low-light images through regularized illumination optimization and deep noise suppression. Reference [18] proposed an enhancement method based on a generative adversarial network (GAN).
The models selected for comparison involved a typical Retinex model, a DCP dehazing model, a deep learning model, and a multi-exposure fusion framework as shown in Table 1.
Experiments were carried out using MATLAB 2020a on a computer with an Intel Core i7 3.40-GHz CPU, 16 GB of RAM, and the Microsoft Windows 10 operating system. The machine learning experiments were implemented in Python 3.8.0 (available at https://www.python.org/downloads/release/python-380/, accessed on 3 May 2022).

3.1. Parameter Settings

Most existing datasets contain only color images, whereas the algorithm in this study was designed to enhance low-light grayscale images. Thus, a low-light camera (G400BSI) was used to capture low-light grayscale images. The proposed method has two groups of parameters. (1) The maximum brightness X_{w,max} and \gamma in the virtual image construction step: X_{w,max} is the normalized maximum brightness, X_{w,max} = 1; \gamma is obtained by maximizing the information entropy, and its lower limit is set to 10 to speed up the iteration. In LatLRR, \lambda = 0.8. (2) In the low-rank weight construction process, the regularization factors \lambda_1 and \lambda_2 balance the terms, the penalty factors \beta_1 and \beta_2 ensure the stability of the solution, and t affects the speed of the solution. Increasing \lambda_2 causes more blurring, although many textures are still retained, and \lambda_1 controls the smoothness of the weight factors. Extensive experiments were carried out on these parameters using a cross-checking test: \lambda_1 and \lambda_2 were varied from 1 to 10 in steps of 1, \beta_1 and \beta_2 from 1 to 10 in steps of 1, and t from 0.01 to 2 in steps of 0.01. The objective evaluation indices of 20 groups of data (Section 3.3) were calculated for each parameter set, and the combination with the best indices was chosen as the empirical parameters. The final settings were \lambda_1 = 2, \lambda_2 = 1, \beta_1 = 1, \beta_2 = 1, and t = 0.5.
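The parameter sweep described above can be expressed as a simple exhaustive search. In the sketch below, score_fn is a hypothetical callable that runs the full pipeline with a given parameter tuple and returns an aggregate quality score over the 20 image groups (e.g., mean PSNR or SSIM, higher is better). In practice a coarser or staged search would normally be used, since the full grid contains roughly two million combinations.

```python
import itertools

def grid_search(score_fn,
                lam1s=range(1, 11), lam2s=range(1, 11),
                beta1s=range(1, 11), beta2s=range(1, 11),
                ts=[i / 100 for i in range(1, 201)]):
    """Exhaustive sweep over (lambda1, lambda2, beta1, beta2, t) as in Section 3.1."""
    best_params, best_score = None, float('-inf')
    for params in itertools.product(lam1s, lam2s, beta1s, beta2s, ts):
        score = score_fn(*params)          # aggregate objective index for this setting
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```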

3.2. Subjective Analysis

The captured images were enhanced by the different methods, and the results are shown in Figure 4 and Figure 5. Figure 4a shows four low-light images; (b–j) are the results of bio-inspired multi-exposure fusion (BIMEF), low-light image enhancement (LIME), DCP, RetinexDIP, the bilateral gamma adjustment (BGA), hue saturation lightness (HSL), deep noise suppression (DNS), the generative adversarial network (GAN), and the method proposed in this study, respectively. Since the full images are too large to compare in detail, partially enlarged views are shown in Figure 4k,l; from top to bottom, the rows are the results of BIMEF, LIME, DCP, RetinexDIP, BGA, HSL, DNS, GAN, and the proposed method. The results of LIME, DCP, and RetinexDIP show obvious halo artifacts and significant noise, and RetinexDIP and BGA over-enhance the images. In terms of visual effect, DNS, GAN, and the proposed method perform better. Figure 4k is a partially enlarged view of the third image: the proposed method not only enhances the details of the dark regions but also preserves the texture of the roads, with significantly lower noise on the roads than the other methods. Figure 4l is a partially enlarged view of the fourth image: the proposed method enhances the texture of the details in the house without any halo artifacts.
Figure 5a shows four original low-light images, and Figure 5b–j show the results of BIMEF, LIME, DCP, RetinexDIP, BGA, HSL, DNS, GAN, and the proposed method, respectively. LIME and RetinexDIP produce obvious halo artifacts. The BGA algorithm results in both over- and under-enhancement, as shown in Figure 5f: the road in the second image is under-enhanced, and the screen in the fourth image is over-enhanced. In contrast, the DNS algorithm works better overall. The GAN method produces halo artifacts in the first and third images. The last image of Figure 5j shows the enhancement result of our method on an indoor scene: the proposed method restores the bright screen as well as the table and book in the dark area. The proposed method retains more detailed information, the road texture is clear, and there is no saturation caused by over-enhancement. Figure 5k shows partially enlarged views of the first image; from top to bottom are the results of the nine methods. As can be seen, the proposed algorithm does not produce halo artifacts, and the image noise is significantly reduced.

3.3. Objective Analysis

Three popular full-reference image quality assessment methods—peak-signal-to-noise ratio (PSNR) [33,34], structural similarity (SSIM) [35], and lightness order error (LOE) [36]—were used to evaluate the enhancement quality by comparing the enhanced image with the ground-truth version. One popular no-reference image quality assessment method—natural image quality evaluator (NIQE) [37]—was also employed to perform blind image quality evaluation. The larger the PSNR and SSIM and the smaller the LOE and NIQE were, the better the image was, that is, the enhanced image looked more natural.
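For the full-reference indices, PSNR and SSIM can be computed directly with scikit-image, as sketched below; LOE and NIQE are not included in scikit-image and require separate implementations (e.g., the authors' original MATLAB code for NIQE).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def full_reference_scores(enhanced: np.ndarray, reference: np.ndarray):
    """PSNR and SSIM of an enhanced image against its ground-truth version
    (both assumed to be uint8 grayscale arrays of the same shape)."""
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
    ssim = structural_similarity(reference, enhanced, data_range=255)
    return psnr, ssim
```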
The PSNR, SSIM, LOE, and NIQE values of the nine methods are shown in Table 2, Table 3, Table 4 and Table 5; the best result for each test image is written in bold in the published tables. As seen from the tables, the method proposed in this study did not perform best on every index, but overall it performed better than the other methods, with clear advantages in visual effect and denoising indices.

3.4. Time Precision Analysis

The images used in this study were mostly 1000 × 1000 pixels. The average processing time was calculated to compare the performance of the different methods. The number of LatLRR iterations in our method was set to 20. RetinexDIP, GAN, HSL, and DNS used a GPU for computation, with the same parameters as in the original papers. Based on the experimental results shown in Table 6, the BIMEF and LIME methods were relatively fast, whereas the deep learning-based methods were more time-consuming. The method proposed in this study was not the fastest, but in terms of overall performance it was superior to the other methods.

4. Conclusions

In this study, a single-grayscale-image-enhancement method based on the multi-exposure fusion framework was proposed. First, an intermediate image with the optimal exposure ratio was adaptively constructed based on the inverse tone-mapping operator. To achieve denoising, LatLRR was used to decompose the source image and the intermediate image into low-rank and saliency components. Then, different weight maps were constructed for the decomposed images. To retain as much detailed information as possible and avoid halo artifacts, a weight map optimization method was proposed. Last, the low-rank and saliency images were fused to yield the enhanced image. Experiments on real scenes validated the effectiveness of the proposed method.
Although the proposed method can produce high-quality enhanced images, it is not yet able to process an image in real time. Next, the algorithm will be optimized to meet the requirements of practical applications. At the same time, infrared images and medical images also have low contrast and high noise. We will improve the algorithm according to the imaging characteristics and mechanism of different images, so that it has a wider range of applications.

Author Contributions

T.N. provided the idea. T.N., X.W., M.L. and L.H. designed the experiments. H.L., S.N., H.Y. and Y.Z. analyzed the experiments. T.N. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62105328, awarded to T.N.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to thank the editors and the anonymous reviewers for their valuable suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Glossary

The letters that are involved in the equations of this paper are introduced briefly in this appendix.
X_d represents a low-dynamic-range image and is a matrix;
X_w represents a high-dynamic-range image and is a matrix;
X_white represents the minimum brightness value that is mapped to white light and is a number;
X_m represents an intermediate variable and is a matrix;
α represents a quantification parameter and is a number;
X_w,H represents the harmonic mean of the brightness of the HDR image and is a number;
β represents an intermediate variable and is a number;
X_d,max represents the maximum brightness of the low-dynamic-range image and is a number;
X_w,max represents the maximum brightness of the inverse tone-mapped image and is a number;
γ represents an intermediate variable and is a number;
f(X_d, γ) represents an image and is a matrix;
X represents an image and is a matrix;
H(X) represents the image information entropy of X and is a number;
p_i represents the probability of grayscale value i (0–255) and is a number;
f_i is the number of times that the grayscale value i appears in the image X and is a number;
M is the number of image rows;
N is the number of image columns;
X_src represents a source image and is a matrix;
X_vir represents a virtual image and is a matrix;
λ represents the equilibrium factor and is a number;
Z represents an intermediate variable and is a matrix;
L represents an intermediate variable and is a matrix;
E represents noise in the image and is a matrix;
X_L represents the low-rank component through LatLRR decomposition and is a matrix;
X_S represents the saliency component through LatLRR decomposition and is a matrix;
X_L^src represents the low-rank component of X_src through LatLRR decomposition and is a matrix;
X_S^src represents the saliency component of X_src through LatLRR decomposition and is a matrix;
X_L^vir represents the low-rank component of X_vir through LatLRR decomposition and is a matrix;
X_S^vir represents the saliency component of X_vir through LatLRR decomposition and is a matrix;
H_laplacian represents the Laplacian operator and is a matrix;
D_0 represents the initial contrast weight map and is a matrix;
ψ_2 represents the two-dimensional Gaussian filter function and is a formula symbol;
D_L represents the optimized contrast weight map and is a matrix;
λ_1 represents a weight factor and is a number;
λ_2 represents a weight factor and is a number;
G(D_L) is an intermediate variable and is a matrix;
D_v represents the first-order gradient in the vertical direction and is a matrix;
D_h represents the first-order gradient in the horizontal direction and is a matrix;
x represents a pixel in the image and is a scalar;
β_1 is a positive constant and is a number;
β_2 is a positive constant and is a number;
‖·‖_2^2 represents the squared ℓ2 norm and is a symbol;
‖·‖_1 represents the ℓ1 norm and is a symbol;
F(D) is an intermediate variable and is a matrix;
D̃ is an intermediate variable and is a matrix;
t is the iteration step coefficient and is a number;
∇F(D) is the derivative of F(D) and is a matrix;
μ_S is the average value of a saliency image and is a number;
D_S represents the saliency weight map and is a matrix;
D_L^src represents the low-rank weight map of X_L^src and is a matrix;
D_S^src represents the saliency weight map of X_S^src and is a matrix;
D_L^vir represents the low-rank weight map of X_L^vir and is a matrix;
D_S^vir represents the saliency weight map of X_S^vir and is a matrix;
D̃_L^src represents the normalized low-rank weight map of X_L^src and is a matrix;
D̃_S^src represents the normalized saliency weight map of X_S^src and is a matrix;
D̃_L^vir represents the normalized low-rank weight map of X_L^vir and is a matrix;
D̃_S^vir represents the normalized saliency weight map of X_S^vir and is a matrix;
X̃_L represents the fusion result of the two low-rank images and is a matrix;
X̃_S represents the fusion result of the two saliency images and is a matrix.

References

  1. Abdullah-Al-Wadud, M.; Kabir, M.; Dewan, M.; Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600.
  2. Kim, Y.-T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans. Consum. Electron. 1997, 43, 1–8.
  3. Chen, S.D.; Ramli, A.R. Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans. Consum. Electron. 2003, 49, 1310–1319.
  4. Tan, L.T.; Sim, K.S.; Tso, C.P. Image enhancement using background brightness preserving histogram equalization. Electron. Lett. 2012, 48, 155–157.
  5. Ahmed, S.; Ghosh, K.K.; Bera, S.K. Gray Level Image Contrast Enhancement Using Barnacles Mating Optimizer. IEEE Access 2020, 8, 169196–169214.
  6. Li, C.; Liu, J.; Liu, A.; Wu, Q.; Bi, L. Global and Adaptive Contrast Enhancement for Low Illumination Gray Images. IEEE Access 2019, 7, 163395–163411.
  7. Alavi, M.; Kargari, M. A novel method for contrast enhancement of gray scale image based on shadowed sets. In Proceedings of the 6th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS), Mashhad, Iran, 23–24 December 2020; pp. 1–7.
  8. Guo, X.; Li, Y.; Ling, H. LIME: Low-Light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2017, 26, 982–993.
  9. Guo, Y.; Lu, Y.; Liu, R.W.; Yang, M.; Chui, K.T. Low-Light Image Enhancement with Regularized Illumination Optimization and Deep Noise Suppression. IEEE Access 2020, 8, 145297–145315.
  10. Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.-P.; Ding, X. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790.
  11. Zhao, Z.; Xiong, B.; Wang, L.; Ou, Q.; Yu, L.; Kuang, F. RetinexDIP: A Unified Deep Framework for Low-Light Image Enhancement. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1076–1088.
  12. Dong, X.; Pang, Y.; Wen, J. Fast efficient algorithm for enhancement of low lighting video. In Proceedings of the IEEE International Conference on Multimedia and Expo, Barcelona, Spain, 11–15 July 2010; pp. 1–6.
  13. Li, L.; Wang, R.; Wang, W.; Gao, W. A low-light image enhancement method for both denoising and contrast enlarging. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 3730–3734.
  14. Thepade, S.D.; Shirbhate, A. Visibility Enhancement in Low Light Images with Weighted Fusion of Robust Retinex Model and Dark Channel Prior. In Proceedings of the 2020 IEEE Bombay Section Signature Conference (IBSSC), Mumbai, India, 4–6 December 2020; pp. 69–73.
  15. Zhu, H.; Zhao, Y.; Wang, R.; Wang, R.; Chen, W.; Gao, X. LLISP: Low-Light Image Signal Processing Net via Two-Stage Network. IEEE Access 2021, 9, 16736–16745.
  16. Garg, A.; Pan, X.-W.; Dung, L.-R. LiCENt: Low-Light Image Enhancement Using the Light Channel of HSL. IEEE Access 2022, 10, 33547–33560.
  17. Wang, L.-W.; Liu, Z.-S.; Siu, W.-C.; Lun, D.P. Lightening network for low-light image enhancement. IEEE Trans. Image Process. 2020, 29, 7984–7996.
  18. Xu, B.; Zhou, D.; Li, W. Image Enhancement Algorithm Based on GAN Neural Network. IEEE Access 2022, 10, 36766–36777.
  19. Chen, G.; Li, L.; Jin, W.; Li, S. High-Dynamic Range, Night Vision, Image-Fusion Algorithm Based on a Decomposition Convolution Neural Network. IEEE Access 2019, 7, 169762–169772.
  20. Wang, Y.; Dang, L. Adaptive low-gray image enhancement based on BP neural network and improved unsharp mask method. In Proceedings of the 5th International Conference on Systems and Informatics (ICSAI), Nanjing, China, 10–12 November 2018.
  21. Li, S.; Kang, X.; Fang, L. Pixel-Level image fusion: A survey of the state of the art. Inf. Fusion 2017, 33, 100–112.
  22. Wang, Q.; Fu, X.; Zhang, X.P.; Ding, X. A fusion-based method for single backlit image enhancement. In Proceedings of the IEEE International Conference on Image Processing, Phoenix, AZ, USA, 25–28 September 2016; pp. 4077–4081.
  23. Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A Fusion-based Enhancing Method for Weakly Illuminated Images. Signal Process. 2016, 129, 82–96.
  24. Ying, Z.; Li, G.; Gao, W. A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement. arXiv 2017, arXiv:1711.00591v1.
  25. Ying, Z.; Li, G.; Ren, Y.; Wang, R.; Wang, W. A new image contrast enhancement algorithm using exposure fusion framework. In Proceedings of the International Conference on Computer Analysis of Images and Patterns, Ystad, Sweden, 22–24 August 2017; pp. 36–46.
  26. Chen, G.; Li, L.; Jin, W.; Qiu, S.; Guo, H. Weighted Sparse Representation and Gradient Domain Guided Filter Pyramid Image Fusion Based on Low-Light-Level Dual-Channel Camera. IEEE Photonics J. 2019, 11, 7801415.
  27. Ashikhmin, M.; Goyal, J. A reality check for tone-mapping operators. ACM Trans. Appl. Percept. 2006, 3, 399–411.
  28. Nie, T.; Huang, L.; Liu, H.; Li, X.; Zhao, Y.; Yuan, H.; Song, X.; He, B. Multi-Exposure Fusion of Gray Images Under Low Illumination Based on Low-Rank Decomposition. Remote Sens. 2021, 13, 204.
  29. Han, X.; Lv, T.; Song, X.; Nie, T.; Liang, H.; He, B.; Kuijper, A. An Adaptive Two-Scale Image Fusion of Visible and Infrared Images. IEEE Access 2019, 7, 56341–56352.
  30. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
  31. Xu, L.; Yan, Q.; Xia, Y.; Jia, J. Structure extraction from texture via relative total variation. ACM Trans. Graph. 2012, 31, 1–10.
  32. Shen, J.; Zhao, Y.; Yan, S.; Li, X. Exposure fusion using boosting Laplacian pyramid. IEEE Trans. Cybern. 2014, 44, 1579–1590.
  33. Hu, L.; Qin, M.; Zhang, F.; Du, Z.; Liu, R. RSCNN: A CNN-Based Method to Enhance Low-Light Remote-Sensing Images. Remote Sens. 2021, 13, 62.
  34. Wang, Z.; Bovik, A.C. Mean squared error: Love it or leave it? IEEE Signal Process. Mag. 2009, 26, 98–117.
  35. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  36. Wang, S.; Zheng, J.; Hu, H.-M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
  37. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a "Completely Blind" Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
Figure 1. Flowchart of the proposed method.
Figure 2. An example of virtual image construction. (a) Original image. (b) Image with γ = 20; (c) γ = 30; (d) γ = 40; (e) γ = 50; and (f) γ = 60.
Figure 3. LatLRR decomposition. (a) Source image; (b) sparse noise; (c) the approximate (low-rank) image; and (d) the saliency image.
Figure 4. Comparison of different methods. (a) Original images. (b) Results of the BIMEF method. (c) Results of the LIME method. (d) Results of the DCP method. (e) Results of the RetinexDIP method. (f) Results of the BGA method. (g) Results of the HSL method. (h) Results of the DNS method. (i) Results of the GAN method. (j) Results of the proposed method. (k) Comparison results of the third image. (l) Comparison results of the fourth image.
Figure 5. Comparison of different methods. (a) Original images. (b) Results of the BIMEF method. (c) Results of the LIME method. (d) Results of the DCP method. (e) Results of the RetinexDIP method. (f) Results of the BGA method. (g) Results of the HSL method. (h) Results of the DNS method. (i) Results of the GAN method. (j) Results of the proposed method. (k) Comparison results of the first image.
Table 1. Principles of the models used for comparison.
Method | Principle
BIMEF [24] | Multi-exposure fusion framework
LIME [8] | Retinex
DCP [12] | DCP
RetinexDIP [11] | Retinex + deep learning
BGA [6] | Bilateral gamma adjustment function + PSO (one channel)
HSL [16] | Deep learning (one channel)
DNS [9] | Retinex + deep noise suppression
GAN [18] | Retinex + deep learning
Proposed method | Multi-exposure fusion framework
Table 2. Objective evaluation results of PSNR.
Method | Test 1 | Test 2 | Test 3 | Test 4 | Test 5 | Test 6 | Test 7 | Test 8
BIMEF | 20.249 | 19.329 | 24.408 | 22.163 | 21.039 | 20.376 | 26.374 | 22.277
LIME | 17.969 | 25.647 | 15.793 | 16.520 | 17.512 | 20.211 | 19.769 | 13.893
DCP | 21.040 | 18.421 | 24.573 | 14.991 | 21.255 | 21.905 | 22.653 | 21.386
RetinexDIP | 14.326 | 19.086 | 18.654 | 20.991 | 24.272 | 20.038 | 19.737 | 14.228
BGA | 15.885 | 18.041 | 15.839 | 19.463 | 18.595 | 18.615 | 16.313 | 15.948
HSL | 16.101 | 18.841 | 16.036 | 18.276 | 18.153 | 19.997 | 18.192 | 19.512
DNS | 20.055 | 19.700 | 25.777 | 18.222 | 30.554 | 20.507 | 26.805 | 18.597
GAN | 18.279 | 18.229 | 26.451 | 18.759 | 16.225 | 21.934 | 17.606 | 18.832
Proposed method | 21.128 | 24.158 | 27.120 | 23.787 | 32.498 | 20.315 | 26.534 | 23.228
Table 3. Objective evaluation results of SSIM.
Method | Test 1 | Test 2 | Test 3 | Test 4 | Test 5 | Test 6 | Test 7 | Test 8
BIMEF | 0.877 | 0.847 | 0.905 | 0.850 | 0.870 | 0.694 | 0.920 | 0.813
LIME | 0.804 | 0.810 | 0.757 | 0.677 | 0.774 | 0.630 | 0.776 | 0.614
DCP | 0.848 | 0.823 | 0.896 | 0.654 | 0.866 | 0.688 | 0.867 | 0.757
RetinexDIP | 0.690 | 0.798 | 0.826 | 0.869 | 0.871 | 0.644 | 0.802 | 0.607
BGA | 0.631 | 0.831 | 0.667 | 0.871 | 0.660 | 0.801 | 0.735 | 0.621
HSL | 0.832 | 0.815 | 0.723 | 0.598 | 0.615 | 0.754 | 0.718 | 0.662
DNS | 0.894 | 0.786 | 0.913 | 0.700 | 0.856 | 0.840 | 0.860 | 0.812
GAN | 0.887 | 0.766 | 0.821 | 0.895 | 0.840 | 0.623 | 0.783 | 0.708
Proposed method | 0.898 | 0.831 | 0.966 | 0.815 | 0.972 | 0.690 | 0.935 | 0.880
Table 4. Objective evaluation results of LOE.
Method | Test 1 | Test 2 | Test 3 | Test 4 | Test 5 | Test 6 | Test 7 | Test 8
BIMEF | 194.004 | 91.884 | 81.710 | 156.480 | 111.063 | 299.010 | 50.570 | 558.359
LIME | 268.717 | 260.524 | 237.032 | 291.592 | 277.186 | 326.098 | 230.581 | 705.394
DCP | 224.347 | 305.375 | 381.013 | 231.265 | 287.581 | 347.963 | 206.042 | 470.315
RetinexDIP | 81.891 | 91.018 | 51.043 | 133.550 | 69.296 | 332.596 | 76.450 | 515.581
BGA | 259.546 | 203.262 | 489.058 | 283.198 | 198.173 | 322.802 | 217.484 | 531.990
HSL | 180.070 | 180.712 | 387.269 | 134.966 | 90.135 | 302.916 | 107.346 | 463.086
DNS | 50.635 | 107.978 | 40.219 | 120.237 | 86.747 | 300.382 | 125.193 | 341.070
GAN | 116.773 | 103.422 | 24.744 | 137.459 | 84.577 | 309.898 | 217.087 | 310.357
Proposed method | 78.231 | 92.129 | 23.184 | 68.484 | 48.824 | 297.350 | 49.653 | 318.658
Table 5. Objective evaluation results of NIQE.
Method | Test 1 | Test 2 | Test 3 | Test 4 | Test 5 | Test 6 | Test 7 | Test 8
BIMEF | 5.362 | 8.310 | 7.365 | 7.805 | 6.650 | 2.920 | 5.688 | 6.659
LIME | 5.279 | 7.271 | 6.535 | 6.633 | 6.232 | 3.147 | 4.987 | 4.495
DCP | 5.588 | 8.054 | 6.998 | 5.258 | 6.423 | 3.203 | 5.608 | 5.129
RetinexDIP | 5.716 | 9.544 | 7.179 | 8.097 | 6.559 | 2.971 | 5.897 | 4.417
BGA | 6.472 | 8.251 | 7.275 | 8.279 | 7.457 | 5.879 | 6.950 | 6.068
HSL | 5.296 | 8.459 | 5.504 | 7.573 | 6.513 | 4.699 | 5.531 | 6.478
DNS | 4.442 | 8.063 | 5.764 | 8.207 | 6.650 | 4.882 | 4.971 | 7.641
GAN | 5.362 | 6.863 | 5.326 | 6.637 | 6.559 | 4.588 | 4.991 | 5.837
Proposed method | 4.385 | 6.833 | 5.636 | 5.679 | 4.611 | 2.561 | 4.818 | 5.397
Table 6. Comparison of computation time between different methods.
Method | Time (s)
BIMEF | 1.03
LIME | 2.44
DCP | 3.15
RetinexDIP | 20.12 (GPU)
BGA | 10.69
HSL | 24.02 (GPU)
DNS | 30.34 (GPU)
GAN | 28.19 (GPU)
Ours | 4.89
