Article

Joint Dedusting and Enhancement of Top-Coal Caving Face via Single-Channel Retinex-Based Method with Frequency Domain Prior Information

School of Mechanical Electronic and Information Engineering, China University of Mining and Technology, Beijing 100083, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(11), 2097; https://doi.org/10.3390/sym13112097
Submission received: 11 September 2021 / Revised: 19 October 2021 / Accepted: 20 October 2021 / Published: 5 November 2021

Abstract
Affected by the uneven concentration of coal dust and low illumination, most of the images captured in the top-coal caving face have low definition, heavy haze and serious noise. To improve the visual quality of underground images captured in the top-coal caving face, a novel single-channel Retinex dedusting algorithm with frequency domain prior information is proposed, addressing the problem that Retinex defogging algorithms cannot effectively defog and denoise simultaneously while preserving image details. Our work is inspired by the simple and intuitive observation that the low-frequency component of a dust-free image is amplified in the symmetrical spectrum after dust is added. A single-channel multiscale Retinex algorithm with color restoration (MSRCR) in YIQ space is proposed to restore the foggy approximate component in the wavelet domain. After that, multiscale convolution enhancement and a fast non-local means (FNLM) filter are used to minimize the noise of the detail components while retaining sufficient detail. Finally, a dust-free image is reconstructed in the spatial domain and its color is restored by white balance. Comparison with state-of-the-art image dedusting and defogging algorithms shows that the proposed algorithm achieves higher contrast and visibility in both subjective and objective analysis while retaining sufficient detail.

1. Introduction

Along with the rapid development of intelligent mining technology integrating the coal industry with big data, image and video technology have come to be widely used in all aspects of mining. However, the complex underground environment and insufficient lighting conditions seriously degrade the captured image and complicate its subsequent processing, especially at the top-coal caving face, where smog-like weather easily forms as coal falls under gravity over a large area [1]. For better visual effects, richer colors and more visible detail, it is necessary to apply image dedusting methods, which constitute one of the key technologies required for the construction of intelligent mines.
In the past decade, many defogging approaches have been proposed to improve the image quality of natural haze scenes. A visual comparison of foggy outdoor images and defogged images is shown in Figure 1. It can be clearly seen that the defogged images (Figure 1b) provide richer, more detailed information and have a better visual effect than the foggy images with a gray veil (Figure 1a). As part of image pre-processing, defogging algorithms based on physical restoration and on image enhancement have always formed a mainstream area of research in computer vision [2,3]. The physical-based restoration method solves the inverse process of image degradation by using an atmospheric scattering model, and thus obtains a clear image [4]. These methods can be divided into four categories: depth information-based [5,6], light polarization-based [7,8], prior information-based [4,9,10,11] and deep learning-based [12,13,14,15]. Although restoration methods based on the physical model have achieved remarkable effects in natural haze removal, they have limited application at the top-coal caving face due to the low light, high noise and the random, uneven distribution of dust concentration. In short, defogging methods based on the physical model easily fail in this environment, yielding low contrast or requiring physical equipment that is inconvenient in practical application [3].
The defogging method based on image enhancement can simply and effectively improve the contrast of an image, is suitable for various scenes and removes the dependence on physical equipment. These methods mainly include histogram equalization [16], smoothing filters [17] and Retinex enhancement [18,19,20,21]. Histogram equalization can only enhance the image contrast; it cannot enhance the details or effectively remove noise. A smoothing filter enhances the noise along with the image when the original image is complex and noisy. Retinex algorithms have the characteristic of improving image brightness, which makes them ideal for effective image enhancement and defogging [22], and a suitable choice for dust removal and enhancement in a low-light environment such as that of the top-coal caving face. However, they cannot effectively defog and denoise an image simultaneously while preserving the image's details.
The key to dust removal with images of the top-coal caving face is to solve the contradiction between haze removal, contrast enhancement, noise suppression and detail retention. Hence, we propose a joint dedusting and enhancement method via a single-channel multiscale Retinex algorithm with color restoration (SC_MSRCR) with frequency domain prior information, to simultaneously remove dust and enhance the details and contrast. Firstly, the original image is decomposed into a foggy approximate component and three noisy detail components by wavelet transform, and the SC_MSRCR algorithm is used on the foggy approximate component. For each noisy detail component, a multiscale convolution enhancement method is firstly processed to enhance the image detail, and then a parameter-free fast non-local means (FNLM) method is applied to eliminate the noise, to achieve the purpose of suppressing the fog component and noise in the dust image, while preserving the details and enhancing the contrast. After that, adaptive contrast enhancement and white balance are applied to the reconstructed image to improve the visual effect and restore the real color of the dust-free image. Finally, a structure extraction model based on the relative total variation is applied to more intuitively verify the effect of the image dedusting and detail preservation.
The main contributions of this paper can be summarized as follows.
We discovered that the low-frequency component of dust-free images will be amplified after adding dust.
We proposed a SC_MSRCR algorithm with frequency domain prior information, to simultaneously remove dust and enhance details in the complex environment of a top-coal caving face.
Our method performed well on real images of the top-coal caving face when compared to state-of-the-art algorithms.
This paper is organized as follows. In Section 2, we review state-of-the-art research on image defogging enhancement for underground coal mines and assess the advantages and limitations of Retinex-based defogging work for the top-coal caving face. The proposed method for joint dedusting and enhancement will be detailed in Section 3. The detailed experimental results and analysis are presented in Section 4. Finally, Section 5 provides conclusions and suggestions for future exploratory work.

2. Related Works

Image denoising, defogging, dedusting and enhancement technology make up an important part of complex image recognition and analysis, especially for images of underground coal mines. Hua and Jiang [23] proposed a denoising approach for underground coal mines based on the CIELab color space. However, this method cannot effectively solve the problem of smog-like weather at the top-coal caving face. Shang [24] utilized Gaussian white noise to blur the coal mine image and restored the image by a K-fold cross-validation BP neural network, but did not consider that the actual environment is much more complex than Gaussian white noise. Yu et al. [25] verified that histogram equalization enhances a low-light image of an underground coal mine better than direct contrast enhancement or the anti-sharpening mask method, but the result applied only to gray-image enhancement. Wu et al. [1] combined dark primary color prior information and the CLAHE algorithm to enhance a foggy roadway image in a coal mine. This algorithm improved the image contrast, but also increased the image noise. A coal mining face image enhancement method based on the combination of bilateral filtering and single-scale Retinex was proposed in 2017 [20]. This method could effectively enhance the image and reduce the noise, but it could not effectively remove the foggy components, resulting in serious loss of detail.
Liu et al. [26] found that haze is generally distributed in the low-frequency spectrum of multiscale wavelet decomposition, and solved the image dehazing and denoising problem by using the dark channel model. This method has a good effect on dehazing and denoising for images with natural brightness. Unfortunately, the enhancement of visibility and contrast is not obvious for low-illumination images. Retinex [27] is an enhancement method based on color constancy, among which MSRCR and related improved algorithms are the frontier research in the field of weak-light enhancement [18,28,29]. Galdran et al. [22] demonstrated the defogging characteristics of the Retinex enhancement method by means of a formula in 2018. Li et al. [30] fused dark channel prior information and the Retinex method to enhance the defogging of underwater sea cucumber images in the same year. After that, Zhou et al. [2] and Zhang et al. [3] proposed a Retinex-based Laplacian pyramid method and an MSRCR multi-channel convolution method for image defogging, respectively. Liu et al. [31] combined Retinex decomposition and dark channel prior information for defogging enhancement of aerial images. However, image dedusting is not a simple defogging process. It is especially relevant in complex environments with low illumination, high noise and uneven smog-like weather, such as the top-coal caving face. So far, most of the methods with prior information are based on physical models, while few are based on image enhancement.

3. Joint Dedusting and Enhancement Method

In this section, we propose a joint dedusting and enhancement method for the top-coal caving face in the wavelet domain. The overall framework and four core processes of the proposed method are shown in Figure 2.
This mainly includes haze removal in the YIQ space by SC_MSRCR, image denoising with detail preservation by combining multiscale convolution enhancement and FNLM filtering, visual effect enhancement and real color restoration by adaptive contrast enhancement and white balance, and the verification of the dedusting effect by the structure extraction model based on the relative total variation and other objective evaluations.

3.1. Image Analysis in Wavelet Domain

According to the characteristics of natural haze, most existing image defogging methods are implemented in the spatial domain. It has been verified that natural haze is usually distributed in the lower-frequency components of images [26]. We therefore analyzed the spectra of dust images and dust-free images by Fourier transform. Representative observations are shown in Figure 3. It is clear from the symmetrical spectrum that dust images consistently exhibit stronger low-frequency content than dust-free images. That is to say, the smog-like weather of the top-coal caving face also resides in the low-frequency component, which can be utilized as prior information for dust removal.
Two-dimensional wavelet transform is an effective tool with which to analyze the frequency domain information of an image [32]. The original image is decomposed into one foggy approximate component ( ϕ LL ) and three noisy detail components ( ψ HL , ψ LH and ψ HH ) by a Sym4 wavelet, which is defined as in Equation (1)
ϕ_LL(x, y) = ϕ(x)ϕ(y),  ψ_HL(x, y) = ψ(x)ϕ(y)
ψ_LH(x, y) = ϕ(x)ψ(y),  ψ_HH(x, y) = ψ(x)ψ(y)
where ϕ LL , ψ HL , ψ LH and ψ HH denote the approximate component in a low frequency, the horizontal detail component, the vertical detail component and the diagonal detail component in a high frequency, respectively, x and y are the coordinate indexes of the pixels in the image and ϕ and ψ denote the scaling function and wavelet function of one-dimensional wavelet decomposition, respectively.
At the output side, the enhanced images of the four subgraphs were reconstructed with wavelet inverse transform.
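The decomposition and reconstruction above can be sketched in NumPy. For brevity the sketch uses a one-level Haar wavelet instead of the Sym4 wavelet of Equation (1) (an illustrative simplification, not the paper's implementation), but it yields the same four subbands and reconstructs the image exactly:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar decomposition of an even-sized image into the
    approximate component (LL) and the horizontal (HL), vertical (LH)
    and diagonal (HH) detail components."""
    s = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    t = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair differences
    LL = (s[:, 0::2] + s[:, 1::2]) / 2.0
    LH = (s[:, 0::2] - s[:, 1::2]) / 2.0
    HL = (t[:, 0::2] + t[:, 1::2]) / 2.0
    HH = (t[:, 0::2] - t[:, 1::2]) / 2.0
    return LL, HL, LH, HH

def haar_idwt2(LL, HL, LH, HH):
    """Inverse of haar_dwt2; reconstructs the image exactly."""
    M2, N2 = LL.shape
    s = np.empty((M2, 2 * N2))
    t = np.empty((M2, 2 * N2))
    s[:, 0::2], s[:, 1::2] = LL + LH, LL - LH
    t[:, 0::2], t[:, 1::2] = HL + HH, HL - HH
    img = np.empty((2 * M2, 2 * N2))
    img[0::2, :], img[1::2, :] = s + t, s - t
    return img
```

A longer or smoother wavelet such as Sym4 follows the same subband layout, only with longer filter taps.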

3.2. Smog-Like Weather Removal in Approximate Component

As discussed in Section 3.1, since smog-like weather is generally distributed in a low frequency, the popular MSRCR algorithm based on Retinex theory [27] is applied to eliminate haze. Retinex based on the color constant theory represents the original image S ( x , y ) as the product of an illumination image L ( x , y ) and reflection image R ( x , y ) , as shown in Equation (2)
S(x, y) = L(x, y) · R(x, y)
where x and y are the coordinate indexes of the pixels in the image. The purpose of the single-scale Retinex (SSR) is to estimate the illumination by Gaussian convolution, and then remove the illumination to obtain the reflection component as follows (Equations (3) and (4)) [3]
log(R_n(x, y)) = log(S_n(x, y)) − log(S_n(x, y) ∗ G(x, y))

G(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))
where n is a channel of the image from the color space. From (3) and (4), we can derive the general Equation (5) of multiscale Retinex (MSR) [33]
R_MSR_n(x, y) = Σ_{m=1}^{M} w_m [log(S_n(x, y)) − log(S_n(x, y) ∗ G_m(x, y))]
where M is the number of scales (M = 3 here, σ = 125 , 250 , 500 ), w m is the weight corresponding to the m t h scale and G m ( x , y ) is the Gaussian kernel function corresponding to the m t h scale.
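Equations (3)–(5) translate directly into code. Below is a minimal sketch, assuming `scipy.ndimage.gaussian_filter` as a stand-in for the Gaussian surround convolution (the paper's implementation is in MATLAB), with the scales σ = 125, 250, 500 from the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma):
    """SSR, Eq. (3): log reflectance = log(S) minus log of the
    Gaussian-blurred illumination estimate, Eq. (4)."""
    eps = 1e-6                                   # guard against log(0)
    blurred = gaussian_filter(channel, sigma)    # S * G_sigma
    return np.log(channel + eps) - np.log(blurred + eps)

def multi_scale_retinex(channel, sigmas=(125, 250, 500), weights=None):
    """MSR, Eq. (5): weighted sum of SSR outputs over M scales."""
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)   # equal weights w_m
    return sum(w * single_scale_retinex(channel, s)
               for w, s in zip(weights, sigmas))
```

The output is a log-domain reflectance estimate; the MSRCR step below then rescales it back to displayable pixel values.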
Considering the color recovery advantage of the MSRCR algorithm, the mean value ( M e a n ), standard deviation ( S t d ) and weight parameters ( W ) are introduced to adjust the chromaticity based on the GIMP image enhancement process [34]. The calculation process is shown in Equations (6)–(8)
R_MSRCR_n(x, y) = C_n(x, y) · R_MSR_n(x, y)
MIN = Mean(R_MSR_n) − W · Std(R_MSR_n),  MAX = Mean(R_MSR_n) + W · Std(R_MSR_n)

R_MSRCR_n(x, y) = (R_MSR_n − MIN) / (MAX − MIN) × (255 − 0)
               = 255 (R_MSR_n − Mean(R_MSR_n) + W · Std(R_MSR_n)) / (2W · Std(R_MSR_n))
where C_n(x, y) is the color recovery factor and the value of W is set to three by experimentation. It is worth noting that a check on the valid range of pixel values, i.e., an overflow judgment, is applied to Equation (8), as given in Equation (9).
R_MSRCR_n(x, y) = { 0,                 R_MSRCR_n(x, y) < 0
                    R_MSRCR_n(x, y),   0 ≤ R_MSRCR_n(x, y) ≤ 255
                    255,               R_MSRCR_n(x, y) > 255
Though the MSRCR algorithm has better enhancement performance, it introduces too many adjustable parameters, which increases the complexity of the algorithm and is not conducive to industrial automation. Considering the lighting complexity and the enhancement effect, a single-channel MSRCR algorithm (SC_MSRCR) in the YIQ space is proposed in this paper (n = 1). Haze mainly affects the brightness information in the image, so we first convert the original color image from the RGB color space to the YIQ space and remove haze only in the Y channel, which represents brightness, to obtain the enhanced approximate component (ϕ_LL).
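The single-channel idea can be sketched as follows. Here `enhance_y` is a hypothetical placeholder for the MSRCR step applied to the Y channel, the NTSC conversion matrices are standard, and the final clamp plays the role of the overflow judgment in Equation (9):

```python
import numpy as np

# Standard NTSC RGB <-> YIQ conversion; Y carries the brightness
# information that haze mainly affects, I and Q carry chrominance.
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def sc_enhance(rgb, enhance_y):
    """Single-channel processing sketch: enhance only the Y channel
    (e.g. with MSRCR), leave I and Q untouched, then clamp to the
    valid pixel range per the overflow judgment of Equation (9)."""
    yiq = rgb @ RGB2YIQ.T
    yiq[..., 0] = enhance_y(yiq[..., 0])   # n = 1: brightness only
    return np.clip(yiq @ YIQ2RGB.T, 0.0, 255.0)
```

Processing one channel instead of three also cuts the Retinex cost by roughly a factor of three, which matters for the industrial setting described above.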

3.3. Noise Removal with Detail Preservation in Detail Components

The smog-like weather resides in the low frequencies, while noise and details both reside in the high frequencies. To remove the noise while preserving detail, multiscale Gaussian convolution detail enhancement and parameter-free FNLM filtering [35] are applied in succession. Representative observations are shown in Figure 4. It can clearly be seen that detail information is preserved better when multiscale Gaussian convolution detail enhancement is applied before denoising.
First, three different Gaussian kernels are convoluted with the input image ( ψ HL , ψ LH and ψ HH , respectively) to obtain three different scale fuzzy images using Equation (10)
ψ_HL1 = g_1 ∗ ψ_HL,  ψ_HL2 = g_2 ∗ ψ_HL,  ψ_HL3 = g_3 ∗ ψ_HL
ψ_LH1 = g_1 ∗ ψ_LH,  ψ_LH2 = g_2 ∗ ψ_LH,  ψ_LH3 = g_3 ∗ ψ_LH
ψ_HH1 = g_1 ∗ ψ_HH,  ψ_HH2 = g_2 ∗ ψ_HH,  ψ_HH3 = g_3 ∗ ψ_HH
where g_1, g_2 and g_3 are the Gaussian kernels with standard deviations σ_1 = 1.0, σ_2 = 2.0 and σ_3 = 3.0, respectively. Then, fine details (ψ′_HL1, ψ′_LH1 and ψ′_HH1), medium details (ψ′_HL2, ψ′_LH2 and ψ′_HH2) and coarse details (ψ′_HL3, ψ′_LH3 and ψ′_HH3) are extracted by subtraction, as depicted in Equation (11).
ψ′_HL1 = ψ_HL − ψ_HL1,  ψ′_HL2 = ψ_HL1 − ψ_HL2,  ψ′_HL3 = ψ_HL2 − ψ_HL3
ψ′_LH1 = ψ_LH − ψ_LH1,  ψ′_LH2 = ψ_LH1 − ψ_LH2,  ψ′_LH3 = ψ_LH2 − ψ_LH3
ψ′_HH1 = ψ_HH − ψ_HH1,  ψ′_HH2 = ψ_HH1 − ψ_HH2,  ψ′_HH3 = ψ_HH2 − ψ_HH3
After that, combined with Equation (11), the overall detail enhancement is expressed as Equation (12).
ψ*_HL = (1 − W_1 · sign(ψ′_HL1)) · ψ′_HL1 + W_2 · ψ′_HL2 + W_3 · ψ′_HL3
ψ*_LH = (1 − W_1 · sign(ψ′_LH1)) · ψ′_LH1 + W_2 · ψ′_LH2 + W_3 · ψ′_LH3
ψ*_HH = (1 − W_1 · sign(ψ′_HH1)) · ψ′_HH1 + W_2 · ψ′_HH2 + W_3 · ψ′_HH3
where W 1 , W 2 , and W 3 are fixed to 0.5, 0.25, and 0.25, respectively. Finally, improved parameter-free FNLM filtering with superior denoising performance, as developed by Froment [35], is applied to remove the noise. The method can be described as follows:
ψ″_HL = FastNLmeans(ψ*_HL, ds, Ds, h)
ψ″_LH = FastNLmeans(ψ*_LH, ds, Ds, h)
ψ″_HH = FastNLmeans(ψ*_HH, ds, Ds, h)
where d s , D s , and h denote the neighborhood window radius, search window radius and Gaussian function smoothing parameter, respectively. To balance the computation and performance, they are set as 2, 5, and 10 in our experiment, respectively.
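The two steps above can be sketched as follows, assuming `scipy.ndimage.gaussian_filter` for the Gaussian convolutions. Note that the NL-means shown is the direct, slow form, not Froment's fast integral-image algorithm; it only illustrates the weighting principle with the same parameter roles:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_details(band, w=(0.5, 0.25, 0.25), sigmas=(1.0, 2.0, 3.0)):
    """Multiscale Gaussian detail enhancement (Eqs. (10)-(12)) applied
    to one wavelet detail component."""
    b1, b2, b3 = (gaussian_filter(band, s) for s in sigmas)   # Eq. (10)
    d1, d2, d3 = band - b1, b1 - b2, b2 - b3                  # Eq. (11)
    # Eq. (12): the sign term damps strong fine details so noise is not
    # over-amplified; medium and coarse details are added with fixed weights.
    return (1 - w[0] * np.sign(d1)) * d1 + w[1] * d2 + w[2] * d3

def nl_means(img, ds=2, Ds=5, h=10.0):
    """Direct non-local means: ds = patch radius, Ds = search-window
    radius, h = smoothing parameter, as in the text."""
    pad = ds + Ds
    p = np.pad(np.asarray(img, dtype=float), pad, mode='reflect')
    M, N = img.shape
    out = np.empty((M, N))
    for i in range(M):
        for j in range(N):
            ci, cj = i + pad, j + pad
            ref = p[ci - ds:ci + ds + 1, cj - ds:cj + ds + 1]
            wsum = acc = 0.0
            for di in range(-Ds, Ds + 1):
                for dj in range(-Ds, Ds + 1):
                    patch = p[ci + di - ds:ci + di + ds + 1,
                              cj + dj - ds:cj + dj + ds + 1]
                    d2 = np.mean((ref - patch) ** 2)   # patch distance
                    wgt = np.exp(-d2 / (h * h))        # similarity weight
                    wsum += wgt
                    acc += wgt * p[ci + di, cj + dj]
            out[i, j] = acc / wsum
    return out
```

Froment's FNLM computes the same weighted averages in time independent of the patch size by accumulating integral images over search-window offsets, which is what makes it practical at full resolution.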

3.4. Visual Effect Enhancement and Color Restoration

The components obtained ( ϕ LL , ψ HL , ψ LH and ψ HH ) in Section 3.2 and Section 3.3 are reconstructed by inverse wavelet transform. To improve the visual effect and eliminate the influence of illumination on the color of the dedusted image, adaptive contrast enhancement and white balance are introduced to correct the color casts caused by illumination. This is defined as follows:
Graȳ = (R̄ + Ḡ + B̄) / 3
k_r = Graȳ / R̄,  k_g = Graȳ / Ḡ,  k_b = Graȳ / B̄
C′(R) = C(R) · k_r,  C′(G) = C(G) · k_g,  C′(B) = C(B) · k_b
where R ¯ , G ¯ and B ¯ are the gray means of the R, G and B channel in the image, respectively, while C denotes the pixels in the image.
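The gray-world white balance above reduces to a few lines; this is a minimal NumPy sketch assuming 8-bit RGB input (the adaptive contrast enhancement step is omitted here):

```python
import numpy as np

def gray_world_balance(rgb):
    """Gray-world white balance per the equations above: scale each
    channel so its mean matches the overall mean gray level."""
    means = rgb.reshape(-1, 3).mean(axis=0)    # channel means R̄, Ḡ, B̄
    gray = means.mean()                        # Graȳ = (R̄ + Ḡ + B̄) / 3
    gains = gray / means                       # k_r, k_g, k_b
    return np.clip(rgb * gains, 0.0, 255.0)    # apply gains, keep 8-bit range
```

After correction all three channel means coincide, which removes the global color cast introduced by the mine's artificial lighting.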

3.5. Image Acquisition and Evaluation

In our experiments, we collected the video datasets from the 8105 working face of the Zhuxianzhuang Coal Mine, Huaibei Mining Group. To test the proposed method, 43 dust images were randomly captured at a resolution of 1280 × 720 pixels from five coal mine videos taken by five explosion-proof cameras (IPG-85HE20PY-S) and saved in JPG format. These comprised five example images and 38 verification images captured from different scenes or at different times for the same scene. These images were disturbed by different degrees of smog weather, low illumination and serious noise. Representative images are shown in Figure 5. Considering the operation speed, the image size was reduced to 540 × 320 pixels in the experiment. The proposed method was implemented on a Windows 10 PC with an Intel(R) Core(TM) i7-6700HQ CPU @ 2.60 GHz and 16 GB RAM, and the coding language was MATLAB.
From the different perspectives of the spatial domain and frequency domain, we analyzed subjective results such as the color contrast in the enhanced color images, spectrum images and structural information in gray images, to determine whether the improved image was more suitable for human vision analysis and processing. For gray images with high levels of haze and complex textures, the overall structural features are the primary data of human perception. So, the extracted structural images, based on relative total variation [36], were used as example images to evaluate the quality of the dedusted gray images. In addition, since there was no perfect dedusted image as a reference, objective quality assessment was also necessary, though it was not always consistent with human vision. Three well-known quantitative metrics—image information entropy (IE) [3], standard deviation (STD) [18] and average gradient (AG) [2]—were selected to evaluate the dedusting performance, which was combined with subjective analysis. The calculations are shown below.
IE reflects the average amount of information, which is described as:
IE = − Σ_{i=0}^{255} p(i) log₂ p(i)
where i is the pixel value and p ( i ) represents the probability that the pixel value i appears in the image. Theoretically, the larger the IE value, the richer the color information contained in the image.
For an input image f ( x , y ) , STD represents the overall contrast of the image, which is described as:
μ = (1 / MN) Σ_{x=1}^{M} Σ_{y=1}^{N} f(x, y)
STD = sqrt( (1 / MN) Σ_{x=1}^{M} Σ_{y=1}^{N} (f(x, y) − μ)² )
where M , N and μ represent the width, height and average intensity value of the input image, respectively. Theoretically, the larger the STD value, the better the overall contrast of the image.
AG reflects small changes in the details of the image, which are described as:
AG = (1 / ((M − 1)(N − 1))) Σ_{x=1}^{M−1} Σ_{y=1}^{N−1} sqrt( ((f(x+1, y) − f(x, y))² + (f(x, y+1) − f(x, y))²) / 2 )
where f(x+1, y) − f(x, y) and f(x, y+1) − f(x, y) are the differences of f(x, y) along the x and y directions, respectively. Theoretically, the larger the AG, the more detail information is retained in the dedusted image.
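Assuming 8-bit gray inputs, the three metrics can be computed directly. This is a NumPy sketch of the definitions above; the function names `ie`, `std_metric` and `ag` are ours, not the paper's:

```python
import numpy as np

def ie(img):
    """Image information entropy over 8-bit gray levels."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def std_metric(img):
    """Standard deviation of intensities: overall contrast."""
    return float(np.asarray(img, dtype=float).std())

def ag(img):
    """Average gradient: mean strength of local detail, using forward
    differences along x and y as in the definition above."""
    f = np.asarray(img, dtype=float)
    dx = f[1:, :-1] - f[:-1, :-1]     # f(x+1, y) - f(x, y)
    dy = f[:-1, 1:] - f[:-1, :-1]     # f(x, y+1) - f(x, y)
    return float(np.sqrt((dx ** 2 + dy ** 2) / 2.0).mean())
```

A perfectly flat image scores zero on all three metrics, which is why larger values are read as richer information, higher contrast and more detail, respectively.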

4. Experimental Results

We compared the proposed algorithm with ten state-of-the-art algorithms, including the SSR, MSR, MSRCR and SC_MSRCR methods related to this paper, the dedusting enhancement methods for underground coal mines in [1,20,25] and the defogging methods based on prior information in [4,9,11]. We used the parameters suggested in the source code available on the respective authors' websites to test the color and gray images.

4.1. Color Images

Typical dedusting performances are shown in Figure 6 and Figure 7. As can be seen in every green rectangle in Figure 6, our method makes the objects hidden in the dark clearly visible without overexposure, which confirms the advantages of our method for dust removal and color recovery. Owing to picture-size compression, the detail difference is best compared after enlargement, such as when identifying an object like a hoe. Although the SSR and MSR algorithms were able to augment the image contrast and improve the scene's visibility, the color of the recovered scenes was often undersaturated (e.g., the framed object in Figure 6b,c), and the scene information recovery was not significant in the frequency domain (Figure 7b,c). Due to the influence of artificial illumination, the MSRCR algorithm caused excessive global color recovery in the RGB channels (Figure 6d). However, as Figure 6e shows, the SC_MSRCR algorithm can effectively overcome this shortcoming. The above methods achieved a good dust removal effect in terms of vision, but did not consider noise suppression, so the overall image enhancement effect in the frequency domain was not ideal.
By combining dark channel prior information and the CLAHE algorithm, Wu and Zhang [1] enhanced the simulated environment of a foggy roadway in a coal mine. However, this method tended to underestimate the light and create halos (e.g., the framed hoe in Figure 6f). Si et al. [20] combined SSR with bilateral filtering to eliminate the noise and improve the edge information of an image of a coal mining face. Unfortunately, this method could not remove the smog-like weather in the top-coal caving face (Figure 6g). Yu et al. [25] proved that the CLAHE algorithm is an effective method for mine image enhancement. Nevertheless, the dust removal results obtained by this method were not visually convincing (e.g., the framed object in Figure 6h). Due to the lack of statistical prior information pertaining to images of relevant scenes, these methods are difficult to apply to image enhancement in a special environment, such as a top-coal caving face.
Recent image dehazing works based on prior information have achieved satisfactory results, especially those with dark channel prior information [4]. This prior information works very well, except in dark scenes where there is a clear lack of contrast, as seen in the area framed in green in Figure 6i. The method of [11] assumes that each color cluster in the clear outdoor natural images becomes a line in the RGB space, which produces redundant noise in the mine image with unbalanced color (e.g., the area framed in Figure 6j). The color attenuation prior information assumes that the depth of the scene is positively correlated with the concentration of the haze [9]. However, the scene in image restoration is often overexposed (e.g., the area framed in Figure 6k). The main reason for this is that this assumption is not physically valid in the application for mine dust images.
Compared with the results of the ten algorithms, our proposed approach has a better visual effect than these competing techniques. As displayed in Figure 6m, the workers, hoe and background are clear, and the overall contrast of the image is moderately enhanced while maintaining the appearance of the original color. Referring to image processing in the frequency domain, the dark channel prior information algorithm yielded highly comparative results with others (see Figure 7i). Nevertheless, high-frequency details were lost by this method for the top-coal caving face. It is to be noted that the dust-free image obtained by the proposed approach exhibited a more high-frequency spectrum and less noise while retaining details (see Figure 7l).
According to the data analysis in Table 1, the objective evaluation indexes of the eleven algorithms compared in this paper are all greater than those of the dust image. However, an overexposed image does not accord with reality or human vision. On the whole, judging by objective metrics combined with subjective analysis, the proposed method is superior to the other image dedusting algorithms. This indicates that the algorithm has obvious advantages in terms of the details, color and contrast of the images.

4.2. Gray Images

The dust removal effect of a color image can easily be identified by means of vision, while most gray images of a top-coal caving face are difficult to compare through human vision, such as in Figure 8d,l.
To evaluate the algorithm more intuitively through subjective analysis, extracted structural images are shown in Figure 9. As we can see, the obtained structural images have better visual features when the algorithm has better dedusting properties. The characteristics of dust images mostly include bad contrast, a blurry structure and high level of noise in Figure 8a and Figure 9a. The structure of dedusting images obtained by different dedusting algorithms will be improved to different degrees. The experiments showed that compared with the other ten algorithms, our method possessed a better dedusting ability and kept more details. As well as this, it had a greater ability to restrain noise and light interference, as seen in the areas framed in green in Figure 9l.
To ensure the uniform and fair evaluation of images, we also objectively analyzed the experimental results of the methods related to this paper: dedusting enhancement methods of underground coal mines, state-of-the-art defogging methods based on prior information and our method. The three quantitative metrics of the dedusted images are given in Table 2.
As shown in Table 2, the three quantitative metrics were higher for all of the methods than for the original images. Although the maximum IE of an image was obtained by Si et al. [20], this method had a poor dedusting effect (see Figure 9g). The direct reason for this problem is that IE is determined by the richness of the colors, which is beneficial to the description of the gray values of a gray image with haze residue. The image STD represents the dispersion of the pixel gray value relative to the mean value. The method of Yu et al. [25] had higher standard deviation, but was poor in terms of the defogging effect, because the local exposure of the image enlarged the pixel mean and standard deviation (see Figure 9h), which did not accord with the reality and human vision. In addition, the standard deviation of our method was better than that of other state-of-the-art methods. AG reflects the change of detail contrast and texture in an image; Table 2 and Figure 9l show that the proposed method was superior when compared to the other methods in terms of the aspects of dust removal and detail preservation.

4.3. Quantitative Comparison

As shown in Figure 10, images of different scenes were tested to further verify the robustness of the proposed algorithm; they were processed by the five methods of MSRCR, Berman et al. [11], Zhu et al. [9], Wu et al. [1] and ours. The IE, STD and AG of the images processed by the different methods are plotted in Figure 11.
In Figure 10, the brightness and sharpness of the images processed by MSRCR and Wu et al. [1] can be seen to have improved to some extent, but the colors of the recovered scenes were often undersaturated (e.g., the framed object in Figure 10). In addition, the defogging effect of the method proposed by Wu et al. [1] was not significant because of the equalization effect of the CLAHE algorithm in light-gathering scenes (e.g., Figure 10A). Note that the non-local prior algorithm proposed by Berman et al. [11] achieved convincing results, but easily led to serious overexposure in some local regions (e.g., the top of the coal pile).
The algorithm proposed by Zhu et al. [9] did not completely remove dust (e.g., the framed object in Figure 10d), mainly due to the difficulty of training samples in parameter learning. Combined with Figure 11, it can be seen that the proposed method produced quite visually compelling results. The objective evaluation index proved that the proposed algorithm had obvious advantages in terms of the image detail, color and contrast of the images.
To clearly show the outperformance and stability of the proposed method, and to obtain parameter statistics, 38 verification images were captured from surveillance videos of the Zhuxianzhuang Coal Mine, Huaibei Mining Group. As shown in Figure 5, these images suffered from interference of smog-like weather, low illumination and serious noise, to varying extents, and they were also processed by the five methods. The IE, STD and AG values of the images processed by the different methods are plotted in Figure 12, Figure 13 and Figure 14, respectively.
As can be seen in Figure 12, Figure 13 and Figure 14, the mean values of IE (11.09), STD (68.73) and AG (7.39), based on the proposed method, were larger than those of MSRCR (10.18, 53.31, 6.22), Berman et al. (10.12, 62.29, 5.92), Zhu et al. (9.04, 59.02, 5.24) and Wu et al. (10.36, 63.9, 5.93). Therefore, as shown in Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14, Table 1 and Table 2, the proposed approach can not only significantly improve the visibility of dust scenes at the top-coal caving face, but also suppress the noise while retaining greater detail. Experimental results and comparisons with other relative work show the superiority of the proposed algorithm.
Since the proposed approach introduces wavelet decomposition and non-local means filtering to reduce the impacts of dust and noise, its computational load is higher than that of simpler methods. Fortunately, the average processing time of the proposed approach (1.94 s) for 540 × 320 images was less than those of Berman et al. (5.09 s) and Wu et al. (3.55 s). These two approaches estimate the transmission maps over the whole image region, which leads to long running times. Although the running time of the method proposed by Zhu et al. (1.73 s) is relatively short, its dust removal effect is the worst. On the whole, our method outperforms other classical algorithms in both subjective and objective analysis, and its processing time is acceptable. The experimental results further demonstrate the effectiveness of our method when applied to image dedusting for the top-coal caving face.

5. Conclusions

In this paper, we proposed a novel single-channel Retinex algorithm with frequency domain prior information for image dedusting at the top-coal caving face. The method is based on the assumption that smog-like weather is typically distributed in the low-frequency symmetric spectrum, while noise and image details are distributed in the high-frequency symmetric spectrum. Accordingly, a single-channel MSRCR algorithm in the YIQ space is presented to restore the foggy approximate component in the wavelet domain, and multiscale convolution enhancement and the FNLM filter are used to minimize the noise of detail components while retaining sufficient detail. Experiments on the proposed algorithm verified that the visibility of the degraded scenes could be significantly increased, with satisfactory detail information and a realistic color appearance. As an effective image preprocessing method, the proposed algorithm is suitable for application to complex coal mining face scenes, as we determined by comparing it with state-of-the-art image dedusting and defogging algorithms in both subjective and objective analysis. However, our method has two shortcomings. Firstly, most of the parameters are set experimentally, meaning they work for specific scenes but cannot adapt automatically to obtain a satisfactory dedusting effect in other scenes. Secondly, the introduction of wavelet decomposition and non-local means filtering increases the computational complexity of the algorithm, which is time-consuming in practical application. The main aim of future research should thus be to improve the operating efficiency of the algorithm, to maximize its success in industrial applications.
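The frequency-domain split that the method relies on can be made concrete with the sketch below: a single-level 2-D Haar decomposition separates a luminance channel into a low-frequency approximation subband (where haze and dust concentrate) and three high-frequency detail subbands (where edges and noise concentrate). This is an illustrative skeleton only; the function names are ours, the luminance weights are the standard NTSC/YIQ coefficients, and the actual method's MSRCR restoration of the approximation and FNLM filtering of the details are omitted.

```python
import numpy as np

def yiq_luminance(rgb):
    """Y channel of the YIQ space from an RGB array in [0, 1]
    (standard NTSC weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def haar_decompose(y):
    """Single-level 2-D Haar decomposition of a luminance channel with
    even dimensions into approximation (LL) and detail (LH, HL, HH)."""
    a, b = y[0::2, 0::2], y[0::2, 1::2]
    c, d = y[1::2, 0::2], y[1::2, 1::2]
    ll = (a + b + c + d) / 4.0        # low frequency: haze/dust lives here
    lh = (a - b + c - d) / 4.0        # detail subbands: edges and noise
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Exact inverse of haar_decompose."""
    h, w = ll.shape
    y = np.empty((2 * h, 2 * w))
    y[0::2, 0::2] = ll + lh + hl + hh
    y[0::2, 1::2] = ll - lh + hl - hh
    y[1::2, 0::2] = ll + lh - hl - hh
    y[1::2, 1::2] = ll - lh - hl + hh
    return y
```

In the full pipeline, the LL subband would be restored by the single-channel MSRCR step and the LH/HL/HH subbands enhanced and denoised before `haar_reconstruct` maps the result back to the spatial domain.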

Author Contributions

Conceptualization, C.F. and F.L.; methodology, C.F.; software, X.Z. and G.Z.; validation, C.F., F.L. and X.Z.; formal analysis, C.F.; investigation, X.Z.; resources, G.Z.; data curation, C.F. and F.L.; writing—original draft preparation, C.F.; writing—review and editing, G.Z.; visualization, F.L.; supervision, G.Z.; project administration, C.F.; funding acquisition, G.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number U1704242.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Restrictions apply to the availability of these data. Data was obtained from the Zhuxianzhuang Coal Mine, Huaibei Mining Group and are available from the authors with the permission of the Zhuxianzhuang Coal Mine, Huaibei Mining Group.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (grant no. U1704242). The authors thank the Engineering Research Center of the Caving and Coal Mining Industry, of China’s University of Mining and Technology (Beijing), for its help.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wu, D.; Zhang, S. Research on image enhancement algorithm of coal mine dust. In Proceedings of the 2018 International Conference on Sensor Networks and Signal Processing (SNSP), Xi'an, China, 28–31 October 2018; pp. 261–265.
2. Zhou, J.; Zhang, D.; Zou, P.; Zhang, W.; Zhang, W. Retinex-based Laplacian pyramid method for image defogging. IEEE Access 2019, 7, 122459–122472.
3. Zhang, W.; Dong, L.; Pan, X.; Zhou, J.; Qin, L.; Xu, W. Single image defogging based on multi-channel convolutional MSRCR. IEEE Access 2019, 7, 72492–72504.
4. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353.
5. Wang, W.; Chang, F.; Ji, T.; Wu, X. A fast single-image dehazing method based on a physical model and gray projection. IEEE Access 2018, 6, 5641–5653.
6. Jiang, B.; Zhang, W.; Zhao, J.; Ru, Y.; Liu, M.; Ma, X.; Chen, X.; Meng, H. Gray-scale image dehazing guided by scene depth information. Math. Probl. Eng. 2016, 2016, 1–10.
7. Shen, L.; Zhao, Y.; Peng, Q.; Chan, J.C.-W.; Kong, S.G. An iterative image dehazing method with polarization. IEEE Trans. Multimed. 2019, 21, 1093–1107.
8. Schechner, Y.; Narasimhan, S.; Nayar, S. Instant dehazing of images using polarization. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; Volume 1, pp. 325–332.
9. Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 2015, 24, 3522–3533.
10. Ju, M.; Ding, C.; Guo, Y.J.; Zhang, D. IDGCP: Image dehazing based on gamma correction prior. IEEE Trans. Image Process. 2020, 29, 3104–3118.
11. Berman, D.; Treibitz, T.; Avidan, S. Non-local image dehazing. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1674–1682.
12. Zhang, H.; Patel, V.M. Densely connected pyramid dehazing network. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3194–3203.
13. Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M.-H. Single image dehazing via multi-scale convolutional neural networks with holistic edges. Int. J. Comput. Vis. 2020, 128, 240–259.
14. Ren, W.; Ma, L.; Zhang, J.; Pan, J.; Cao, X.; Liu, W.; Yang, M.-H. Gated fusion network for single image dehazing. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3253–3261.
15. Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. AOD-Net: All-in-one dehazing network. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4780–4788.
16. Shakeri, M.; Dezfoulian, M.; Khotanlou, H.; Barati, A.; Masoumi, Y. Image contrast enhancement using fuzzy clustering with adaptive cluster parameter and sub-histogram equalization. Digit. Signal Process. 2017, 62, 224–237.
17. Xiao, L.; Li, C.; Wu, Z.; Wang, T. An enhancement method for X-ray image via fuzzy noise removal and homomorphic filtering. Neurocomputing 2016, 195, 56–64.
18. Ye, X.; Wu, G.; Huang, L.; Fan, F.; Zhang, Y. Image enhancement for inspection of cable images based on retinex theory and fuzzy enhancement method in wavelet domain. Symmetry 2018, 10, 570.
19. Xu, J.; Hou, Y.; Ren, D.; Liu, L.; Zhu, F.; Yu, M.; Wang, H.; Shao, L. STAR: A structure and texture aware retinex model. IEEE Trans. Image Process. 2020, 29, 5022–5037.
20. Si, L.; Wang, Z.; Xu, R.; Tan, C.; Liu, X.; Xu, J. Image enhancement for surveillance video of coal mining face based on single-scale retinex algorithm combined with bilateral filtering. Symmetry 2017, 9, 93.
21. Sadia, H.; Azeem, F.; Ullah, H.; Mahmood, Z.; Khattak, S.; Khan, G.Z. Color image enhancement using multiscale retinex with guided filter. In Proceedings of the 2018 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 17–19 December 2018; pp. 82–87.
22. Galdran, A.; Bria, A.; Alvarez-Gila, A.; Vazquez-Corral, J.; Bertalmio, M. On the duality between retinex and image dehazing. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8212–8221.
23. Hua, G.; Jiang, D. A new method of image denoising for underground coal mine based on the visual characteristics. J. Appl. Math. 2014, 2014, 1–7.
24. Shang, C. A novel analyzing method to coal mine image restoration. In Proceedings of the 2015 Asia-Pacific Energy Equipment Engineering Research Conference, Zhuhai, China, 13–14 June 2015; pp. 285–288.
25. Yu, C.; Rui, G.; Li-Jie, D. Study of image enhancement algorithms in coal mine. In Proceedings of the 8th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 27–28 August 2016; pp. 383–386.
26. Liu, X.; Zhang, H.; Cheung, Y.-M.; You, X.; Tang, Y.Y. Efficient single image dehazing and denoising: An efficient multi-scale correlated wavelet approach. Comput. Vis. Image Underst. 2017, 162, 23–33.
27. Jobson, D.; Rahman, Z.; Woodell, G. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
28. Li, M.; Liu, J.; Yang, W.; Guo, Z. Joint denoising and enhancement for low-light images via retinex model. In Digital TV and Wireless Multimedia Communication; Springer: Singapore, 2018; pp. 91–99.
29. Ren, X.; Li, M.; Cheng, W.-H.; Liu, J. Joint enhancement and denoising method via sequential decomposition. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018.
30. Li, Z.; Li, G.; Niu, B.; Peng, F. Sea cucumber image dehazing method by fusion of retinex and dark channel. IFAC-PapersOnLine 2018, 51, 796–801.
31. Liu, X.; Liu, C.; Lan, H.; Xie, L. Dehaze enhancement algorithm based on retinex theory for aerial images combined with dark channel. OALib 2020, 7, 1–12.
32. Fu, F.; Liu, F. Wavelet-based retinex algorithm for unmanned aerial vehicle image defogging. In Proceedings of the 8th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 12–13 December 2015; pp. 426–430.
33. Qin, Y.; Luo, F.; Li, M. A medical image enhancement method based on improved multi-scale retinex algorithm. J. Med. Imaging Health Inform. 2020, 10, 152–157.
34. Fu, Q.; Jung, C.; Xu, K. Retinex-based perceptual contrast enhancement in images using luminance adaptation. IEEE Access 2018, 6, 61277–61286.
35. Froment, J. Parameter-free fast pixelwise non-local means denoising. Image Process. Line 2014, 4, 300–326.
36. Xu, L.; Yan, Q.; Xia, Y.; Jia, J. Structure extraction from texture via relative total variation. ACM Trans. Graph. 2012, 31, 1–10.
Figure 1. Instances of foggy and defogged images of natural haze scenes. (a) Two foggy images. (b) Defogged images by means of our method.
Figure 2. Framework of joint dedusting and enhancement algorithm.
Figure 3. Spectrum comparison of dust image (below) and dust-free image (above).
Figure 4. Combined effect of multiscale convolution enhancement and fast FNLM denoising. (a) Original image with noise and texture. (b) Only denoising. (c) Denoising before detail enhancement. (d) Detail enhancement before denoising (ours).
Figure 5. Representative images in dataset. (a) Representative images of five different scenes. (b) Representative images captured at different times of same scene.
Figure 6. Comparison of representative color images dedusting results. (a) Dust image. (b) Result image by SSR. (c) Result image by MSR. (d) Result image by MSRCR. (e) Result image by SC_MSRCR (proposed). (f) Wu et al.’s results [1]. (g) Si et al.’s results [20]. (h) Yu et al.’s results [25]. (i) He et al.’s results [4]. (j) Berman et al.’s results [11]. (k) Zhu et al.’s results [9]. (l) Our results.
Figure 7. Spectrum images corresponding to Figure 6. (a) Dust image. (b) Result image by SSR. (c) Result image by MSR. (d) Result image by MSRCR. (e) Result image by SC_MSRCR (proposed). (f) Wu et al.’s results [1]. (g) Si et al.’s results [20]. (h) Yu et al.’s results [25]. (i) He et al.’s results [4]. (j) Berman et al.’s results [11]. (k) Zhu et al.’s results [9]. (l) Our results.
Figure 8. Comparison of representative gray images dedusting results. (a) Dust image. (b) Result image by SSR. (c) Result image by MSR. (d) Result image by MSRCR. (e) Result image by SC_MSRCR (proposed). (f) Wu et al.’s results [1]. (g) Si et al.’s results [20]. (h) Yu et al.’s results [25]. (i) He et al.’s results [4]. (j) Berman et al.’s results [11]. (k) Zhu et al.’s results [9]. (l) Our results.
Figure 9. Structural images corresponding to Figure 8. (a) Dust image. (b) Result image by SSR. (c) Result image by MSR. (d) Result image by MSRCR. (e) Result image by SC_MSRCR (proposed). (f) Wu et al.’s results [1]. (g) Si et al.’s results [20]. (h) Yu et al.’s results [25]. (i) He et al.’s results [4]. (j) Berman et al.’s results [11]. (k) Zhu et al.’s results [9]. (l) Our results.
Figure 10. Comparison of three typical images of dedusting results. (A)–(C) are three typical images in Figure 5. (a) Dust image. (b) Result image by MSRCR. (c) Berman et al.’s results. (d) Zhu et al.’s results. (e) Wu et al.’s results. (f) Ours.
Figure 11. Comparison of IE, STD and AG of the images in Figure 10. (A)–(C) are three typical images in Figure 5.
Figure 12. Statistical analysis in terms of IE.
Figure 13. Statistical analysis in terms of STD.
Figure 14. Statistical analysis in terms of AG.
Table 1. Objective results of dedusting images in Figure 6.

| Method | IE | STD | AG | Method | IE | STD | AG |
|---|---|---|---|---|---|---|---|
| (a) | 9.7458 | 30.0493 | 5.6163 | (g) | 12.5630 | 34.2773 | 5.2193 |
| (b) | 12.9568 | 52.1407 | 7.4134 | (h) | 12.7541 | 51.3485 | 5.7709 |
| (c) | 12.9377 | 51.5041 | 7.3264 | (i) | 10.1574 | 31.0655 | 3.7548 |
| (d) | 12.8914 | 45.585 | 7.3484 | (j) | 12.1683 | 50.1933 | 7.2618 |
| (e) | 12.1539 | 38.9266 | 6.0602 | (k) | 12.7920 | 59.8192 | 7.9734 |
| (f) | 13.1725 | 49.462 | 6.1751 | (l) | 13.3498 | 60.3704 | 8.5584 |
Table 2. Objective results of dedusting images in Figure 8.

| Method | IE | STD | AG | Method | IE | STD | AG |
|---|---|---|---|---|---|---|---|
| (a) | 7.1071 | 36.8321 | 1.2237 | (g) | 10.0141 | 47.1495 | 1.5816 |
| (b) | 9.2069 | 44.4457 | 1.9141 | (h) | 7.8492 | 64.775 | 1.8164 |
| (c) | 7.4423 | 45.0672 | 1.8816 | (i) | 6.1019 | 29.6587 | 1.3047 |
| (d) | 7.4085 | 48.2055 | 1.6940 | (j) | 7.5408 | 47.9058 | 1.7299 |
| (e) | 7.3660 | 44.0465 | 1.7667 | (k) | 7.5334 | 55.0708 | 2.0496 |
| (f) | 7.2324 | 47.8882 | 2.1632 | (l) | 8.8026 | 60.7208 | 2.6943 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Fu, C.; Lu, F.; Zhang, X.; Zhang, G. Joint Dedusting and Enhancement of Top-Coal Caving Face via Single-Channel Retinex-Based Method with Frequency Domain Prior Information. Symmetry 2021, 13, 2097. https://doi.org/10.3390/sym13112097

