Article

An Improved Dark Channel Prior Method for Video Defogging and Its FPGA Implementation

1 School of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin 644000, China
2 Intelligent Perception and Control Key Laboratory of Sichuan Province, Yibin 644000, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(6), 839; https://doi.org/10.3390/sym17060839
Submission received: 29 March 2025 / Revised: 21 May 2025 / Accepted: 26 May 2025 / Published: 27 May 2025
(This article belongs to the Section Engineering and Materials)

Abstract

In fog, rain, snow, haze, and other complex environments, images of environmental objects captured by imaging equipment are prone to blurring, contrast degradation, and other problems. This decline in image quality fails to satisfy the requirements of application scenarios such as video surveillance, satellite reconnaissance, and target tracking. Addressing the shortcomings of the traditional dark channel prior algorithm in video defogging, this paper proposes an improved guided filtering algorithm to refine the transmittance image and reduce the halo effect of the traditional algorithm. Meanwhile, a gamma correction method is proposed to recover the defogged image and enhance image details in low-light environments. A parallel symmetric pipeline design on the FPGA improves the system's overall stability. The improved dark channel prior algorithm is realized through hardware–software co-design on ARM and FPGA. Experiments show that the algorithm improves the Underwater Image Quality Measure (UIQM), Average Gradient (AG), and Information Entropy (IE) of the image, while the system stably processes video at a resolution of 1280 × 720 @ 60 fps. Board-level numerical analysis of power consumption and resource usage shows that the FPGA consumes only 2.242 W, placing the hardware circuit design in the low-power category.

1. Introduction

Computer vision has become more widely used with the addition of artificial intelligence. However, in environments with fog, haze, smoke, or snow, the scattering effect of fine particles in the air notably reduces the quality of acquired images, causing degradation such as blurring and color distortion [1]. This serious deterioration in image quality makes the images unable to meet the needs of automatic driving, satellite reconnaissance, target tracking, and other application scenarios. Defogging can significantly improve image visibility and correct the color deviation produced by the refraction of suspended particles. Image defogging algorithms are mainly classified into three categories [2,3]: physical model-based image restoration [4,5,6,7,8], non-physical model-based image enhancement [9,10,11], and deep learning-based methods [12,13,14]. Currently, the dark channel prior (DCP) proposed by Kaiming He et al. [4] remains the dominant defogging algorithm, but traditional DCP algorithms perform poorly on hardware. The computation of the dark channel image, the transmittance image, and the image restoration therefore needs continuous optimization, using techniques including mean filtering, soft matting, Gaussian filtering [15,16], guided filtering [10,12,17], and bilateral filtering [18]. In addition, to realize real-time video defogging, traditional graphics processors (GPUs) struggle to meet real-time demands and suffer from "power wall" and "memory wall" problems. More and more researchers therefore exploit the FPGA's high parallelism, low power consumption, rich logic resources, independent IP cores, and strong programmability to realize video image defogging [5,6,7,9,11].
Optimizing video image defogging algorithms is gaining more attention as the requirements for edge deployment performance and defogging quality increase. Zhang et al. [5] used a low-pass filter to smooth the transmittance image and further reduce the computational complexity of the dark channel prior algorithm. Li et al. [6] realized video image defogging from the perspective of the traditional dark channel. Liu et al. [7] approached the problem from sky region segmentation, reducing sky color distortion and estimating atmospheric light values and transmittance more accurately. Yoong [9] implemented real-time image defogging by deploying a Contrast-Limited Adaptive Histogram Equalization (CLAHE) algorithm on an FPGA. Munaf et al. [11] implemented image enhancement under low light using the Retinex algorithm, written in Verilog HDL and deployed on Xilinx FPGAs. Lv et al. [12] increased the speed of hardware defogging with differential guided filtering and a parallel grayscale linear-stretch fusion defogging hardware architecture. Teng et al. [13] optimized a convolutional neural network structure from the perspective of lightweight neural networks and achieved video processing at 105 frames per second.
Given the need for video image defogging to be deployed at the edge, we improve the dark channel prior algorithm for video defogging. The contributions of this paper are as follows:
(1)
An improved guided filtering algorithm refines the transmittance images to reduce the halo effect;
(2)
A gamma correction method is proposed to enhance the image during the restoration process;
(3)
While improving defogging quality, the system stably achieves a video processing speed of 1280 × 720 @ 60 fps.
This paper is organized as follows: Section 2 presents the theory of the dark channel prior algorithm, the guided filtering algorithm, and gamma correction. Section 3 describes the overall flow of the improved dark channel prior algorithm and analyzes in detail the dark channel image module, the transmittance image module, and the image restoration module. Section 4 describes the ZYNQ-based experimental platform. Section 5 presents the experimental results, analyzed in detail in terms of image quality, hardware power consumption, and resource usage. Section 6 concludes the paper.

2. Principle of Image Defogging Algorithm

2.1. Dark Channel Prior Theory

The atmospheric scattering model [2,3,14] influenced Kaiming He et al. to propose the dark channel prior theory. Observation of fog-free pictures shows that, for an arbitrary input image, in the absence of fog at least one of the three color channels R, G, and B has low pixel values, and its dark channel value tends to zero, which can be formulated as:
J_{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} J_c(y) \right) \to 0 \qquad (1)
where J_c(y) represents the value of the pixel on color channel c (red, green, or blue), \Omega(x) represents a square window centered on pixel x with a filter of size n \times n, and J_{dark}(x) represents the dark channel image.
The atmospheric scattering model shows that light propagating in the atmosphere undergoes energy attenuation and path radiance through the physical mechanisms of Rayleigh scattering and Mie scattering, as shown in Figure 1. Its mathematical expression is:
I(x) = J(x)\,t(x) + A\,(1 - t(x)) \qquad (2)
where I(x) represents the foggy image captured by the imaging device; J(x) represents the fog-free image to be recovered; t(x) represents the image transmittance, which reflects the strength of the light reaching the imaging device through the particulate medium; and A represents the atmospheric light value. J(x)t(x) represents the light from environmental objects that enters the imaging device after atmospheric scattering attenuation, and A(1 - t(x)) represents the atmospheric light that enters the imaging device after scattering through the particulate medium.
To find the atmospheric transmittance \tilde{t}(x), the image is refined window by window; normalizing Equation (2) by A and applying the two minimum filtering operations gives:
\min_{y \in \Omega(x)} \min_{c \in \{r,g,b\}} \frac{I_c(y)}{A} = \tilde{t}(x) \min_{y \in \Omega(x)} \min_{c \in \{r,g,b\}} \frac{J_c(y)}{A} + 1 - \tilde{t}(x) \qquad (3)
where \tilde{t}(x) is the transmittance in the local region \Omega(x). According to the dark channel prior principle, the dark channel value of the fog-free image tends to zero, so combining Equations (1)–(3) gives the estimated transmittance:
\tilde{t}(x) = 1 - \min_{y \in \Omega(x)} \min_{c \in \{r,g,b\}} \frac{I_c(y)}{A} \qquad (4)
In the atmospheric environment, the influence of the particulate medium on object imaging exists both day and night. Therefore, a fog influence factor w (0 < w < 1) is introduced into Equation (4). Usually w = 0.95, which preserves a moderate level of haze and makes the defogged image more realistic and reliable. The transmittance with the fog influence factor is calculated as:
\tilde{t}(x) = 1 - w \min_{y \in \Omega(x)} \min_{c \in \{r,g,b\}} \frac{I_c(y)}{A} \qquad (5)
When the transmittance \tilde{t}(x) tends to zero, the value of J(x) becomes large, producing an overall white-field effect in the image. To avoid this, a lower threshold t_0 = 0.1 is usually set for \tilde{t}(x), and the recovered fog-free image is expressed as:
J(x) = \frac{I(x) - A}{\max(\tilde{t}(x),\, t_0)} + A \qquad (6)
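As a compact reference for Equations (1)–(6), the steps above can be sketched in NumPy as follows. The function names, the small window size, and the simple brightest-dark-channel-pixel estimate of A are illustrative assumptions, not the paper's hardware implementation:

```python
import numpy as np

def min_filter(ch, win):
    """Sliding-window minimum over a win x win neighborhood (edge-padded)."""
    r = win // 2
    p = np.pad(ch, r, mode='edge')
    H, W = ch.shape
    out = np.full_like(ch, np.inf)
    for dy in range(win):
        for dx in range(win):
            out = np.minimum(out, p[dy:dy + H, dx:dx + W])
    return out

def dcp_defog(img, w=0.95, t0=0.1, win=3):
    """Dark channel prior defogging per Eqs. (1)-(6); img in [0, 1], (H, W, 3)."""
    # Eq. (1): per-pixel RGB minimum, then a local minimum filter
    dark = min_filter(img.min(axis=2), win)
    # A: RGB value at the brightest pixel of the dark channel
    A = img[np.unravel_index(np.argmax(dark), dark.shape)]
    # Eq. (5): coarse transmittance with fog influence factor w
    t = 1.0 - w * min_filter((img / A).min(axis=2), win)
    # Eq. (6): restoration with lower bound t0 on the transmittance
    return np.clip((img - A) / np.maximum(t, t0)[..., None] + A, 0.0, 1.0)
```

The clamp by t0 in the last line is exactly the max(\tilde{t}(x), t_0) term of Equation (6); without it, pixels with near-zero transmittance would be amplified toward pure white.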

2.2. Guided Filtering Theory

In a two-dimensional plane, the guided filter [10,12,16,17,19] assumes a local linear relationship between the guide image and the output image, which preserves edge information; it is an efficient, edge-preserving filtering method, defined as:
q_i = a_k I_i + b_k, \quad \forall i \in w_k \qquad (7)
where q represents the output image pixel value; a_k and b_k are coefficients that are fixed while the window w_k is centered at k; I represents the guide image pixel value; and i and k denote pixel positions. To determine the linear coefficients, the difference between the filter output and the filter input is minimized by introducing a cost function over the window:
E(a_k, b_k) = \sum_{i \in w_k} \left( (a_k I_i + b_k - p_i)^2 + \varepsilon a_k^2 \right) \qquad (8)
where \varepsilon represents the regularization factor that prevents the parameter a_k from becoming too large, and p_i represents the pixel value at position i in the target image. In this paper, p is the coarse transmittance image to be filtered, and I is the grayscale image of the original image used as the guide image; weights derived from the guide image are applied to the image to be filtered, p. Solving Equation (8) by linear regression gives:
a_k = \frac{\frac{1}{|w|} \sum_{i \in w_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \varepsilon} \qquad (9)
b_k = \bar{p}_k - a_k \mu_k \qquad (10)
where \mu_k and \sigma_k^2 represent the mean and variance within the guide image window, respectively; |w| represents the number of pixels in the window; and \bar{p}_k represents the mean of the input image p within the window. The linear model is applied to all local windows over the whole image and averaged:
q_i = \frac{1}{|w|} \sum_{k:\, i \in w_k} (a_k I_i + b_k) = \bar{a}_i I_i + \bar{b}_i \qquad (11)
where \bar{a}_i = \frac{1}{|w|} \sum_{k \in w_i} a_k and \bar{b}_i = \frac{1}{|w|} \sum_{k \in w_i} b_k are the averages over all windows containing pixel i; it follows that \bar{a}_i and \bar{b}_i are spatially varying.
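Equations (7)–(11) translate directly into box-filter arithmetic. The following NumPy sketch (function names are our own, and the naive padded box mean is an assumption for clarity) computes a_k, b_k, and the averaged output in exactly that order:

```python
import numpy as np

def box_mean(x, r):
    """(2r+1) x (2r+1) box mean using edge padding (pure NumPy sketch)."""
    k = 2 * r + 1
    p = np.pad(x, r, mode='edge')
    H, W = x.shape
    acc = np.zeros_like(x, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            acc += p[dy:dy + H, dx:dx + W]
    return acc / (k * k)

def guided_filter(I, p, r=1, eps=1e-3):
    """Eqs. (7)-(11): I = grayscale guide, p = coarse transmittance."""
    mu_I, mu_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mu_I * mu_I        # sigma_k^2
    cov_Ip = box_mean(I * p, r) - mu_I * mu_p
    a = cov_Ip / (var_I + eps)                      # Eq. (9)
    b = mu_p - a * mu_I                             # Eq. (10)
    # Eq. (11): average a, b over all windows covering each pixel
    return box_mean(a, r) * I + box_mean(b, r)
```

Note how edges are preserved: where the guide image is flat, var_I is near zero, a collapses to zero, and the output is the local mean of p; near strong edges in I, a stays close to one and the edge passes through.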

2.3. Gamma Correction

To counteract the nonlinear response of display devices, gamma correction [3] is a common means of enhancing image contrast, as shown in Figure 2. For a single gamma transformation, the mapping relation can be expressed as:
G(I) = \mathrm{round}\left( 255 \times \left( \frac{I}{255} \right)^{\gamma} \right) \qquad (12)
where I is the original image; G(I) is the output image after the gamma transformation; \gamma is the gamma correction coefficient; and \mathrm{round}(\cdot) is a rounding operation. When 0 < \gamma < 1, the details of dark areas are enhanced and the image becomes brighter overall; when \gamma = 1, the image is unchanged; when \gamma > 1, the details of highlighted areas are enhanced and the image becomes darker overall. Gamma correction with \gamma = 1/1.3 is used in this work to improve the contrast of the defogged image.
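A minimal sketch of Equation (12), useful for checking the brighten/darken behavior described above (the function name is illustrative):

```python
import numpy as np

def gamma_correct(I, gamma):
    """Eq. (12): G(I) = round(255 * (I / 255) ** gamma) for 8-bit values."""
    I = np.asarray(I, dtype=np.float64)
    return np.round(255.0 * (I / 255.0) ** gamma).astype(np.uint8)
```

With gamma = 1/1.3 a mid-dark pixel such as 64 is lifted well above 64, while with gamma = 1 every value maps to itself, matching the three cases listed above.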

3. Systematic Implementation of Dark Channel Prior Algorithms

Through an in-depth analysis of the traditional dark channel prior defogging algorithm, this paper adopts a guided filtering algorithm for transmittance calculation and uses gamma correction to optimize the defogged image; the overall algorithm structure is shown in Figure 3. The process contains three parts: a dark channel calculation module, an atmospheric light value and transmittance module, and an image restoration module. First, in the dark channel calculation module, the minimum of the RGB channels is calculated for the original image, and minimum value filtering is then performed to generate the dark channel image J_{dark}. In the atmospheric light value and transmittance calculation module, the atmospheric light value A is taken as the maximum RGB component of the pixel with the largest dark channel value. According to Equations (4) and (5), and after repeated comparisons, this paper chooses the fog factor w = 0.9 as the best value; the refined transmittance \tilde{t}(x) is obtained by taking the coarse transmittance as the input image and performing guided filtering. The guide image is obtained as the luminance-weighted grayscale of the original image: I = 0.299R + 0.587G + 0.114B. In the image restoration module, the A and \tilde{t}(x) calculated in the previous modules are substituted into Equation (6) to obtain the restored image. The final defogged image is then obtained by gamma correction.

3.1. Dark Channel Image Research

The concentration of haze in the environment shows a linear relationship with the dark channel value: the denser the haze, the larger the value, and the lighter the haze, the smaller the value. Dark channel images are mainly acquired by minimum value filtering [3,5], Gaussian filtering [3,15], mean value filtering, and other methods. This paper uses the minimum value filtering algorithm [20]: a minimum value map is first generated across the three RGB channels of the foggy image, and this map is then minimum-filtered to obtain the dark channel image. The filter traverses the entire image with a sliding window and replaces the center pixel's value with the minimum pixel value inside the window. To save logic resources such as BRAM and DSP while preserving edge detail, this paper uses a 3 × 3 filtering window instead of the traditional 5 × 5 window. To form the 3 × 3 window, two lines of the input image must be buffered [20], which is accomplished with two FIFOs, as shown in Figure 4. In operation, the first line of the fog image is written to FIFO2 and then passed on to FIFO1; the second line is written to FIFO2, and once it is fully written, FIFO1 reads out the first line, FIFO2 reads out the second line, and the third line is output directly. By exploiting separability, a vertical minimum operation is applied to the data output from the delay module, followed by a comparator matrix on the result to obtain the window minimum. The 3 × 3 minimum filtering module designed here reduces the traditional eight minimum operations to four while improving the module's computational efficiency.
With each clock signal, the minimum of the RGB channels and the filtered minimum are computed continuously, and the pixels are finally reassembled at the frame rate to obtain the final dark channel image.
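The separable scheme described above (a vertical 3-tap minimum fed by the line buffers, then a horizontal 3-tap minimum) can be modeled in NumPy to verify that four comparisons per pixel reproduce the full 3 × 3 minimum. This software model is illustrative, not the RTL itself:

```python
import numpy as np

def min3x3_separable(img):
    """Separable 3x3 minimum filter mimicking the FIFO line-buffer scheme:
    a vertical 3-tap minimum per column (2 compares), then a horizontal
    3-tap minimum on that result (2 compares) -- 4 ops instead of 8.
    Edges are replicated, matching typical hardware padding."""
    p = np.pad(img, 1, mode='edge')
    # Vertical stage: minimum over the 3 buffered rows (FIFO1, FIFO2, live row)
    v = np.minimum(np.minimum(p[:-2, :], p[1:-1, :]), p[2:, :])
    # Horizontal stage: minimum over 3 adjacent columns of the vertical result
    return np.minimum(np.minimum(v[:, :-2], v[:, 1:-1]), v[:, 2:])
```

Because min(min(a, b), c) over rows and then columns covers every element of the 3 × 3 window exactly once, the separable result is identical to the brute-force window minimum.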

3.2. Transmittance Image Research

A transmittance image gives the proportion of light intensity that is neither scattered nor absorbed while propagating through the suspended-particle medium, reflecting the fog concentration at each pixel. For the calculation of transmittance, Zhang et al. [5] used a low-pass filter to optimize the transmittance, which may harm image restoration due to the inherent shortcomings of low-pass filtering. Rahmawati et al. [21] proposed Laplace filtering to enhance the estimation of the transmittance image, making the edge details of the low-brightness channel image more visible. Xue et al. [22] used a multi-scale approach to optimize and fuse transmittance images, improving system robustness from the perspective of image enhancement. Liu et al. [2] proposed a dark channel-based adaptive defogging algorithm to improve the estimation of background light and transmittance according to the optical characteristics of underwater imaging devices. A soft matting mechanism within the traditional dark channel method can optimize the transmittance well, but it suffers from heavy computation and high complexity, making efficient implementation difficult. Instead, this paper adopts the guided filtering algorithm to refine the transmittance, eliminating the block effect, and designs a 3 × 3 guided filter to improve detail retention, as shown in Figure 5. In the implementation, the data stream and algorithm logic are processed efficiently by the computation unit, and fixed-point operations are combined with IP cores for efficient calculation, which is well suited to hardware realization.
First, in the FIFO row buffer, each column of the coarse transmittance image pixel data matrix P is added and summed, and then the square value of each pixel is obtained and summed. The formula is:
A_0 = P_{00} + P_{10} + P_{20}, \quad sum1\_A = A_0 + A_1 + A_2 \qquad (13)
B_0 = P_{00}^2 + P_{10}^2 + P_{20}^2, \quad sum2\_B = B_0 + B_1 + B_2 \qquad (14)
where P_{nn} represents a pixel value within the current filter window; P_{nn}^2 represents its square; A_n represents the column sum of the pixel values; B_n represents the column sum of the squared pixel values; and sum1\_A and sum2\_B represent the sums of A_n and B_n, respectively. A fixed-point multiplication approximation is then applied to sum1\_A and sum2\_B, and a and b are solved:
P_m = (sum1\_A \times 114) \gg 10
PP_m = (sum2\_B \times 114) \gg 10
a = \frac{(PP_m - P_m \cdot P_m) \times 1024}{(PP_m - P_m \cdot P_m) + E}
b = (1024 - a) \cdot P_m \qquad (15)
where (sum1\_A \times 114) \gg 10 and (sum2\_B \times 114) \gg 10 are realized by a multiplier and shift register, approximating the division by 9 via 114/1024 \approx 1/9; P_m represents the mean of the pixels in the current filter window; PP_m represents the mean of the squared pixels in the current filter window; and a and b are the computational parameters in the current filter window. E is the regularization term in the guided filter formula; its value is 100 in this paper, preventing division by zero and improving stability. The steps of Equations (13) and (15) are then repeated for the matrices traversed by the parameters a and b: each column is summed, and the fixed-point multiplication is applied to find the averaged a_m and b_m. Finally, a_m is multiplied by I and summed with b_m. The final transmittance image data \tilde{t} is:
\tilde{t} = a_m I + b_m \qquad (16)
where P is the coarse transmittance input image, and I is the guide image data from the grayscale of the original image. The filter latency requires delaying the image I by three rows (one row for calculating J_{dark}, one for calculating a and b, and one for calculating a_m and b_m). The process maintains bit-width consistency between the grayscale image and the coarse transmittance image while exploiting separability and stream processing, i.e., reducing the number of calculations by first processing the data in the vertical direction and then in the horizontal direction. In addition, floating-point operations are converted to fixed-point operations to avoid overflow while retaining sufficient data precision. With each clock signal, the filter continuously performs filtering operations and finally reconstructs the pixel data to obtain a refined transmittance image.
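The fixed-point arithmetic of Equation (15) can be checked with a small software model; the helper names below are our own, and the Q10 (scale-by-1024) bookkeeping follows the shift-and-multiply scheme described above:

```python
E = 100  # regularization value used in this design (Eq. 15)

def div9_fixed(s):
    """Approximate s / 9 as (s * 114) >> 10, since 114 / 1024 ~ 1 / 9."""
    return (s * 114) >> 10

def window_ab(pix):
    """a (scaled by 1024) and b for one 3x3 window of 8-bit pixels, per
    Eqs. (13)-(15); pix is a list of 9 integers in [0, 255]."""
    sum1_A = sum(pix)                    # Eq. (13): sum of pixel values
    sum2_B = sum(v * v for v in pix)     # Eq. (14): sum of squared values
    Pm = div9_fixed(sum1_A)              # window mean
    PPm = div9_fixed(sum2_B)             # window mean of squares
    var = PPm - Pm * Pm                  # variance estimate
    a = (var * 1024) // (var + E)        # Eq. (15), a in Q10 fixed point
    b = (1024 - a) * Pm                  # b carries the same Q10 scale
    return a, b
```

A flat window maps back to itself after the final right shift: for pix = [100] * 9, (a * 100 + b) >> 10 returns 100, confirming that the Q10 scaling cancels in Equation (16).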

3.3. Image Restoration Research

Image restoration is the computation of a fog-free image using the estimated atmospheric light value and transmittance map to recover the true color and contrast of the scene. During restoration, the loss of detail in low-brightness regions should be minimized, and over-enhancement in high-brightness regions avoided. Yu et al. [23] used an adaptive histogram equalization algorithm to enhance the contrast of output images from underwater imaging environments. Zhang et al. [16] used contrast stretching during restoration to enhance the overall brightness of the image. Furthermore, this paper proposes a gamma correction approach to bring the recovered image closer to natural human visual perception. Gamma correction is a nonlinear exponential mapping acting on grayscale values. This paper uses the look-up table (LUT) method to realize the gamma transform; for \gamma = 1/1.3, the expression is:
G(I) = 255 \times \left( \frac{I}{255} \right)^{1/1.3} \qquad (17)
Each of the three RGB color channels is then gamma corrected to obtain new pixel data, as shown in Figure 6. By modularizing the correction for each color channel individually, the details of the low-brightness regions are enhanced, and the overall contrast of the image is improved. In addition, a data delay register module is designed to ensure correct timing logic and prevent image distortion.
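A software model of the LUT approach, assuming a 256-entry table precomputed from Equation (17) so that one table read replaces the per-pixel power computation (names are illustrative, not the hardware module's):

```python
import numpy as np

# 256-entry ROM table for Eq. (17) with gamma = 1/1.3: each possible
# 8-bit input value is mapped once, offline.
GAMMA_LUT = np.round(
    255.0 * (np.arange(256) / 255.0) ** (1 / 1.3)).astype(np.uint8)

def gamma_lut_rgb(img):
    """Apply the LUT to each RGB channel independently (uint8 image)."""
    return GAMMA_LUT[img]  # fancy indexing acts as a per-pixel ROM lookup
```

Since the table is monotonic and fixed, applying it per channel is cheap and order-independent, which is why the three channel corrections can run as identical parallel modules in hardware.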

4. Analysis of ZYNQ-Based Experimental Platform

4.1. Experimental Platforms

The experimental platform consists of an Intel i5-12600KF CPU and an RTX 4060 GPU (Windows 11), the MATLAB (version R2023a) software platform, and the Vivado and Vitis (version 2023.2) hardware platforms; the development board is a ZYNQ-7020 series board from Xilinx. The hardware platform includes a video capture unit, video processing unit, video caching unit, and video display unit; the real-time defogging system block diagram and the experimental development environment are shown in Figure 7 and Figure 8, respectively. In Figure 8, the left display shows the defogged image and the right shows the foggy image. Video acquisition unit: real-time data acquisition through an OV5640 camera capable of 1280 × 720 @ 60 fps video. Video processing unit: the optimized dark channel prior algorithm is encapsulated as a RAM IP core, forming a complete IP Block Design module. Video cache unit: DDR3 is used for frame caching of each frame of captured and processed data. Video display unit: high-speed image data transmission is achieved by configuring VDMA IP cores and converting the video data in DDR3 for an HDMI display.

4.2. ZYNQ System Architecture

The ZYNQ platform in this experiment uses the XC7Z020CLG400 chip [7,24,25,26], which integrates a dual-core ARM Cortex-A9 processing system (PS) and FPGA programmable logic (PL) based on the Artix-7 architecture. The PS handles image acquisition and storage, while the PL performs image processing and display. Hardware and software are interconnected through the high-speed AXI bus to realize an efficient video image defogging system. The core experimental platform is shown in Figure 9. The development board's clock frequency is usually 100 MHz, and this design uses 75 MHz; the on-chip memory is 256 KB; there are 85,000 logic cells; the number of I/O pins is 200; the maximum operating temperature is 100 degrees Celsius; and the external interfaces support USB 2.0, DDR3, I2C, UART, and so on.

5. Experimental Results and Data Analysis

5.1. Defogging Effect Analysis

To analyze the defogging performance of this paper's algorithm, the defogging effects of References [4,27,28] were compared, as shown in Figure 10, with five scene pictures labeled (1), (2), (3), (4), and (5) from top to bottom. As illustrated in the figure, picture (1) processed by the algorithm of Reference [27] has a more obvious halo effect, the transition between distant and near objects is unnatural, and the effect is worse in the low-light environment of picture (5). The algorithm of Reference [4] shows an obvious blocky effect in picture (4) and a color shift for the distant objects in picture (2). Images processed by the algorithm of Reference [28] are darker overall, with shortcomings such as sky color aberration in pictures (2) and (3). In contrast, the defogged pictures processed by the algorithm in this paper show no obvious halo effect or color distortion, the transition between near and distant objects is more natural, and the picture clarity is higher.
This paper objectively analyzes the defogging effect using four indexes: UIQM, AG, IE, and running time. The defogged images from the proposed algorithm are compared numerically with the algorithms from References [4,27,28]. UIQM is a weighted combination of the underwater image color measurement (UICM), underwater image sharpness measurement (UISM), and underwater image contrast measurement (UIConM); the higher the UIQM value, the better the image's visual quality. AG is an edge detection measure based on the Sobel operator, which globally averages the gradient magnitude in the horizontal and vertical directions of the image; the larger the AG value, the clearer the image edges and the richer the details. IE is a statistical measure of the distribution of pixel values in an image; the larger the IE, the more information the picture expresses. In this paper, IE is expressed by Shannon entropy. The running time is the per-frame simulation time on the PC software side and the FPGA hardware side. As seen in Table 1, the AG and IE values of images (1)–(5) processed by this paper's algorithm are larger than those of the other three algorithms, while the UIQM values of images (3) and (4) are slightly lower than those of Reference [27], which may be because gamma correction changes the contrast of the images. In addition, the overall running time of this algorithm is shorter and more efficient. Overall, the algorithm in this paper performs better.
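For reference, the AG and IE metrics as described above can be computed with the following NumPy sketch (a Sobel-based mean gradient magnitude and the Shannon entropy of the 8-bit histogram); this is an illustrative reimplementation, not the evaluation code used in the paper:

```python
import numpy as np

def _corr3(img, k):
    """3x3 correlation with edge padding (helper for the Sobel gradients)."""
    p = np.pad(img, 1, mode='edge')
    H, W = img.shape
    out = np.zeros((H, W))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + H, dx:dx + W]
    return out

def average_gradient(gray):
    """AG: mean Sobel gradient magnitude over a grayscale float image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    gx, gy = _corr3(gray, kx), _corr3(gray, kx.T)
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))

def shannon_entropy(img_u8):
    """IE: Shannon entropy (bits) of the 8-bit gray-level histogram."""
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

A flat image yields AG = 0 and IE = 0, while an image using all 256 gray levels equally yields the maximum IE of 8 bits, which is why defogging that recovers detail raises both metrics.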

5.2. Power Consumption and Resource Utilization Analysis

Based on the ZYNQ-7020 series development board, the power consumption of the video image defogging design is shown in Table 2: dynamic power accounts for 93% of the total at 2.082 W, static power accounts for 7% at 0.160 W, and the overall power consumption is 2.242 W. Compared with Reference [29] (2.283 W) and Reference [30] (6.958 W), the power consumption in this paper is lower. The resource utilization of the FPGA platform is shown in Table 3. Compared with Reference [29], this design uses less BRAM and DSP but more LUT and FF resources; this may be attributed to the guided filter and gamma correction modules, which add a large number of logic operation units. Compared with Reference [30], this design uses fewer LUT, FF, BRAM, and DSP resources, which supports the overall development of the system. Overall, the design falls into the category of low energy consumption and high efficiency.

6. Conclusions

To meet the needs of video image defogging on edge devices, the dark channel prior algorithm is optimized: guided filtering is improved to refine the transmittance image, and gamma correction is used to restore the fog-free image. The system adopts an FPGA parallel pipeline design, is mainly applicable to foggy scenes, reduces computational complexity, and is well suited to hardware porting. Experimental results of the FPGA-based video image defogging algorithm show that the image is improved in terms of UIQM, AG, and IE, and the system can stably process 1280 × 720 @ 60 fps video with low board-level power and resource consumption, meeting the requirements of engineering applications. Further optimization for dense fog scenes will be carried out in future work.

Author Contributions

Conceptualization: Z.L.; methodology: L.W.; investigation: L.G.; writing—original draft preparation: L.W.; writing—review and editing: Z.L.; supervision: Z.L.; project administration: Z.L.; funding acquisition: Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 61801319, in part by the Sichuan Science and Technology Program under Grant 2020JDJQ0061 and 2021YFG0099, in part by the Opening Project of Artificial Intelligence Key Laboratory of Sichuan Province under Grant 2021RZJ01, in part by the Scientific Research and Innovation Team Program of Sichuan University of Science and Engineering under Grant SUSE652A011, in part by the Postgraduate Innovation Fund Project of Sichuan University of Science and Engineering under Grant Y2024301, and in part by the Exploration and Practice of the Path to Improve the Quality of Master’s Degree Cultivation of Electronic Information Students Empowered by Numerical Intelligence JG202405.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wu, X.; Chen, X.; Wang, X.; Zhang, X.; Yuan, S.; Sun, B.; Huang, X.; Liu, L. A real-time framework for HD video defogging using modified dark channel prior. J. Real-Time Image Process. 2024, 21, 55.
  2. Liu, S.; Chen, P.; Lan, J.; Li, J.; Shen, Z.; Wang, Z. Underwater image restoration via multiscale optical attenuation compensation and adaptive dark channel dehazing. Comput. Electr. Eng. 2025, 123, 110228.
  3. Pandey, P.; Gupta, R.; Goel, N. Comprehensive review of single image defogging techniques: Enhancement, prior, and learning based approaches. Artif. Intell. Rev. 2025, 58, 116.
  4. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
  5. Zhang, T. Design of an FPGA-based Image Defogging Algorithm Using Dark Channel Prior. In Proceedings of the 2024 4th International Conference on Electronic Information Engineering and Computer Science (EIECS), Yanji, China, 27–29 September 2024; IEEE: New York, NY, USA, 2024; pp. 464–468.
  6. Li, J.; Guo, Y.; Li, G. FPGA Implementation of Dark Channel Prior and Defogging for Video Images. In Electronic Engineering and Informatics; IOS Press: Amsterdam, The Netherlands, 2024; pp. 705–712.
  7. Liu, B.; Wei, Q.; Ding, K. ZYNQ-Based Visible Light Defogging System Design Realization. Sensors 2024, 24, 2276.
  8. Gang, H.; DaTang, Z.; LaiJun, Y.; WenXin, Y.; Kang, X.; Shuang, C.; ZhenGuo, C. Single image dehazing algorithm using complementary saturation prior. Signal Image Video Process. 2025, 19, 224.
  9. Yoong, N.K.J. CLAHE HLS Implementation on Zynq SoC FPGA; 2024. Available online: https://dr.ntu.edu.sg/handle/10356/176412 (accessed on 28 March 2025).
  10. Sun, A.; Wang, Y.; Yang, Q. Welding Image Enhancement Based on CLAHE and Guided Filter. In Proceedings of the 2024 10th International Conference on Electrical Engineering, Control and Robotics (EECR), Guangzhou, China, 29–31 March 2024; IEEE: New York, NY, USA, 2024; pp. 285–290.
  11. Munaf, S.; Bharathi, A.; Jayanthi, A. FPGA-based low-light image enhancement using Retinex algorithm and coarse-grained reconfigurable architecture. Sci. Rep. 2024, 14, 28770.
  12. Lv, T.; Du, G.; Li, Z.; Wang, X.; Teng, P.; Ni, W.; Ouyang, Y. A fast hardware accelerator for nighttime fog removal based on image fusion. Integration 2024, 99, 102256.
  13. Teng, P.; Du, G.; Li, Z.; Wang, X.; Yin, Y. High-speed hardware accelerator based on brightness improved by light-dehazenet. J. Real-Time Image Process. 2024, 21, 87.
  14. Shen, M.; Lv, T.; Liu, Y.; Zhang, J.; Ju, M. A Comprehensive Review of Traditional and Deep-Learning-Based Defogging Algorithms. Electronics 2024, 13, 3392.
  15. Kumari, A.; Sahoo, S.K. An effective and robust single-image dehazing method based on gamma correction and adaptive Gaussian notch filtering. J. Supercomput. 2024, 80, 9253–9276.
  16. Zhang, S.; Tian, Y.; Shen, L.; Wang, H.; Du, Y.; Chen, H. Single image defogging method based on optimized double dark channel with gaussian weighting. In Proceedings of the 2022 6th International Conference on Electronic Information Technology and Computer Engineering, Xiamen, China, 21–23 October 2022; pp. 1489–1493.
  17. Yadav, S.K.; Sarawadekar, K. Robust multi-scale weighting-based edge-smoothing filter for single image dehazing. Pattern Recognit. 2024, 149, 110137.
  18. Vedavyas, Y.; Harsha, S.S.; Subhash, M.S.; Vasavi, S. Quality Enhancement for Drone Based Video using FPGA. In Proceedings of the 2022 International Conference on Electronics and Renewable Systems (ICEARS), Tuticorin, India, 16–18 March 2022; IEEE: New York, NY, USA, 2022; pp. 29–34.
  19. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409.
  20. Bailey, D.G. Design for Embedded Image Processing on FPGAs; John Wiley & Sons: Hoboken, NJ, USA, 2023.
  21. Rahmawati, L.; Rustad, S.; Marjuni, A.; Arief, M.; Soeleman, C.S.; Shidik, G.F. Transmission Map Refinement Using Laplacian Transform on Single Image Dehazing Based on Dark Channel Prior Approach. Cybern. Inf. Technol. 2024, 24, 126–142.
  22. Xue, Q.; Hu, H.; Bai, Y.; Cheng, R.; Wang, P.; Song, N. Underwater image enhancement algorithm based on color correction and contrast enhancement. Vis. Comput. 2024, 40, 5475–5502.
  23. Yu, J.; Zhang, J.; Li, B.; Ni, X.; Mei, J. Nighttime image dehazing based on bright and dark channel prior and gaussian mixture model. In Proceedings of the 2023 6th International Conference on Image and Graphics Processing, Chongqing, China, 6–8 January 2023; pp. 44–50.
  24. Akash, E.; Ashokkumar, S.; Banupriya, N.; Thangam, K.S.; Kumar, N.S.; Jananeshwaran, M. Hardware Implementations of Dehazing Algorithms. In Proceedings of the 2024 11th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 28 February–1 March 2024; IEEE: New York, NY, USA, 2024; pp. 1654–1661.
  25. Ngo, D.; Kang, B. A Symmetric Multiprocessor System-on-a-Chip-Based Solution for Real-Time Image Dehazing. Symmetry 2024, 16, 653.
  26. Suo, H.; Guan, J.; Ma, M.; Huo, Y.; Cheng, Y.; Wei, N.; Zhang, L. Dynamic dark channel prior dehazing with polarization. Appl. Sci. 2023, 13, 10475.
  27. Ehsan, S.M.; Imran, M.; Ullah, A.; Elbasi, E. A single image dehazing technique using the dual transmission maps strategy and gradient-domain guided image filtering. IEEE Access 2021, 9, 89055–89063.
  28. Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 2015, 24, 3522–3533.
  29. Cong, H. Optimization and Implementation of ZYNQ-Based Dark Channel Prior Defogging Algorithm. Master's Thesis, Sichuan Normal University, Chengdu, China, 2024.
  30. Jianyu, H. Research on FPGA-Based Nighttime Image Enhancement Algorithm. Master's Thesis, China University of Mining and Technology, Xuzhou, China, 2023.
Figure 1. Atmospheric scattering model.
Figure 2. Effect of different gamma values. (a) Original image, (b) γ = 0.5, (c) γ = 1.5, (d) γ = 2.
Figure 3. Flowchart of the improved dark channel prior algorithm.
Figure 4. Minimum value filtering using a separable 3 × 3 minimum filter.
Figure 5. Guided filter design using separable stream processing.
Figure 6. Schematic diagram of gamma correction for color images.
Figure 7. Block diagram of the real-time defogging system.
Figure 8. Physical diagram of the experimental development environment.
Figure 9. Core experimental platform. (a) Development board physical diagram, (b) ZYNQ image defogging system architecture.
Figure 10. Comparison of results of different defogging algorithms. (a) Original image, (b) He's algorithm [4], (c) Ehsan's algorithm [27], (d) Zhu's algorithm [28], (e) algorithm of this paper.
Table 1. Objective quality analysis of different defogging methods.

| Imagery | Algorithm   | IE/bit | UIQM   | AG      | Running Time (PC/FPGA)/s |
|---------|-------------|--------|--------|---------|--------------------------|
| (1)     | Original    | 7.2668 | 3.8923 | 21.5147 | -/-                      |
|         | He [4]      | 7.3827 | 6.8962 | 34.0108 | 4.8684/-                 |
|         | Ehsan [27]  | 7.0880 | 6.9383 | 33.2866 | 7.7856/-                 |
|         | Zhu [28]    | 7.5437 | 6.8943 | 31.9447 | 2.0293/-                 |
|         | Proposed    | 7.5463 | 7.3546 | 42.2765 | 3.1853/0.0138            |
| (2)     | Original    | 7.4554 | 3.0642 | 12.6117 | -/-                      |
|         | He [4]      | 6.9130 | 6.4483 | 14.4420 | 4.3005/-                 |
|         | Ehsan [27]  | 6.7227 | 5.4782 | 15.8163 | 7.9553/-                 |
|         | Zhu [28]    | 7.5427 | 5.3679 | 13.9631 | 1.8057/-                 |
|         | Proposed    | 7.4560 | 6.5842 | 27.7230 | 2.9842/0.0138            |
| (3)     | Original    | 7.2492 | 4.2847 | 22.3559 | -/-                      |
|         | He [4]      | 7.3827 | 7.4891 | 36.6422 | 4.4382/-                 |
|         | Ehsan [27]  | 7.2837 | 8.2204 | 38.3339 | 6.9987/-                 |
|         | Zhu [28]    | 7.4317 | 7.7548 | 31.7208 | 1.7906/-                 |
|         | Proposed    | 7.5903 | 8.0671 | 41.9183 | 3.0853/0.0138            |
| (4)     | Original    | 7.5295 | 4.7281 | 23.6353 | -/-                      |
|         | He [4]      | 6.9065 | 7.8931 | 29.8417 | 4.1062/-                 |
|         | Ehsan [27]  | 6.8268 | 8.0142 | 30.8351 | 7.1346/-                 |
|         | Zhu [28]    | 7.1068 | 7.0763 | 28.7505 | 2.3571/-                 |
|         | Proposed    | 7.1428 | 7.7149 | 36.9836 | 3.0356/0.0138            |
| (5)     | Original    | 7.2107 | 3.3648 | 12.2805 | -/-                      |
|         | He [4]      | 7.0103 | 6.0671 | 18.9904 | 4.7693/-                 |
|         | Ehsan [27]  | 6.7471 | 6.4837 | 21.4487 | 7.6148/-                 |
|         | Zhu [28]    | 7.3504 | 5.3301 | 16.4952 | 1.9250/-                 |
|         | Proposed    | 7.4335 | 6.6748 | 27.1644 | 3.0961/0.0138            |
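For readers reproducing Table 1, two of its columns follow standard definitions: Information Entropy (IE) is the Shannon entropy of the grayscale histogram, and Average Gradient (AG) is the mean magnitude of local intensity differences. The sketch below is not the paper's exact implementation (the authors' preprocessing and the UIQM metric are omitted); it is a minimal NumPy version of the textbook formulas, where `information_entropy` and `average_gradient` are illustrative helper names.

```python
import numpy as np

def information_entropy(gray: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit grayscale image's histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is well defined
    return float(-np.sum(p * np.log2(p)))

def average_gradient(gray: np.ndarray) -> float:
    """Mean RMS of paired horizontal/vertical finite differences."""
    g = gray.astype(np.float64)
    dx = g[:, 1:] - g[:, :-1]   # horizontal differences
    dy = g[1:, :] - g[:-1, :]   # vertical differences
    # combine the two differences over the shared interior region
    grad = np.sqrt((dx[:-1, :] ** 2 + dy[:, :-1] ** 2) / 2.0)
    return float(grad.mean())
```

A perfectly flat image scores 0 on both metrics, and higher values indicate richer detail, which is why the defogged results in Table 1 show larger IE and AG than the originals.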
Table 2. On-chip power consumption.

| Type    | Target  | Power   | Share |
|---------|---------|---------|-------|
| Dynamic | Clocks  | 0.088 W | 4%    |
|         | Signals | 0.103 W | 5%    |
|         | Logic   | 0.093 W | 4%    |
|         | BRAM    | 0.007 W | 1%    |
|         | DSP     | 0.009 W | 1%    |
|         | MMCM    | 0.106 W | 5%    |
|         | I/O     | 0.133 W | 6%    |
|         | PS7     | 1.543 W | 74%   |
| Dynamic total |   | 2.082 W | 93%   |
| Static  | -       | 0.160 W | 7%    |
Table 3. Resource utilization.

| Reference | Resource | Resource Consumption | Available Resources | Utilization/% |
|-----------|----------|----------------------|---------------------|---------------|
| Proposed  | LUT      | 8986                 | 53,200              | 16.89         |
|           | FF       | 12,192               | 106,400             | 11.46         |
|           | BRAM     | 7                    | 140                 | 5.00          |
|           | DSP      | 6                    | 220                 | 2.73          |
| [29]      | LUT      | 7705                 | 53,200              | 14.48         |
|           | FF       | 10,936               | 106,400             | 10.28         |
|           | BRAM     | 12                   | 140                 | 8.57          |
|           | DSP      | 8                    | 220                 | 3.64          |
| [30]      | LUT      | 23,496               | 53,200              | 45            |
|           | FF       | 39,875               | 106,400             | 39            |
|           | BRAM     | 160                  | 220                 | 76            |
|           | DSP      | 42                   | 140                 | 31            |

Share and Cite

Wang, L.; Luo, Z.; Gao, L. An Improved Dark Channel Prior Method for Video Defogging and Its FPGA Implementation. Symmetry 2025, 17, 839. https://doi.org/10.3390/sym17060839