Single-Image Visibility Restoration: A Machine Learning Approach and Its 4K-Capable Hardware Accelerator
Abstract
1. Introduction
- This study is the first attempt to address the problem of false enlargement of white objects. Based on the observation that current methods fail to estimate the atmospheric light correctly in scenes containing white objects, an adaptive compensation scheme is proposed to offset the light in such cases.
- Prior to the aforementioned compensation step, a parallel algorithm based on quad-decomposition is developed to estimate the atmospheric light coarsely (a minimal software sketch follows this list). This method benefits the hardware implementation because it eliminates burdensome image buffers, and it contributes substantially to the hardware architecture’s 4K capability.
- Furthermore, a novel hardware architecture is developed to realize the modified hybrid median filter. Although the previously developed architecture based on Batcher’s sorting network [18] is already compact and fast, the proposed design, which exploits both sorting and merging networks, proves even more efficient. This architecture also contributes significantly to the 4K capability of the proposed hardware accelerator.
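For concreteness, the following is a minimal software sketch of the general quad-decomposition idea: the image is split into four quadrants, the brightest and least textured quadrant is kept, and the process repeats until the surviving region is small enough; the mean color of that region then serves as the coarse atmospheric light estimate. The scoring criterion, the termination size, and the function name are illustrative assumptions rather than the authors’ exact implementation, and the white-object compensation step is omitted.

```python
import numpy as np

def estimate_atmospheric_light(img, min_size=32):
    """Coarse atmospheric light via quad-decomposition (illustrative sketch).

    img: H x W x 3 float array in [0, 1].
    At every level the quadrant with the highest score is kept; once the
    region is smaller than min_size on a side, its mean RGB value is returned.
    """
    region = img
    while min(region.shape[:2]) > min_size:
        h, w = region.shape[0] // 2, region.shape[1] // 2
        quadrants = [region[:h, :w], region[:h, w:],
                     region[h:, :w], region[h:, w:]]
        # Score = mean brightness minus standard deviation, a common criterion
        # that favors bright yet flat (sky-like) regions.
        scores = [q.mean() - q.std() for q in quadrants]
        region = quadrants[int(np.argmax(scores))]
    return region.reshape(-1, 3).mean(axis=0)
```

Because the per-quadrant statistics can be accumulated as pixels stream in, a hardware realization of this scheme needs no full-frame buffer, which is the property highlighted in the bullet above.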
2. Preliminaries
2.1. Koschmieder’s Law
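For reference, the haze formation model derived from Koschmieder’s law [9], which single-image visibility restoration inverts, is commonly written as

```latex
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},
```

where $I$ is the observed hazy image, $J$ the haze-free scene radiance, $A$ the global atmospheric light, $t$ the transmission, $\beta$ the atmospheric scattering coefficient, and $d$ the scene depth. This is the standard formulation used throughout the dehazing literature; the exact notation in Section 2.1 may differ.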
2.2. Related Work
3. Proposed Algorithm
- a solution to the issue of false enlargement of white objects,
- an image buffer-free parallel computing scheme for atmospheric light estimation,
- and an optimized merging sorting network to implement the modified hybrid median filter.
3.1. Improved Color Attenuation Prior
3.1.1. Enhanced Equidistribution for a More Reliable Training Dataset
3.1.2. Adaptive Constraints for the Transmission Map
3.1.3. Solutions for Background Noise and Color Distortion
3.1.4. Adaptive Tone Remapping
3.2. Atmospheric Light Estimation and Compensation Scheme for False Enlargement of White Objects
3.3. Experimental Validation
3.3.1. Quantitative Evaluation
3.3.2. Qualitative Evaluation
4. A 4K-Capable Hardware Accelerator
4.1. Overall Architecture
- RGB-to-HSV conversion,
- low-pass filtering on the saturation channel to suppress background noise,
- and depth map estimation using Equation (4), whose parameters are predetermined offline via maximum likelihood estimation (MLE); a software sketch of this step follows the list.
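In the color attenuation prior framework of Zhu et al. [13], on which this work builds, the depth model is a linear function of the HSV value (brightness) and saturation channels. The sketch below assumes Equation (4) takes that form; the default coefficients are the values commonly reported for [13] and merely stand in for the parameters that the proposed method predetermines offline via MLE.

```python
import numpy as np

def depth_map_cap(img_rgb, theta=(0.121779, 0.959710, -0.780245)):
    """Scene depth under the color attenuation prior (illustrative sketch).

    img_rgb: H x W x 3 float array in [0, 1].
    theta:   coefficients of the linear depth model, learned offline via MLE;
             the defaults are the values reported for Zhu et al. [13].
    """
    v = img_rgb.max(axis=2)                                   # HSV value channel
    s = np.where(v > 0, (v - img_rgb.min(axis=2)) / np.maximum(v, 1e-6), 0.0)
    t0, t1, t2 = theta
    return t0 + t1 * v + t2 * s                               # d = t0 + t1*V + t2*S
```

The transmission map then follows from Koschmieder’s law as $t(x) = e^{-\beta d(x)}$, subject to the adaptive constraints of Section 3.1.2.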
4.2. Optimized Merging Sorting Network-Based Architecture for the Modified Hybrid Median Filter
- Only sorting pixels within one of the two small windows, e.g., the cross window, to identify the corresponding median.
- For the diagonal window, sorting corresponding pixels except for the central one and then merging them with the delayed central pixel to identify the median.
- For the square window, only sorting those pixels that have not been sorted during the previous two steps and merging them with the two sorted sequences to identify the corresponding median.
- Lastly, selecting the final median from the medians corresponding to the three windows (a behavioral software sketch of this data flow follows this list).
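Functionally, the filter computes the medians of a cross-shaped, a diagonal, and a square window and then selects the output from those three medians. The behavioral sketch below uses plain sorts, a 5 × 5 window, and a median-of-three-medians selection, which are assumptions based on the classical hybrid median filter rather than the exact modified variant; the hardware instead reuses partial results through the sorting and merging networks described above.

```python
import numpy as np

def hybrid_median_5x5(patch):
    """Behavioral model of a 5 x 5 hybrid median filter (illustrative sketch).

    patch: 5 x 5 array centered on the pixel of interest.
    Plain sorts (np.median) are used for clarity; a hardware design computes
    the same three medians with sorting and merging networks.
    """
    assert patch.shape == (5, 5)
    anti = np.diag(np.fliplr(patch))                  # anti-diagonal, center at index 2
    cross = np.concatenate([patch[2, :], patch[:2, 2], patch[3:, 2]])   # '+'-shaped window
    diag = np.concatenate([np.diag(patch), anti[:2], anti[3:]])         # 'x'-shaped window
    square = patch.ravel()                                              # full 5 x 5 window
    medians = [np.median(cross), np.median(diag), np.median(square)]
    return np.median(medians)                         # final selection among the three medians
```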
4.3. Atmospheric Light Estimation and Compensation
4.4. Hardware Verification
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Chen, X.; Wang, S.; Shi, C.; Wu, H.; Zhao, J.; Fu, J. Robust Ship Tracking via Multi-view Learning and Sparse Representation. J. Navig. 2019, 72, 176–192.
- Sengee, N.; Sengee, A.; Choi, H.K. Image contrast enhancement using bi-histogram equalization with neighborhood metrics. IEEE Trans. Consum. Electron. 2010, 56, 2727–2734.
- Tan, S.F.; Isa, N.A.M. Exposure Based Multi-Histogram Equalization Contrast Enhancement for Non-Uniform Illumination Images. IEEE Access 2019, 7, 70842–70861.
- Ngo, D.; Kang, B. Preprocessing for High Quality Real-time Imaging Systems by Low-light Stretch Algorithm. J. Inst. Korean Electr. Electron. Eng. 2018, 22, 585–589.
- Ngo, D.; Lee, S.; Kang, B. Nonlinear Unsharp Masking Algorithm. In Proceedings of the 2020 International Conference on Electronics, Information, and Communication (ICEIC), Barcelona, Spain, 19–20 January 2020; pp. 1–6.
- Polesel, A.; Ramponi, G.; Mathews, V. Image enhancement via adaptive unsharp masking. IEEE Trans. Image Process. 2000, 9, 505–510.
- Fries, R.; Modestino, J. Image enhancement by stochastic homomorphic filtering. IEEE Trans. Signal Process. 1979, 27, 625–637.
- Kaufman, H.; Sid-Ahmed, M. Hardware realization of a 2D IIR semisystolic filter with application to real-time homomorphic filtering. IEEE Trans. Circuits Syst. Video Technol. 1993, 3, 2–14.
- Lee, Z.; Shang, S. Visibility: How Applicable is the Century-Old Koschmieder Model? J. Atmos. Sci. 2016, 73, 4573–4581.
- Fattal, R. Single image dehazing. ACM Trans. Graph. 2008, 27, 1–9.
- He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353.
- Tarel, J.P.; Hautière, N. Fast visibility restoration from a single color or gray level image. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2201–2208.
- Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process. 2015, 24, 3522–3533.
- Berman, D.; Treibitz, T.; Avidan, S. Non-local Image Dehazing. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682.
- Ngo, D.; Lee, S.; Nguyen, Q.H.; Ngo, T.M.; Lee, G.D.; Kang, B. Single Image Haze Removal from Image Enhancement Perspective for Real-Time Vision-Based Systems. Sensors 2020, 20, 5170.
- Papyan, V.; Elad, M. Multi-Scale Patch-Based Image Restoration. IEEE Trans. Image Process. 2016, 25, 249–261.
- Park, D.; Park, H.; Han, D.K.; Ko, H. Single image dehazing with image entropy and information fidelity. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 4037–4041.
- Ngo, D.; Lee, G.D.; Kang, B. A 4K-Capable FPGA Implementation of Single Image Haze Removal Using Hazy Particle Maps. Appl. Sci. 2019, 9, 3443.
- Levin, A.; Lischinski, D.; Weiss, Y. A Closed-Form Solution to Natural Image Matting. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 228–242.
- Li, C.; Zhang, X. Underwater Image Restoration Based on Improved Background Light Estimation and Automatic White Balance. In Proceedings of the 2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Beijing, China, 13–15 October 2018; pp. 1–5.
- Lee, S.; Yun, S.; Nam, J.H.; Won, C.S.; Jung, S.W. A review on dark channel prior based image dehazing algorithms. EURASIP J. Image Video Process. 2016, 2016, 4.
- Zhu, Y.; Tang, G.; Zhang, X.; Jiang, J.; Tian, Q. Haze removal method for natural restoration of images with sky. Neurocomputing 2018, 275, 499–510.
- Park, Y.; Kim, T.H. Fast Execution Schemes for Dark-Channel-Prior-Based Outdoor Video Dehazing. IEEE Access 2018, 6, 10003–10014.
- Tufail, Z.; Khurshid, K.; Salman, A.; Fareed Nizami, I.; Khurshid, K.; Jeon, B. Improved Dark Channel Prior for Image Defogging Using RGB and YCbCr Color Space. IEEE Access 2018, 6, 32576–32587.
- He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
- Gibson, K.B.; Vo, D.T.; Nguyen, T.Q. An Investigation of Dehazing Effects on Image and Video Coding. IEEE Trans. Image Process. 2012, 21, 662–673.
- Kim, G.J.; Lee, S.; Kang, B. Single Image Haze Removal Using Hazy Particle Maps. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2018, E101-A, 1999–2002.
- Tang, K.; Yang, J.; Wang, J. Investigating Haze-Relevant Features in a Learning Framework for Image Dehazing. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2995–3002.
- Ngo, D.; Lee, S.; Kang, B. Robust Single-Image Haze Removal Using Optimal Transmission Map and Adaptive Atmospheric Light. Remote Sens. 2020, 12, 2233.
- Choi, L.K.; You, J.; Bovik, A.C. Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging. IEEE Trans. Image Process. 2015, 24, 3888–3901.
- Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. DehazeNet: An End-to-End System for Single Image Haze Removal. IEEE Trans. Image Process. 2016, 25, 5187–5198.
- Li, C.; Guo, J.; Porikli, F.; Fu, H.; Pang, Y. A Cascaded Convolutional Neural Network for Single Image Dehazing. IEEE Access 2018, 6, 24877–24887.
- Golts, A.; Freedman, D.; Elad, M. Unsupervised Single Image Dehazing Using Dark Channel Prior Loss. IEEE Trans. Image Process. 2020, 29, 2692–2701.
- Ren, W.; Pan, J.; Zhang, H.; Cao, X.; Yang, M.H. Single Image Dehazing via Multi-scale Convolutional Neural Networks with Holistic Edges. Int. J. Comput. Vis. 2020, 128, 240–259.
- Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking Single-Image Dehazing and Beyond. IEEE Trans. Image Process. 2019, 28, 492–505.
- Ngo, D.; Lee, G.D.; Kang, B. Improved Color Attenuation Prior for Single-Image Haze Removal. Appl. Sci. 2019, 9, 4011.
- Ngo, D.; Kang, B. A New Data Preparation Methodology in Machine Learning-based Haze Removal Algorithms. In Proceedings of the 2019 International Conference on Electronics, Information, and Communication (ICEIC), Auckland, New Zealand, 22–25 January 2019; pp. 1–4.
- Ngo, D.; Kang, B. Improving Performance of Machine Learning-based Haze Removal Algorithms with Enhanced Training Database. J. Inst. Korean Electr. Electron. Eng. 2018, 22, 948–952.
- Cho, H.; Kim, G.J.; Jang, K.; Lee, S.; Kang, B. Color Image Enhancement Based on Adaptive Nonlinear Curves of Luminance Features. J. Semicond. Technol. Sci. 2015, 15, 60–67.
- Tarel, J.P.; Hautiere, N.; Caraffa, L.; Cord, A.; Halmaoui, H.; Gruyer, D. Vision Enhancement in Homogeneous and Heterogeneous Fog. IEEE Intell. Transp. Syst. Mag. 2012, 4, 6–20.
- Ancuti, C.; Ancuti, C.O.; De Vleeschouwer, C. D-HAZY: A dataset to evaluate quantitatively dehazing algorithms. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2226–2230.
- Ma, K.; Liu, W.; Wang, Z. Perceptual evaluation of single image dehazing algorithms. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 3600–3604.
- Ancuti, C.O.; Ancuti, C.; Timofte, R.; De Vleeschouwer, C. O-HAZE: A Dehazing Benchmark with Real Hazy and Haze-Free Outdoor Images. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 867–8678.
- Ancuti, C.O.; Ancuti, C.; Timofte, R.; De Vleeschouwer, C. I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images. arXiv 2018, arXiv:1804.05091.
- Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Yeganeh, H.; Wang, Z. Objective Quality Assessment of Tone-Mapped Images. IEEE Trans. Image Process. 2013, 22, 657–667.
- Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
- Hautiere, N.; Tarel, J.P.; Aubert, D.; Dumont, E. Blind Contrast Enhancement Assessment by Gradient Ratioing at Visible Edges. Image Anal. Stereol. 2008, 27, 87–95.
- Jack, K. Chapter 9 - NTSC and PAL Digital Encoding and Decoding. In Video Demystified, 4th ed.; Jack, K., Ed.; Newnes: Newton, MA, USA, 2005; pp. 394–471.
- STD90 Samsung 0.35 μm 3.3 V CMOS Standard Cell Library for Pure Logic/MDL Products. Available online: https://www.chipfind.net/datasheet/samsung/std90.htm (accessed on 9 May 2019).
- Knuth, D.E. The Art of Computer Programming, Volume 3: Sorting and Searching, 2nd ed.; Addison Wesley Longman Publishing Co., Inc.: Upper Saddle River, NJ, USA, 1998.
- Zynq-7000 SoC Data Sheet: Overview (DS190). Available online: https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-7000-Overview.pdf (accessed on 12 May 2019).
- IEEE Standard for Verilog Hardware Description Language; IEEE Std 1364-2005; 2006.
- Park, Y.; Kim, T.H. A video dehazing system based on fast airlight estimation. In Proceedings of the 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Montreal, QC, Canada, 14–16 November 2017; pp. 779–783.
Pixel | RGB Values (Before Dehazing) | RGB Values (After Dehazing)
---|---|---
A | [0.7529, 0.7529, 0.7529] | [0.8000, 0.8000, 0.8000]
B | [0.9804, 0.7961, 0.6235] | [1.0000, 0.8549, 0.5020]
C | [0.9373, 0.7569, 0.5961] | [1.0000, 0.7804, 0.4627]
D | [0.9922, 0.8078, 0.6314] | [1.0000, 0.8784, 0.5176]
Line | Diameter in the Input Image (pixels) | Diameter in ICAP’s Output (pixels) | Diameter in ICAP’s Output with the Proposed Solution (pixels)
---|---|---|---
157 | 20 | 22 | 20
184 | Left = 18, Right = 21 | Left = 21, Right = 23 | Left = 18, Right = 21
Method | Haze Type | SSIM | TMQI | FSIMc | FADE
---|---|---|---|---|---
He et al. [11] | Homogeneous | 0.6653 | 0.7639 | 0.8168 | 1.0177
 | Heterogeneous | 0.5374 | 0.6894 | 0.7251 | 1.2793
 | Cloudy Homogeneous | 0.5349 | 0.6849 | 0.7222 | 1.2587
 | Cloudy Heterogeneous | 0.6500 | 0.7781 | 0.8343 | 1.0792
 | Overall Average | 0.5969 | 0.7291 | 0.7746 | 1.1587
Tarel et al. [12] | Homogeneous | 0.7096 | 0.7259 | 0.7833 | 0.9307
 | Heterogeneous | 0.6970 | 0.7310 | 0.7725 | 1.4961
 | Cloudy Homogeneous | 0.6719 | 0.7312 | 0.7567 | 1.3583
 | Cloudy Heterogeneous | 0.7431 | 0.7373 | 0.8104 | 1.1021
 | Overall Average | 0.7054 | 0.7314 | 0.7807 | 1.2218
Zhu et al. [13] | Homogeneous | 0.5651 | 0.7533 | 0.7947 | 0.5527
 | Heterogeneous | 0.5519 | 0.7254 | 0.7845 | 0.9599
 | Cloudy Homogeneous | 0.5310 | 0.7080 | 0.7764 | 0.8267
 | Cloudy Heterogeneous | 0.5412 | 0.7674 | 0.8117 | 0.6752
 | Overall Average | 0.5473 | 0.7385 | 0.7918 | 0.7536
Kim et al. [27] | Homogeneous | 0.5949 | 0.7320 | 0.8048 | 0.9675
 | Heterogeneous | 0.6245 | 0.7037 | 0.7805 | 1.6836
 | Cloudy Homogeneous | 0.6124 | 0.7015 | 0.7751 | 1.5741
 | Cloudy Heterogeneous | 0.6078 | 0.7343 | 0.8135 | 1.0774
 | Overall Average | 0.6099 | 0.7179 | 0.7935 | 1.3256
Ngo et al. [36] | Homogeneous | 0.7022 | 0.7475 | 0.8013 | 0.7825
 | Heterogeneous | 0.7089 | 0.7318 | 0.7919 | 1.1610
 | Cloudy Homogeneous | 0.6918 | 0.7268 | 0.7854 | 1.0711
 | Cloudy Heterogeneous | 0.7253 | 0.7539 | 0.8152 | 0.8895
 | Overall Average | 0.7070 | 0.7400 | 0.7984 | 0.9761
Proposed Algorithm | Homogeneous | 0.7039 | 0.7491 | 0.8020 | 0.7856
 | Heterogeneous | 0.7046 | 0.7339 | 0.7918 | 1.1485
 | Cloudy Homogeneous | 0.6864 | 0.7288 | 0.7860 | 1.0522
 | Cloudy Heterogeneous | 0.7282 | 0.7538 | 0.8153 | 0.8834
 | Overall Average | 0.7058 | 0.7414 | 0.7988 | 0.9674
Method | SSIM | TMQI | FSIMc | FADE |
---|---|---|---|---|
He et al. [11] | 0.8348 | 0.8631 | 0.9002 | 0.7422 |
Tarel et al. [12] | 0.7475 | 0.8000 | 0.8703 | 0.9504 |
Zhu et al. [13] | 0.7984 | 0.8206 | 0.8880 | 0.9745 |
Kim et al. [27] | 0.7520 | 0.8702 | 0.8590 | 0.8556 |
Ngo et al. [36] | 0.7691 | 0.8165 | 0.8787 | 0.7420 |
Proposed Algorithm | 0.7766 | 0.8373 | 0.8788 | 0.7325 |
Method | e | r | FADE |
---|---|---|---|
He et al. [11] | 0.39 | 1.57 | 0.56 |
Tarel et al. [12] | 1.30 | 2.15 | 0.53 |
Zhu et al. [13] | 0.78 | 1.17 | 0.83 |
Kim et al. [27] | 1.27 | 2.07 | 0.73 |
Ngo et al. [36] | 1.11 | 2.03 | 0.50 |
Proposed Algorithm | 1.16 | 2.03 | 0.46 |
Method | SSIM | TMQI | FSIMc | FADE |
---|---|---|---|---|
He et al. [11] | 0.7709 | 0.8403 | 0.8423 | 0.3719 |
Tarel et al. [12] | 0.7263 | 0.8416 | 0.7733 | 0.4013 |
Zhu et al. [13] | 0.6647 | 0.8118 | 0.7738 | 0.6531 |
Kim et al. [27] | 0.4702 | 0.6509 | 0.6869 | 1.1445 |
Ngo et al. [36] | 0.7322 | 0.8935 | 0.8219 | 0.3647 |
Proposed Algorithm | 0.7520 | 0.9017 | 0.8212 | 0.3612 |
Method | SSIM | TMQI | FSIMc | FADE |
---|---|---|---|---|
He et al. [11] | 0.6580 | 0.7319 | 0.8208 | 0.8328 |
Tarel et al. [12] | 0.7200 | 0.7740 | 0.8055 | 0.8053 |
Zhu et al. [13] | 0.6864 | 0.7512 | 0.8252 | 1.0532 |
Kim et al. [27] | 0.6424 | 0.7026 | 0.7879 | 1.7480 |
Ngo et al. [36] | 0.7600 | 0.7892 | 0.8482 | 1.1277 |
Proposed Algorithm | 0.7781 | 0.8122 | 0.8655 | 0.8556 |
Method \ Image Size | 640 × 480 | 800 × 600 | 1024 × 768 | 1920 × 1080 | 4096 × 2160 |
---|---|---|---|---|---|
He et al. [11] | 12.64 | 19.94 | 32.37 | 94.25 | 470.21 |
Tarel et al. [12] | 0.28 | 0.59 | 0.76 | 1.51 | 9.02 |
Zhu et al. [13] | 0.22 | 0.34 | 0.55 | 1.51 | 6.39 |
Kim et al. [27] | 0.16 | 0.29 | 0.43 | 1.01 | 4.81 |
Ngo et al. [36] | 0.17 | 0.31 | 0.44 | 1.03 | 5.22 |
Proposed Algorithm | 0.18 | 0.34 | 0.49 | 1.13 | 5.77 |
Xilinx Design Analyzer (Device: xc7z045-2ffg900)

Slice Logic Utilization | Available | mHMF (BSN-Based): Used | Util. | mHMF (OMSN-Based): Used | Util. | mHMF (BSN-Based): Used | Util. | mHMF (OMSN-Based): Used | Util.
---|---|---|---|---|---|---|---|---|---
Slice Registers (#) | 437,200 | 4916 | 1.12% | 4056 | 0.93% | 11,139 | 2.55% | 9344 | 2.14%
Slice LUTs (#) | 218,600 | 4599 | 2.10% | 3771 | 1.73% | 9745 | 4.46% | 8427 | 3.85%
Used as Memory (#) | 70,400 | 74 | 0.11% | 124 | 0.18% | 104 | 0.15% | 234 | 0.33%
RAM36E1/FIFO36E1s (#) | 545 | 4 | 0.73% | 4 | 0.73% | 6 | 1.10% | 6 | 1.10%
Minimum Period | | 2.800 ns | | 2.542 ns | | 2.803 ns | | 2.547 ns |
Maximum Frequency | | 357.143 MHz | | 393.391 MHz | | 356.761 MHz | | 392.619 MHz |
Xilinx Design Analyzer (Device: xc7z045-2ffg900)

Slice Logic Utilization | Available | Used | Utilization
---|---|---|---
Slice Registers (#) | 437,200 | 57,848 | 13.23%
Slice LUTs (#) | 218,600 | 53,569 | 24.51%
RAM36E1/FIFO36E1s (#) | 545 | 58 | 10.64%
RAM18E1/FIFO18E1s (#) | 1090 | 25 | 2.29%
Minimum Period | 3.68 ns | |
Maximum Frequency | 271.67 MHz | |
Video Resolution | Frame Size | Required Clock Cycles (#) | Processing Speed (fps)
---|---|---|---
Full HD (FHD) | | 2,076,601 | 130.8
Quad HD (QHD) | | 3,690,401 | 73.6
4K | UW4K | 6,149,441 | 44.2
4K | UHD TV | 8,300,401 | 32.7
4K | DCI 4K | 8,853,617 | 30.7
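The processing speeds in this table follow directly from dividing the maximum frequency of 271.67 MHz reported above by the number of clock cycles required per frame; for example, for DCI 4K:

```latex
\text{fps} = \frac{f_{\max}}{N_{\text{clk}}} = \frac{271.67 \times 10^{6}\ \text{Hz}}{8{,}853{,}617\ \text{cycles/frame}} \approx 30.7.
```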
Hardware Utilization | Park et al. [54] | Ngo et al. [18] | Proposed Design |
---|---|---|---|
Registers (#) | 53,400 | 70,864 | 57,848 |
LUTs (#) | 64,000 | 56,664 | 53,569 |
DSPs (#) | 42 | 0 | 0 |
Memory (Mbits) | 3.2 | 1.5 | 2.4 |
Maximum Processing Rate (Mpixel/s) | 88.70 | 236.29 | 271.67 |
Maximum Attainable Video Resolution | SVGA | DCI 4K | DCI 4K |