UIR-Net: A Simple and Effective Baseline for Underwater Image Restoration and Enhancement
Abstract
1. Introduction
1. In this paper, we present a novel, end-to-end, lightweight underwater image restoration network named UIR-Net, which can generate context-rich and spatially accurate outputs.
2. UIR-Net is the first network to combine underwater image enhancement and underwater image restoration, making it a pioneering study of practical significance for real-world applications and deployments.
3. This article proposes the Marine Spot Impurity Removal Benchmarking (MSIRB) dataset and the UIEBD-Snow dataset.
2. Related Works
2.1. Underwater Image Restoration
2.2. Underwater Image Enhancement
3. Method
3.1. Channel Residual Prior
3.2. Gradient Strategy
3.3. Loss Function
4. Experiments
4.1. Datasets of Underwater Restoration
1. Marine Snow Removal Benchmarking Dataset (MSRB). In this section, we present the specifications of the marine snow artifacts synthesized in the MSRB dataset [17], which was built mainly for two general marine snow removal tasks: MSR task 1 is dedicated to removing small-sized artifacts, while MSR task 2 copes with artifacts of various sizes. Obviously, handling underwater snow of multiple sizes is much more difficult than focusing only on small-sized particles. For each MSR task, the corresponding sub-dataset is composed of a training set with 2300 image pairs and a test set with 400 pairs, all at a resolution of 384 × 384 pixels. Each image pair includes an original underwater image, which serves as the ground truth, and a version containing synthetic marine snow artifacts. Each composite image is augmented with N marine snow particles, where N is drawn from a discrete uniform distribution U{100, 600}. Based on our preliminary observations, H-type and V-type marine snow spots are randomly generated in each synthetic image with probabilities of 0.7 and 0.3, respectively. Most of the parameters are chosen based on our observations of real-world collected images and the corresponding appearance of marine snow (a small, hypothetical synthesis sketch following this specification is given after this list).
2. Marine Spot Impurity Removal Benchmarking Dataset (MSIRB). At present, there is no accessible underwater dataset of ocean snow images that can be used to train and test deep neural networks to eliminate ocean snow particles, which is a major inconvenience for ongoing research on marine light spot removal algorithms. To capture the diverse morphological features of real-world ocean snow particles and the complex composition of ocean snow scenes, we propose a new dataset called the Marine Spot Impurity Removal Benchmarking Dataset (MSIRB). The MSIRB dataset contains (1) synthetic ocean light spot images, (2) the corresponding real images and (3) ocean light spot masks. Following [47], we use its masks to produce the underwater light spot images; the basic masks correspond to small, medium and large grain sizes. The dataset is shown in Figure 4. It is important to note that the ground truth and input images of the MSIRB dataset differ only in the presence or absence of light spot impurities, not in color.
3. An Underwater Image Enhancement Benchmark Dataset and Beyond with Snow (UIEBD-Snow). To explore the generalization of our work to image enhancement tasks, we apply the same operation used for the MSIRB dataset to the UIEBD dataset, whose potential ground truths were generated by applying a dozen cutting-edge image enhancement methods. Various morphological features of real marine snow particles are fused into UIEBD to build our new dataset, UIEBD-Snow, which contains 890 underwater images with corresponding reference maps. It is worth noting that the difference between UIEBD-Snow and MSIRB lies mainly in color correction; example images are shown in Figure 4.
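To make the sampling scheme above concrete, the following is a minimal, hypothetical sketch of how bright particles could be composited onto a clean underwater image with N drawn from U{100, 600} and a 0.7/0.3 split between H-type and V-type spots. The particle shapes, radii and alpha blending below are illustrative assumptions only and do not reproduce the exact generators used for MSRB [17] or MSIRB.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_synthetic_snow(image: np.ndarray) -> np.ndarray:
    """Composite random bright particles onto an underwater image.

    Hypothetical sketch: N ~ U{100, 600} particles, each H-type with
    probability 0.7 (small) or V-type with probability 0.3 (larger).
    """
    out = image.astype(np.float32)
    h, w = out.shape[:2]
    n_particles = int(rng.integers(100, 601))        # N ~ U{100, 600}
    for _ in range(n_particles):
        kind = "H" if rng.random() < 0.7 else "V"    # 0.7 H-type / 0.3 V-type
        cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
        # Illustrative radii: H-type spots are small, V-type ones larger.
        r = int(rng.integers(1, 4)) if kind == "H" else int(rng.integers(3, 9))
        y0, y1 = max(cy - r, 0), min(cy + r + 1, h)
        x0, x1 = max(cx - r, 0), min(cx + r + 1, w)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        # Soft Gaussian-like brightness falloff around the particle centre.
        spot = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * (r / 2.0) ** 2))
        alpha = 0.8 * (spot[..., None] if out.ndim == 3 else spot)
        out[y0:y1, x0:x1] = (1.0 - alpha) * out[y0:y1, x0:x1] + alpha * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```

Calling add_synthetic_snow on a clean 384 × 384 frame would yield a degraded input, with the clean frame kept as the ground truth; a real generator would additionally vary particle transparency, shape and color according to the masks described above.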
4.2. Training Parameter Settings
4.3. Evaluation Metrics
1. Full-Reference Evaluation. We adopt the standard metrics PSNR, SSIM and RMSE [48] for full-reference evaluation. PSNR (Peak Signal-to-Noise Ratio) measures distortion: the higher the PSNR, the less the image is distorted. It is computed from the mean squared error (MSE) between the current image I and the reference image K, where H and W denote the height and width of the image, respectively. SSIM (Structural Similarity) measures the structural similarity between two images x and y. RMSE (Root Mean Square Error) is the square root of the MSE. The standard definitions of these metrics are given after this list.
2. Non-Reference Evaluation. UIR-Net is a supervised network, so each input has a corresponding target image; nevertheless, we also adopt UCIQE [49] and UIQM [50] as non-reference metrics, which are commonly used for underwater image quality assessment. UCIQE is a linear combination of color intensity (chroma), saturation and contrast, used to quantitatively evaluate the non-uniform color cast, blurring and low contrast of underwater images; the higher the UCIQE, the better the underwater image quality. UIQM (underwater image quality measure) combines colorfulness, sharpness and contrast components. The general forms of both metrics are also given after this list.
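For reference, the definitions behind the metrics above are reproduced below. The MSE, RMSE, PSNR and SSIM expressions are the standard ones consistent with [48]; UCIQE and UIQM are shown only in their general linear-combination forms, with the weighting coefficients as published in [49] and [50].

```latex
% Full-reference metrics for an image I and its reference K of size H x W
\[
\mathrm{MSE} = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\bigl(I(i,j)-K(i,j)\bigr)^{2},
\qquad
\mathrm{RMSE} = \sqrt{\mathrm{MSE}},
\qquad
\mathrm{PSNR} = 10\log_{10}\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}}
\]
\[
\mathrm{SSIM}(x,y) =
\frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}
     {(\mu_x^{2} + \mu_y^{2} + c_1)(\sigma_x^{2} + \sigma_y^{2} + c_2)}
\]
% Non-reference metrics, general linear-combination forms
\[
\mathrm{UCIQE} = c_1\,\sigma_c + c_2\,\mathrm{con}_l + c_3\,\mu_s,
\qquad
\mathrm{UIQM} = c_1\,\mathrm{UICM} + c_2\,\mathrm{UISM} + c_3\,\mathrm{UIConM}
\]
```

Here MAX_I is the maximum pixel value (255 for 8-bit images); the μ, σ² and σ_xy terms are local means, variances and covariance with stabilizing constants c_1 and c_2; σ_c is the standard deviation of chroma, con_l the contrast of luminance and μ_s the average saturation; and UICM, UISM and UIConM are the colorfulness, sharpness and contrast components defined in [50]. As a usage note, the full-reference metrics can be computed with off-the-shelf tools; the snippet below is a small sketch using NumPy and scikit-image (version ≥ 0.19 for the channel_axis argument) on 8-bit RGB arrays, not the evaluation code used in this paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def full_reference_metrics(pred: np.ndarray, target: np.ndarray):
    """PSNR, SSIM and RMSE for uint8 RGB images (illustrative sketch)."""
    psnr = peak_signal_noise_ratio(target, pred, data_range=255)
    ssim = structural_similarity(target, pred, channel_axis=-1, data_range=255)
    diff = pred.astype(np.float64) - target.astype(np.float64)
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    return psnr, ssim, rmse
```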
4.4. Comparison with State-of-the-Art Methods on Underwater Image Restoration
4.5. Comparison with State-of-the-Art Methods on Underwater Image Enhancement
4.6. Qualitative Evaluations
4.6.1. Quantitative Comparisons of Underwater Image Restoration on the MSRB Dataset
4.6.2. Quantitative Comparisons of Underwater Restoration on MSIRB Dataset
4.6.3. Quantitative Comparisons of Underwater Enhancement on UIEBD-Snow Dataset
4.6.4. FLOPs and Params Comparisons with MPRNet and DGUNet
4.7. Ablation Study
4.7.1. Ablation Study of Channel Prior and Gradient Descent
4.7.2. Ablation Study of
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Hambarde, P.; Murala, S.; Dhall, A. UW-GAN: Single-Image Depth Estimation and Image Enhancement for Underwater Images. IEEE Trans. Instrum. Meas. 2021, 70, 1–12.
- Tao, Y.; Dong, L.; Xu, L.; Xu, W. Effective solution for underwater image enhancement. Opt. Express 2021, 29, 32412–32438.
- Yuan, J.; Cao, W.; Cai, Z.; Su, B. An Underwater Image Vision Enhancement Algorithm Based on Contour Bougie Morphology. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8117–8128.
- Jian, M.J.; Li, D.; Zhang, J.Q. Underwater Image Restoration Based on Non-Uniform Incident Light Imaging Model. Acta Opt. Sin. 2021, 41, 1501003.
- Zhao, X.; Jin, T.; Qu, S. Deriving inherent optical properties from background color and underwater image enhancement. Ocean. Eng. 2015, 94, 163–172.
- Drews, P.; Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission estimation in underwater single images. In Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops (ICCVW), Sydney, NSW, Australia, 1–8 December 2013; IEEE: New York, NY, USA, 2013; pp. 825–830.
- Zhang, X.; Chen, F.; Wang, C.; Tao, M.; Jiang, G.P. Sienet: Siamese expansion network for image extrapolation. IEEE Signal Process. Lett. 2020, 27, 1590–1594.
- Yan, S.; Zhang, X. PCNet: Partial convolution attention mechanism for image inpainting. Int. J. Comput. Appl. 2021, 44, 738–745.
- Zhang, X.F.; Gu, C.C.; Zhu, S.Y. SpA-Former: Transformer image shadow detection and removal via spatial attention. arXiv 2022, arXiv:2206.10910.
- Zhang, X.; Wu, S.; Ding, H.; Li, Z. Image extrapolation based on multi-column convolutional attention network. In Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 12–14 June 2020; Volume 1, pp. 1938–1942.
- Shen, R.; Zhang, X.; Xiang, Y. AFFNet: Attention mechanism network based on fusion feature for image cloud removal. Int. J. Pattern Recognit. Artif. Intell. 2022, 36, 2254014.
- Fabbri, C.; Islam, M.J.; Sattar, J. Enhancing underwater imagery using generative adversarial networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 7159–7165.
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
- Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 2017, 3, 387–394.
- Jiang, Z.; Li, Z.; Yang, S.; Fan, X.; Liu, R. Target Oriented Perceptual Adversarial Fusion Network for Underwater Image Enhancement. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6584–6598.
- Jiao, Q.; Liu, M.; Li, P.; Dong, L.; Hui, M.; Kong, L.; Zhao, Y. Underwater image restoration via non-convex non-smooth variation and thermal exchange optimization. J. Mar. Sci. Eng. 2021, 9, 570.
- Sato, Y.; Ueda, T.; Tanaka, Y. Marine Snow Removal Benchmarking Dataset. arXiv 2021, arXiv:2103.14249.
- Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389.
- Qian, L.J.; Jin, H.H.; Fan, Z.G.; Zhuang, Z.J.; Gong, K.Q. Underwater Image Restoration Method Suppressing Interference of Light Source in Field of View. Acta Opt. Sin. 2021, 41, 1801001.
- Knauer, G.; Hebel, D.; Cipriano, F. Marine snow: Major site of primary production in coastal waters. Nature 1982, 300, 630–631.
- Farhadifard, F.; Radolko, M.; von Lukas, U.F. Single image marine snow removal based on a supervised median filtering scheme. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Porto, Portugal, 27 February–1 March 2017; pp. 280–287.
- Banerjee, S.; Sanyal, G.; Ghosh, S.; Ray, R.; Shome, S.N. Elimination of marine snow effect from underwater image-an adaptive probabilistic approach. In Proceedings of the IEEE Students’ Conference on Electrical, Electronics and Computer Science, Bhopal, India, 1–2 March 2014; pp. 1–4.
- Farhadifard, F.; Radolko, M.; von Lukas, U.F. Marine snow detection and removal: Underwater image restoration using background modeling. In Proceedings of the 25th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in Co-Operation with Eurographics Association, Prague, Czech Republic, 29 May–2 June 2017; pp. 81–89.
- Cyganek, B.; Gongola, K. Real-time marine snow noise removal from underwater video sequences. J. Electron. Imaging 2018, 27, 043002.
- Wang, Y.; Yu, X.; An, D.; Wei, Y. Underwater image enhancement and marine snow removal for fishery based on integrated dual-channel neural network. Comput. Electron. Agric. 2021, 186, 106182.
- Fu, X.; Huang, J.; Zeng, D.; Huang, Y.; Ding, X.; Paisley, J. Removing rain from single images via a deep detail network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3855–3863.
- Ren, D.; Zuo, W.; Hu, Q.; Zhu, P.; Meng, D. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3937–3946.
- Tu, Z.; Talebi, H.; Zhang, H.; Yang, F.; Milanfar, P.; Bovik, A.; Li, Y. Maxim: Multi-axis mlp for image processing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 5769–5780.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.-H. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 5728–5739.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.-H.; Shao, L. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 14821–14831.
- Mou, C.; Wang, Q.; Zhang, J. Deep Generalized Unfolding Networks for Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 17399–17410.
- Wei, G.Y.Z.; Chen, S.Y.; Liu, Y.T. Survey of underwater image enhancement and restoration algorithms. Appl. Res. Comput. 2021, 38, 2561–2569, 2589.
- Perez, J.; Attanasio, A.C.; Nechyporenko, N.; Sanz, P.J. A deep learning approach for underwater image enhancement. In Proceedings of the 2017 International Work-Conference on the Interplay Between Natural and Artificial Computation (IWINAC), Corunna, Spain, 19–23 June 2017; pp. 183–192.
- Wang, Y.; Zhang, J.; Cao, Y.; Wang, Z. A deep CNN method for underwater image enhancement. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1382–1386.
- Li, C.; Anwar, S.; Porikli, F. Underwater scenes prior inspired deep underwater image and video enhancement. Pattern Recognit. 2020, 98, 107038.
- Wang, N.; Zheng, H.; Zheng, B. Underwater Image Restoration via Maximum Attenuation Identification. IEEE Access 2017, 5, 18941–18952.
- Peng, Y.-T.; Cao, K.; Cosman, P.C. Generalization of the Dark Channel Prior for Single Image Restoration. IEEE Trans. Image Process. 2018, 27, 2856–2868.
- Carlevaris-Bianco, N.; Mohan, A.; Eustice, R.M. Initial results in underwater single image dehazing. In Proceedings of the MTS/IEEE Seattle, OCEANS 2010, Seattle, WA, USA, 20–23 September 2010; IEEE: New York, NY, USA, 2010; pp. 1–8.
- Song, W.; Wang, Y.; Huang, D.; Tjondronegoro, D. A rapid scene depth estimation model based on underwater light attenuation prior for underwater image restoration. In Proceedings of the 2018 Pacific Rim Conference on Multimedia (PCM), Hefei, China, 21–22 September 2018; Springer: Berlin, Germany, 2018; pp. 678–688.
- Peng, Y.-T.; Cosman, P.C. Underwater Image Restoration Based on Image Blurriness and Light Absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594.
- Zhang, B.; Jin, S.; Xia, Y.; Huang, Y.; Xiong, Z. Attention Mechanism Enhanced Kernel Prediction Networks for Denoising of Burst Images. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 2083–2087.
- Chen, D.; He, Z.; Cao, Y.; Yang, J.; Cao, Y.; Yang, M.Y.; Zhuang, Y. Deep neural network for fast and accurate single image super-resolution via channel-attention-based fusion of orientation-aware features. arXiv 2019, arXiv:1912.04016.
- Li, H.; Qiu, K.; Chen, L.; Mei, X.; Hong, L.; Tao, C. SCAttNet: Semantic segmentation network with spatial and channel attention mechanism for high-resolution remote sensing images. IEEE Geosci. Remote Sens. Lett. 2020, 18, 905–909.
- Li, R.; Tan, R.T.; Cheong, L.F. Robust optical flow in rainy scenes. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Volume 2, pp. 288–304.
- Garg, K.; Nayar, S.K. Photometric model of a rain drop. In CMU Technical Report; Citeseer: Princeton, NJ, USA, 2003; Volume 4.
- Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 694–711.
- Chen, W.T.; Fang, H.Y.; Ding, J.J.; Tsai, C.C.; Kuo, S.Y. JSTASR: Joint size and transparency-aware snow removal algorithm based on modified partial convolution and veiling effect removal. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 754–770.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071.
- Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Oceanic Eng. 2016, 41, 541–551.
- Fu, X.; Huang, J.; Ding, X.; Liao, Y.; Paisley, J. Clearing the skies: A deep network architecture for single-image rain removal. IEEE Trans. Image Process. 2017, 26, 2944–2956.
- Fu, Z.; Wang, W.; Huang, Y.; Ding, X.; Ma, K.-K. Uncertainty Inspired Underwater Image Enhancement. In Proceedings of the European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022; Springer: Cham, Switzerland, 2022; pp. 465–482.
- Han, J.; Shoeiby, M.; Malthus, T.; Botha, E.; Anstee, J.; Anwar, S.; Wei, R.; Petersson, L.; Armin, M.A. Single underwater image restoration by contrastive learning. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 2385–2388.
- Chen, X.; Zhang, P.; Quan, L.; Yi, C.; Lu, C. Underwater image enhancement based on deep learning and image formation model. arXiv 2021, arXiv:2101.00991.
- Islam, M.J.; Xia, Y.; Sattar, J. Fast underwater image enhancement for improved visual perception. IEEE Robot. Autom. Lett. 2020, 5, 3227–3234.
Method | PSNR | SSIM | RMSE | UIQM | UCIQE |
---|---|---|---|---|---|
Natural image restoration | | | | | |
AirNet [51] | 24.359 | 0.954 | 6.745 | 3.418 | 0.549 |
DB-ResNet [26] | 23.258 | 0.920 | 9.060 | 3.531 | 0.553 |
PReNet [27] | 21.837 | 0.899 | 13.932 | 3.722 | 0.524 |
Maxim [28] | 24.578 | 0.931 | 7.836 | 3.606 | 0.553 |
Restormer [29] | 24.411 | 0.954 | 6.783 | 3.977 | 0.550 |
DGUNet [31] | 33.292 | 0.977 | 3.408 | 3.755 | 0.567 |
MPRNet [30] | 35.322 | 0.984 | 2.659 | 3.715 | 0.566 |
Ours | 34.464 | 0.984 | 2.649 | 3.877 | 0.576 |
Method | PSNR | SSIM | RMSE | UIQM | UCIQE |
---|---|---|---|---|---|
Natural image restoration | | | | | |
AirNet [51] | 20.536 | 0.922 | 9.878 | 3.761 | 0.575 |
DB-ResNet [26] | 20.506 | 0.897 | 10.743 | 3.758 | 0.574 |
PReNet [27] | 20.310 | 0.888 | 14.237 | 4.075 | 0.550 |
Maxim [28] | 23.599 | 0.924 | 8.316 | 3.832 | 0.565 |
Restormer [29] | 20.396 | 0.927 | 9.817 | 3.810 | 0.574 |
DGUNet [31] | 33.785 | 0.984 | 2.584 | 3.911 | 0.562 |
MPRNet [30] | 33.561 | 0.985 | 2.592 | 3.725 | 0.554 |
Ours | 34.513 | 0.986 | 2.391 | 3.837 | 0.565 |
Method | PSNR | SSIM | RMSE | UIQM | UCIQE |
---|---|---|---|---|---|
Underwater image enhancement | | | | | |
PUIE [52] | 16.926 | 0.723 | 18.030 | 3.594 | 0.606 |
CWR [53] | 15.374 | 0.556 | 23.276 | 4.620 | 0.507 |
IMF [54] | 17.006 | 0.707 | 18.904 | 3.354 | 0.633 |
FUnIE-GAN [55] | 15.619 | 0.486 | 24.762 | 5.106 | 0.596 |
Natural image restoration | | | | | |
MPRNet [30] | 20.027 | 0.775 | 15.048 | 3.400 | 0.598 |
DGUNet [31] | 20.711 | 0.795 | 14.068 | 3.363 | 0.597 |
Ours | 21.200 | 0.807 | 13.142 | 3.610 | 0.596 |
Method | FLOPs | Total Params | Training Time |
---|---|---|---|
DGUNet [31] | | 12,176,119 | 94,600 s |
MPRNet [30] | | 3,637,249 | 41,400 s |
UIR-Net | | 3,632,740 | 22,000 s |
| Residual prior | Proximal gradient descent | PSNR | SSIM | RMSE |
|---|---|---|---|---|
| | | 33.319 | 0.981 | 2.953 |
| √ | | 33.319 | 0.981 | 2.953 |
| | √ | 34.356 | 0.981 | 2.931 |
| √ | √ | 34.464 | 0.984 | 2.649 |
| Residual prior | Proximal gradient descent | PSNR | SSIM | RMSE |
|---|---|---|---|---|
| | | 32.477 | 0.982 | 2.912 |
| √ | | 33.295 | 0.983 | 2.649 |
| | √ | 34.121 | 0.984 | 2.765 |
| √ | √ | 34.513 | 0.986 | 2.391 |
Parameter | PSNR | SSIM | RMSE |
---|---|---|---|
= 0.1 | 20.203 | 0.781 | 14.030 |
= 0.05 | 19.371 | 0.791 | 13.301 |
= 0.02 | 20.039 | 0.797 | 13.904 |
= 0.01 | 21.200 | 0.807 | 13.142 |