Semantic-Guided Iterative Detail Fusion Network for Single-Image Deraining
Abstract
1. Introduction
- The degradation forms, shapes, and distributions of synthetic rain maps are significantly less diverse and complex than those found in real rain maps. As a result, networks trained on synthetic data often exhibit reduced robustness when faced with actual rain conditions.
- Pixel-by-pixel comparisons between the output image and ground truth tend to cause overfitting, preventing the network from effectively learning the degradation patterns and semantic information inherent in real scenes.
- (1) We utilize neural representations to obtain normalized degraded images and measure the gap between the fitted and original images to derive a degradation position indication matrix that guides detail extraction (a mask-computation sketch follows this list).
- (2) A semantic loss function is computed using a specialized semantic information extraction branch, which is designed to better capture partially obscured semantic information within the image content.
- (3) We design an iterative semantic-guided detail fusion module that progressively introduces details guided by the information inherent in the image itself, facilitating detail integration.
- (4) We present an effective training strategy for handling imperfectly matched real images, leveraging the semantic loss function and freezing the detail branch to prevent the overfitting that pixel-wise comparisons can cause (a fine-tuning sketch also follows this list).
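To make contribution (1) concrete, the sketch below shows one plausible way to derive a degradation position indication matrix: a small coordinate-based network is fitted to the rainy input, and the per-pixel gap between the fitted image and the input is normalized into a soft mask. This is a minimal PyTorch sketch under our own assumptions; the names (SirenINR, degradation_mask) and all hyperparameters are illustrative and are not taken from the paper's implementation.

```python
# Hedged sketch: fit a low-capacity implicit neural representation (INR) to the
# rainy image, then turn the per-pixel fitting gap into a soft indication mask.
import torch
import torch.nn as nn


class SirenINR(nn.Module):
    """Tiny coordinate MLP with sinusoidal activations (SIREN-style)."""

    def __init__(self, hidden=64, layers=3, omega=30.0):
        super().__init__()
        dims = [2] + [hidden] * layers + [3]  # (x, y) -> (r, g, b)
        self.linears = nn.ModuleList([nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:])])
        self.omega = omega

    def forward(self, coords):  # coords: (N, 2) in [-1, 1]
        x = coords
        for lin in self.linears[:-1]:
            x = torch.sin(self.omega * lin(x))
        return self.linears[-1](x)  # (N, 3) predicted RGB


def degradation_mask(rainy, steps=500, lr=1e-4):
    """Fit an INR to a (1, 3, H, W) rainy image and return a (1, H, W) mask that
    is large where the smooth INR fit disagrees with the input, i.e. where
    rain-like degradation is likely located."""
    _, _, h, w = rainy.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)      # (H*W, 2)
    target = rainy[0].permute(1, 2, 0).reshape(-1, 3)          # (H*W, 3)

    inr = SirenINR()
    opt = torch.optim.Adam(inr.parameters(), lr=lr)
    for _ in range(steps):  # short, low-capacity fit yields a smoothed image
        opt.zero_grad()
        loss = (inr(coords) - target).abs().mean()
        loss.backward()
        opt.step()

    with torch.no_grad():
        fitted = inr(coords).reshape(h, w, 3).permute(2, 0, 1)
        gap = (fitted - rainy[0]).abs().mean(dim=0, keepdim=True)   # per-pixel fitting error
        mask = (gap - gap.min()) / (gap.max() - gap.min() + 1e-6)   # normalize to [0, 1]
    return mask
```

Under these assumptions, a rainy tensor `x` of shape (1, 3, H, W) would give `mask = degradation_mask(x)`, which could then gate or weight the detail-extraction features.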
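Contributions (2) and (4) can likewise be illustrated with a hedged fine-tuning loop: a frozen pretrained feature extractor (VGG-16 here, standing in for the paper's dedicated semantic branch) supplies a semantic loss, and the detail branch is frozen so that imperfectly matched real pairs are never compared pixel by pixel. The attribute name `model.detail_branch`, the use of VGG-16, and the loop structure are assumptions made for illustration only.

```python
# Hedged sketch of real-image fine-tuning with a semantic loss and a frozen detail branch.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights


def build_semantic_extractor(device="cpu"):
    # VGG-16 up to an intermediate layer as a stand-in semantic feature extractor.
    feats = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].to(device).eval()
    for p in feats.parameters():        # the extractor itself is never trained
        p.requires_grad_(False)
    return feats


def semantic_loss(extractor, pred, target):
    """L1 distance between deep features of the derained output and the
    (possibly misaligned) real clean reference."""
    return F.l1_loss(extractor(pred), extractor(target))


def finetune_on_real_pairs(model, loader, device="cpu", lr=1e-5, lambda_sem=1.0):
    extractor = build_semantic_extractor(device)

    for p in model.detail_branch.parameters():   # freeze the detail branch
        p.requires_grad_(False)

    trainable = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(trainable, lr=lr)

    model.train()
    for rainy, clean in loader:                  # imperfectly matched real pairs
        rainy, clean = rainy.to(device), clean.to(device)
        pred = model(rainy)
        # Semantic-only objective: no pixel-wise term on misaligned real data.
        loss = lambda_sem * semantic_loss(extractor, pred, clean)
        opt.zero_grad()
        loss.backward()
        opt.step()
```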
2. Related Work
2.1. Single Image Deraining
2.2. Transformer and Attention
2.3. Neural Representation for Image Restoration
3. Proposed Method
3.1. Architecture
3.2. Mask Based on Implicit Neural Representations
3.3. Iterative Semantic-Guided Detail Fusion Module
3.4. Loss Function
4. Experiments
4.1. Implementation Specifications
4.2. Datasets
4.2.1. RainDS
4.2.2. Real-World Rainy Images Dataset
4.2.3. RSDV
4.2.4. RainDS-Low-Light
4.3. Comparisons to Existing Methods
4.4. Ablation Studies
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Jiang, Y.; Zhu, B.; Zhao, X.; Deng, W. Pixel-wise content attention learning for single-image deraining of autonomous vehicles. Expert Syst. Appl. 2023, 224, 119990.
- Chen, X.; Wei, C.; Yang, Y.; Luo, L.; Biancardo, S.A.; Mei, X. Personnel trajectory extraction from port-like videos under varied rainy interferences. IEEE Trans. Intell. Transp. Syst. 2024, 25, 6567–6579.
- Munsif, M.; Khan, S.U.; Khan, N.; Baik, S.W. Attention-based deep learning framework for action recognition in a dark environment. Hum.-Centric Comput. Inf. Sci. 2024, 14, 1–22.
- Munsif, M.; Afridi, H.; Ullah, M.; Khan, S.D.; Alaya Cheikh, F.; Sajjad, M. A Lightweight Convolution Neural Network for Automatic Disasters Recognition. In Proceedings of the 2022 10th European Workshop on Visual Information Processing (EUVIP), Lisbon, Portugal, 11–14 September 2022; pp. 1–6.
- Yi, Q.; Li, J.; Dai, Q.; Fang, F.; Zhang, G.; Zeng, T. Structure-preserving deraining with residue channel prior guidance. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 4238–4247.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14821–14831.
- Jiang, K.; Wang, Z.; Yi, P.; Chen, C.; Huang, B.; Luo, Y.; Ma, J.; Jiang, J. Multi-scale progressive fusion network for single image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8346–8355.
- Sivaanpu, A.; Thanikasalam, K. A Dual CNN Architecture for Single Image Raindrop and Rain Streak Removal. In Proceedings of the IEEE 2022 7th International Conference on Information Technology Research (ICITR), Moratuwa, Sri Lanka, 7–8 December 2022; pp. 1–6.
- Huang, J.; Liu, Y.; Zhao, F.; Yan, K.; Zhang, J.; Huang, Y.; Zhou, M.; Xiong, Z. Deep Fourier-based exposure correction network with spatial-frequency interaction. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 163–180.
- Wang, C.; Xing, X.; Wu, Y.; Su, Z.; Chen, J. DCSFN: Deep cross-scale fusion network for single image rain removal. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 1643–1651.
- Wang, C.; Wu, Y.; Su, Z.; Chen, J. Joint self-attention and scale-aggregation for self-calibrated deraining network. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 2517–2525.
- Wang, C.; Pan, J.; Wu, X.M. Online-updated high-order collaborative networks for single image deraining. In Proceedings of the AAAI Conference on Artificial Intelligence, Pomona, CA, USA, 24–28 October 2022; Volume 36, pp. 2406–2413.
- Babar, K.; Yaseen, M.U.; Al-Shamayleh, A.S.; Imran, M.; Al-Ghushami, A.H.; Akhunzada, A. LPN-IDD: A Lightweight Pyramid Network for Image Deraining and Detection. IEEE Access 2024, 12, 37103–37119.
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image restoration using Swin Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 1833–1844.
- Guo, Q.; Sun, J.; Juefei-Xu, F.; Ma, L.; Xie, X.; Feng, W.; Liu, Y.; Zhao, J. EfficientDerain: Learning pixel-wise dilation filtering for high-efficiency single-image deraining. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021; Volume 35, pp. 1487–1495.
- Yang, H.; Zhou, D.; Li, M.; Zhao, Q. A two-stage network with wavelet transformation for single-image deraining. Vis. Comput. 2023, 39, 3887–3903.
- Ragini, T.; Prakash, K.; Cheruku, R. DeTformer: A Novel Efficient Transformer Framework for Image Deraining. Circuits Syst. Signal Process. 2024, 43, 1030–1052.
- Chen, X.; Pan, J.; Dong, J.; Tang, J. Towards unified deep image deraining: A survey and a new benchmark. arXiv 2023, arXiv:2310.03535.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5728–5739.
- Chen, S.; Ye, T.; Bai, J.; Chen, E.; Shi, J.; Zhu, L. Sparse Sampling Transformer with Uncertainty-Driven Ranking for Unified Removal of Raindrops and Rain Streaks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 13106–13117.
- Wang, C.; Pan, J.; Wang, W.; Dong, J.; Wang, M.; Ju, Y.; Chen, J. PromptRestorer: A prompting image restoration method with degradation perception. Adv. Neural Inf. Process. Syst. 2023, 36, 8898–8912.
- Chen, X.; Pan, J.; Dong, J. Bidirectional Multi-Scale Implicit Neural Representations for Image Deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 17–21 June 2024; pp. 25627–25636.
- Quan, R.; Yu, X.; Liang, Y.; Yang, Y. Removing Raindrops and Rain Streaks in One Go. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 9147–9156.
- Shao, M.W.; Li, L.; Meng, D.Y.; Zuo, W.M. Uncertainty Guided Multi-Scale Attention Network for Raindrop Removal From a Single Image. IEEE Trans. Image Process. 2021, 30, 4828–4839.
- Zhang, K.; Li, D.; Luo, W.; Ren, W. Dual Attention-in-Attention Model for Joint Rain Streak and Raindrop Removal. IEEE Trans. Image Process. 2021, 30, 7608–7619.
- Luo, Y.; Xu, Y.; Ji, H. Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3397–3405.
- Li, Y.; Tan, R.T.; Guo, X.; Lu, J.; Brown, M.S. Rain streak removal using layer priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2736–2744.
- Li, Y.; Lu, J.; Chen, H.; Wu, X.; Chen, X. Dilated Convolutional Transformer for High-Quality Image Deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Vancouver, BC, Canada, 17–24 June 2023; pp. 4199–4207.
- Wang, H.; Xie, Q.; Zhao, Q.; Li, Y.; Liang, Y.; Zheng, Y.; Meng, D. RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 8668–8682.
- Chen, X.; Li, H.; Li, M.; Pan, J. Learning a Sparse Transformer Network for Effective Image Deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 5896–5905.
- Xia, C.; Wang, X.; Lv, F.; Hao, X.; Shi, Y. ViT-CoMer: Vision Transformer with convolutional multi-scale feature interaction for dense predictions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 5493–5502.
- Shen, X.; Yang, Z.; Wang, X.; Ma, J.; Zhou, C.; Yang, Y. Global-to-Local Modeling for Video-Based 3D Human Pose and Shape Estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2023; pp. 8887–8896.
- Liu, Y.; Schiele, B.; Vedaldi, A.; Rupprecht, C. Continual Detection Transformer for Incremental Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 23799–23808.
- Li, Y.; Fan, Y.; Xiang, X.; Demandolx, D.; Ranjan, R.; Timofte, R.; Van Gool, L. Efficient and Explicit Modelling of Image Hierarchies for Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 18278–18289.
- Strümpler, Y.; Postels, J.; Yang, R.; Gool, L.V.; Tombari, F. Implicit neural representations for image compression. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 74–91.
- Zheng, M.; Yang, H.; Huang, D.; Chen, L. ImFace: A nonlinear 3D morphable face model with implicit neural representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 20343–20352.
- Barron, J.T.; Mildenhall, B.; Tancik, M.; Hedman, P.; Martin-Brualla, R.; Srinivasan, P.P. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 5855–5864.
- Biswal, M.; Shao, T.; Rose, K.; Yin, P.; Mccarthy, S. StegaNeRV: Video Steganography using Implicit Neural Representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 888–898.
- Lu, Y.; Wang, Z.; Liu, M.; Wang, H.; Wang, L. Learning spatial-temporal implicit neural representations for event-guided video super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 1557–1567.
- Yang, S.; Ding, M.; Wu, Y.; Li, Z.; Zhang, J. Implicit Neural Representation for Cooperative Low-light Image Enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 12918–12927.
- Zhang, H.; Sindagi, V.; Patel, V.M. Image de-raining using a conditional generative adversarial network. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 3943–3956.
- Chen, H.; Ren, J.; Gu, J.; Wu, H.; Lu, X.; Cai, H.; Zhu, L. Snow Removal in Video: A New Dataset and A Novel Method. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 13165–13176.
- Gu, S.; Meng, D.; Zuo, W.; Zhang, L. Joint convolutional analysis and synthesis sparse representation for single image layer separation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1708–1716.
- Xiao, J.; Fu, X.; Liu, A.; Wu, F.; Zha, Z.J. Image De-raining Transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 12978–12995.
- Ba, Y.; Zhang, H.; Yang, E.; Suzuki, A.; Pfahnl, A.; Chandrappa, C.C.; de Melo, C.M.; You, S.; Soatto, S.; Wong, A.; et al. Not Just Streaks: Towards Ground Truth for Single Image Deraining. In Proceedings of the Computer Vision—ECCV 2022, Tel Aviv, Israel, 23–27 October 2022; Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T., Eds.; Springer Nature: Cham, Switzerland, 2022; pp. 723–740.
- Chen, L.; Chu, X.; Zhang, X.; Sun, J. Simple baselines for image restoration. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 17–33.
- Li, B.; Liu, X.; Hu, P.; Wu, Z.; Lv, J.; Peng, X. All-in-one image restoration for unknown corruption. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 17452–17462.
- Chen, Z.; He, Z.; Lu, Z.M. DEA-Net: Single Image Dehazing Based on Detail-Enhanced Convolution and Content-Guided Attention. IEEE Trans. Image Process. 2024, 33, 1002–1015.
- Gao, T.; Wen, Y.; Zhang, K.; Zhang, J.; Chen, T.; Liu, L.; Luo, W. Frequency-Oriented Efficient Transformer for All-in-One Weather-Degraded Image Restoration. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 1886–1899.
- Chen, S.; Ye, T.; Liu, Y.; Chen, E. SnowFormer: Context interaction transformer with scale-awareness for single image desnowing. arXiv 2022, arXiv:2208.09703.
- Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 17683–17693.
- Dudhane, A.; Thawakar, O.; Zamir, S.W.; Khan, S.; Khan, F.S.; Yang, M.H. Dynamic Pre-training: Towards Efficient and Scalable All-in-One Image Restoration. arXiv 2024, arXiv:2404.02154.
- Özdenizci, O.; Legenstein, R. Restoring vision in adverse weather conditions with patch-based denoising diffusion models. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 10346–10357.
| Method | RainDS-syn RS (PSNR / SSIM) | RainDS-syn RD (PSNR / SSIM) | RainDS-syn RDS (PSNR / SSIM) | RainDS-real RS (PSNR / SSIM) | RainDS-real RD (PSNR / SSIM) | RainDS-real RDS (PSNR / SSIM) | #Param | FLOPs |
|---|---|---|---|---|---|---|---|---|
| GMM (2016) [27] | 26.66 / 0.781 | 23.04 / 0.793 | 21.50 / 0.669 | 23.73 / 0.560 | 18.60 / 0.554 | 21.35 / 0.576 | (traditional method) | — |
| JCAS (2017) [43] | 26.46 / 0.786 | 23.15 / 0.811 | 20.91 / 0.671 | 24.04 / 0.556 | 18.18 / 0.555 | 21.22 / 0.585 | (traditional method) | — |
| JRSRD (2021) [25] | 29.24 / 0.905 | 28.52 / 0.926 | 23.67 / 0.758 | 20.17 / 0.688 | 20.26 / 0.672 | 18.41 / 0.605 | 7.20 M | 24.60 G |
| IDT (2022) [44] | 36.56 / 0.972 | 33.97 / 0.975 | 29.74 / 0.924 | 26.88 / 0.741 | 24.64 / 0.695 | 18.48 / 0.552 | 16.00 M | 61.19 G |
| CCN (2021) [23] | 35.12 / 0.970 | 33.29 / 0.975 | 28.75 / 0.921 | 26.83 / 0.737 | 24.81 / 0.701 | 18.74 / 0.556 | 3.75 M | 245.85 G |
| DRSformer (2023) [30] | 29.32 / 0.921 | 27.96 / 0.901 | 24.15 / 0.731 | 26.63 / 0.703 | 24.25 / 0.696 | 18.96 / 0.525 | 33.7 M | 242.9 G |
| GT-Rain (2022) [45] | 26.23 / 0.741 | 25.74 / 0.722 | 22.19 / 0.672 | 26.98 / 0.712 | 24.59 / 0.699 | 19.07 / 0.508 | 2.29 M | 29.6 G |
| NAFNet (2022) [46] | 36.34 / 0.968 | 32.33 / 0.975 | 29.00 / 0.868 | 26.76 / 0.737 | 24.93 / 0.704 | 19.56 / 0.607 | 40.60 M | 16.19 G |
| NeRD-Rain (2024) [22] | 37.33 / 0.978 | 35.49 / 0.976 | 31.63 / 0.939 | 26.43 / 0.736 | 24.95 / 0.705 | 20.17 / 0.622 | 10.53 M | 79.2 G |
| Restormer (2022) [19] | 36.86 / 0.977 | 34.97 / 0.976 | 31.43 / 0.936 | 26.66 / 0.740 | 25.02 / 0.702 | 21.57 / 0.632 | 26.10 M | 140.99 G |
| UDR-Former (2023) [20] | 37.28 / 0.976 | 34.96 / 0.979 | 32.56 / 0.961 | 27.29 / 0.739 | 25.63 / 0.708 | 22.05 / 0.635 | 8.53 M | 21.58 G |
| INR-ISDF | 37.31 / 0.975 | 35.03 / 0.978 | 32.68 / 0.960 | 27.45 (+0.16) / 0.747 (+0.008) | 26.29 (+0.66) / 0.711 (+0.003) | 23.72 (+1.67) / 0.650 (+0.015) | 12.28 M | 135.86 G |
| Method | RSVD PSNR | RSVD SSIM | Method | RSVD PSNR | RSVD SSIM |
|---|---|---|---|---|---|
| NeRD-Rain [22] | 23.327 | 0.904 | UDR-Former [20] | 24.655 | 0.914 |
| AirNet [47] | 23.530 | 0.898 | DEA-Net [48] | 24.746 | 0.909 |
| AIRFormer [49] | 24.132 | 0.904 | SnowFormer [50] | 24.891 | 0.908 |
| Uformer [51] | 24.327 | 0.900 | DyNet [52] | 24.949 | 0.916 |
| WeatherDiff [53] | 24.428 | 0.910 | INR-ISDF | 25.240 | 0.915 |
| Method | RainDS-syn PSNR | RainDS-syn SSIM | RainDS-real PSNR | RainDS-real SSIM |
|---|---|---|---|---|
| Baseline | 32.21 | 0.957 | 21.89 | 0.626 |
| + INR | 32.23 (+0.02) | 0.959 (+0.002) | 23.17 (+1.28) | 0.633 (+0.007) |
| + IF-Block × 1 | 32.34 (+0.13) | 0.960 (+0.003) | 22.46 (+0.57) | 0.642 (+0.016) |
| + IF-Block × 3 | 32.67 (+0.46) | 0.958 (+0.001) | 22.98 (+1.09) | 0.638 (+0.012) |
| INR-ISDF | 32.68 (+0.47) | 0.960 (+0.003) | 23.72 (+1.83) | 0.650 (+0.024) |
| Method | RainDS-syn PSNR | RainDS-syn SSIM | RainDS-real PSNR | RainDS-real SSIM |
|---|---|---|---|---|
|  | 32.53 | 0.958 | 23.61 | 0.648 |
| + | 32.62 (+0.09) | 0.959 (+0.001) | 23.65 (+0.05) | 0.648 (+0.000) |
| + | 32.60 (+0.07) | 0.960 (+0.002) | 23.64 (+0.04) | 0.649 (+0.001) |
| INR-ISDF | 32.68 (+0.15) | 0.960 (+0.002) | 23.72 (+0.11) | 0.650 (+0.002) |