Rain-Resilient Image Restoration for Reliable and Sustainable Visual Monitoring in Industrial Inspection and Quality Control
Abstract
1. Introduction
- We propose the Degradation-Background Perception Network (DBPNet), a comprehensive framework that addresses rain degradation by leveraging frequency-domain characteristics and depth-based background priors.
- The Frequency Degradation Perception Module (FDPM) is proposed to explicitly decompose image features into high- and low-frequency components, facilitating targeted degradation modeling and feature refinement through a cross-attention mechanism (a minimal sketch of this frequency split follows this list).
- We design the Depth Background Perception Module (DBPM), which integrates depth priors extracted from Depth-Anything to guide the reconstruction of background details, showcasing the robustness of depth-aware background modeling in adverse weather conditions.
- We propose the Selective Focus Attention (SFA) module, which aligns frequency-domain features with depth priors, selectively emphasizing key features to achieve more accurate rain removal and image reconstruction.
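To make the decomposition behind FDPM concrete, below is a minimal PyTorch sketch of splitting a feature map into low- and high-frequency bands with a circular mask in the Fourier domain. The function name, the radius_ratio cutoff, and the hard circular mask are illustrative assumptions, not the paper's implementation, which further refines the two bands through cross-attention.

```python
import torch


def frequency_decompose(feat: torch.Tensor, radius_ratio: float = 0.25):
    """Split a (B, C, H, W) feature map into low- and high-frequency parts.

    radius_ratio is an illustrative low-pass cutoff, not a value taken
    from the paper.
    """
    _, _, H, W = feat.shape
    spec = torch.fft.fftshift(torch.fft.fft2(feat, norm="ortho"), dim=(-2, -1))

    # Circular low-pass mask centred on the zero-frequency component.
    yy, xx = torch.meshgrid(
        torch.arange(H, device=feat.device),
        torch.arange(W, device=feat.device),
        indexing="ij",
    )
    dist = torch.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)
    low_mask = (dist <= radius_ratio * min(H, W) / 2).to(feat.dtype)

    low_spec = spec * low_mask
    high_spec = spec * (1.0 - low_mask)

    # Back to the spatial domain; imaginary parts are negligible for real input.
    low = torch.fft.ifft2(torch.fft.ifftshift(low_spec, dim=(-2, -1)), norm="ortho").real
    high = torch.fft.ifft2(torch.fft.ifftshift(high_spec, dim=(-2, -1)), norm="ortho").real
    return low, high
```

A hard binary mask is used here only for clarity; a soft or learned mask is a natural alternative and is closer in spirit to a trainable degradation-perception module.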
2. Related Work
2.1. Single Image Deraining
2.2. Frequency Domain in Image Restoration
3. Proposed Methodology
3.1. Frequency Degradation Perception Module
3.2. Depth Background Perception Module
3.3. Loss Function
3.4. Implementation Details
4. Experiments
4.1. Experiment Datasets
4.2. Evaluation Metrics
4.3. Experiment Results
5. Ablation Study
5.1. The Effectiveness of Each Component
- Baseline (w/o FDPM and DBPM): A version of DBPNet that removes both the Frequency Degradation Perception Module (FDPM) and Depth Background Perception Module (DBPM), relying only on spatial features for rain removal.
- Baseline + FDPM: Adds FDPM to the baseline to evaluate the impact of frequency-domain decomposition and cross-attention on degradation feature extraction.
- Baseline + DBPM: Adds DBPM to the baseline to assess the role of depth-based background priors in guiding reconstruction.
- Full DBPNet w/o SFA: Removes the Selective Focus Attention (SFA) module from the full DBPNet, to test the importance of selectively enhancing frequency-background interactions.
- Full DBPNet: The complete model with all proposed modules enabled.
5.2. Frequency Mask Design Analysis
- High-Frequency-Only Mask: A mask that retains only the high-frequency components while discarding the low-frequency components.
- Low-Frequency-Only Mask: A mask that retains only the low-frequency components, ignoring high-frequency details (both variants are sketched after this list).
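Both ablated variants can be expressed by keeping a single branch of the frequency split. The sketch below reuses the hypothetical frequency_decompose helper from the earlier sketch and is illustrative only; it does not reproduce the exact mask implementation evaluated in the paper.

```python
import torch


def masked_reconstruction(feat: torch.Tensor, mode: str = "full",
                          radius_ratio: float = 0.25) -> torch.Tensor:
    """Illustrative single-branch masks for the ablation study."""
    # frequency_decompose is the hypothetical helper from the earlier sketch.
    low, high = frequency_decompose(feat, radius_ratio)
    if mode == "high":      # High-Frequency-Only Mask: discard the low band
        return high
    if mode == "low":       # Low-Frequency-Only Mask: discard the high band
        return low
    return low + high       # original design keeps and recombines both bands
```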
5.3. Depth Representation Alternatives
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Fang, W.; Zhang, G.; Zheng, Y.; Chen, Y. Multi-Task Learning for UAV Aerial Object Detection in Foggy Weather Condition. Remote Sens. 2023, 15, 4617. [Google Scholar] [CrossRef]
- Zhang, G.; Liu, T.; Fang, W.; Zheng, Y. Vision Transformer based Random Walk for Group Re-Identification. arXiv 2024, arXiv:2410.05808. [Google Scholar]
- Garg, K.; Nayar, S.K. Detection and removal of rain from videos. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; CVPR 2004. IEEE: New York, NY, USA, 2004; Volume 1, p. I. [Google Scholar]
- Li, Y.; Tan, R.T.; Guo, X.; Lu, J.; Brown, M.S. Rain streak removal using layer priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2736–2744. [Google Scholar]
- Ding, X.; Chen, L.; Zheng, X.; Huang, Y.; Zeng, D. Single image rain and snow removal via guided L0 smoothing filter. Multimed. Tools Appl. 2016, 75, 2697–2712. [Google Scholar] [CrossRef]
- Reynolds, D.A. Gaussian mixture models. Encycl. Biom. 2009, 741, 3. [Google Scholar]
- Tošić, I.; Frossard, P. Dictionary learning. IEEE Signal Process. Mag. 2011, 28, 27–38. [Google Scholar] [CrossRef]
- Zhang, H.; Patel, V.M. Density-aware single image de-raining using a multi-stream dense network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 695–704. [Google Scholar]
- Ren, C.; Yan, D.; Cai, Y.; Li, Y. Semi-swinderain: Semi-supervised image deraining network using swin transformer. In Proceedings of the ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
- Wang, T.; Yang, X.; Xu, K.; Chen, S.; Zhang, Q.; Lau, R.W. Spatial attentive single-image deraining with a high quality real rain dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12270–12279. [Google Scholar]
- Qian, R.; Tan, R.T.; Yang, W.; Su, J.; Liu, J. Attentive generative adversarial network for raindrop removal from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2482–2491. [Google Scholar]
- Wang, Q.; Jiang, K.; Wang, Z.; Ren, W.; Zhang, J.; Lin, C.W. Multi-scale fusion and decomposition network for single image deraining. IEEE Trans. Image Process. 2023, 33, 191–204. [Google Scholar] [CrossRef]
- Zhang, G.; Fang, W.; Zheng, Y.; Wang, R. SDBAD-Net: A spatial dual-branch attention dehazing network based on meta-former paradigm. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 60–70. [Google Scholar] [CrossRef]
- Fu, X.; Huang, J.; Zeng, D.; Huang, Y.; Ding, X.; Paisley, J. Removing rain from single images via a deep detail network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3855–3863. [Google Scholar]
- Zou, Z.; Yu, H.; Huang, J.; Zhao, F. Freqmamba: Viewing mamba from a frequency perspective for image deraining. In Proceedings of the 32nd ACM International Conference on Multimedia, Melbourne, Australia, 28 October–1 November 2024; pp. 1905–1914. [Google Scholar]
- Yang, L.; Kang, B.; Huang, Z.; Xu, X.; Feng, J.; Zhao, H. Depth anything: Unleashing the power of large-scale unlabeled data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 10371–10381. [Google Scholar]
- Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 17683–17693. [Google Scholar]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5728–5739. [Google Scholar]
- Chen, X.; Pan, J.; Dong, J. Bidirectional multi-scale implicit neural representations for image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 25627–25636. [Google Scholar]
- Chen, C.; Li, H. Robust representation learning with feedback for single image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7742–7751. [Google Scholar]
- Chen, X.; Pan, J.; Jiang, K.; Li, Y.; Huang, Y.; Kong, C.; Dai, L.; Fan, Z. Unpaired Deep Image Deraining Using Dual Contrastive Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2017–2026. [Google Scholar]
- Tao, W.; Yan, X.; Wang, Y.; Wei, M. MFFDNet: Single Image Deraining via Dual-Channel Mixed Feature Fusion. IEEE Trans. Instrum. Meas. 2024, 73, 1–13. [Google Scholar] [CrossRef]
- Jiang, Z.; Yang, S.; Liu, J.; Fan, X.; Liu, R. Multi-scale Synergism Ensemble Progressive and Contrastive Investigation for Image Restoration. IEEE Trans. Instrum. Meas. 2023, 73, 1–14. [Google Scholar]
- Wei, B. DPAFNet: Dual Path Attention Fusion Network for Single Image Deraining. arXiv 2024, arXiv:2401.08185. [Google Scholar]
- He, S.; Lin, G. Gabor-guided transformer for single image deraining. arXiv 2024, arXiv:2403.07380. [Google Scholar]
- Yan, F.; He, Y.; Chen, K.; Cheng, E.; Ma, J. Adaptive Frequency Enhancement Network for Single Image Deraining. arXiv 2024, arXiv:2407.14292. [Google Scholar]
- Gao, N.; Jiang, X.; Zhang, X.; Deng, Y. Efficient Frequency-Domain Image Deraining with Contrastive Regularization. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; Springer: Cham, Switzerland, 2024; pp. 240–257. [Google Scholar]
- Wang, C.; Wang, W.; Yu, C.; Mu, J. Explore Internal and External Similarity for Single Image Deraining with Graph Neural Networks. arXiv 2024, arXiv:2406.00721. [Google Scholar]
- Zhang, H.; Ba, Y.; Yang, E.; Upadhyay, R.; Wong, A.; Kadambi, A.; Guo, Y.; Xiao, X.; Wang, X.; Li, Y.; et al. GT-Rain Single Image Deraining Challenge Report. arXiv 2024, arXiv:2403.12327. [Google Scholar]
- Yu, H.; Huang, J.; Zhao, F.; Gu, J.; Loy, C.C.; Meng, D.; Li, C. Deep fourier up-sampling. Adv. Neural Inf. Process. Syst. 2022, 35, 22995–23008. [Google Scholar]
- Mao, X.; Liu, Y.; Shen, W.; Li, Q.; Wang, Y. Deep residual fourier transformation for single image deblurring. arXiv 2021, arXiv:2111.11745. [Google Scholar]
- Guo, S.; Yong, H.; Zhang, X.; Ma, J.; Zhang, L. Spatial-frequency attention for image denoising. arXiv 2023, arXiv:2302.13598. [Google Scholar]
- Li, C.; Guo, C.L.; Zhou, M.; Liang, Z.; Zhou, S.; Feng, R.; Loy, C.C. Embedding fourier for ultra-high-definition low-light image enhancement. arXiv 2023, arXiv:2302.11831. [Google Scholar]
- Cui, Y.; Tao, Y.; Ren, W.; Knoll, A. Dual-domain attention for image deblurring. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 479–487. [Google Scholar]
- Cho, S.J.; Ji, S.W.; Hong, J.P.; Jung, S.W.; Ko, S.J. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 4641–4650. [Google Scholar]
- Fang, W.; Fan, J.; Zheng, Y.; Weng, J.; Tai, Y.; Li, J. Guided real image dehazing using ycbcr color space. arXiv 2024, arXiv:2412.17496. [Google Scholar] [CrossRef]
- Chen, W.T.; Fang, H.Y.; Hsieh, C.L.; Tsai, C.C.; Chen, I.; Ding, J.J.; Kuo, S.Y. All snow removed: Single image desnowing algorithm using hierarchical dual-tree complex wavelet representation and contradict channel loss. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 4196–4205. [Google Scholar]
- Yang, H.H.; Fu, Y. Wavelet u-net and the chromatic adaptation transform for single image dehazing. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, China, 22–25 September 2019; IEEE: New York, NY, USA, 2019; pp. 2736–2740. [Google Scholar]
- Zou, W.; Jiang, M.; Zhang, Y.; Chen, L.; Lu, Z.; Wu, Y. Sdwnet: A straight dilated network with wavelet transformation for image deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 1895–1904. [Google Scholar]
- Yang, H.H.; Yang, C.H.H.; Tsai, Y.C.J. Y-net: Multi-scale feature aggregation network with wavelet structure similarity loss function for single image dehazing. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; IEEE: New York, NY, USA, 2020; pp. 2628–2632. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
- Jiang, H.; Larsson, G.; Maire, M.; Shakhnarovich, G.; Learned-Miller, E. Self-supervised relative depth learning for urban scene understanding. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 19–35. [Google Scholar]
- Chen, P.Y.; Liu, A.H.; Liu, Y.C.; Wang, Y.C.F. Towards scene understanding: Unsupervised monocular depth estimation with semantic-aware representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2624–2632. [Google Scholar]
- Hu, X.; Fu, C.W.; Zhu, L.; Heng, P.A. Depth-attentional features for single-image rain removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8022–8031. [Google Scholar]
- Oquab, M.; Darcet, T.; Moutakanni, T.; Vo, H.; Szafraniec, M.; Khalidov, V.; Fernandez, P.; Haziza, D.; Massa, F.; El-Nouby, A.; et al. Dinov2: Learning robust visual features without supervision. arXiv 2023, arXiv:2304.07193. [Google Scholar]
- Yang, W.; Tan, R.T.; Feng, J.; Liu, J.; Guo, Z.; Yan, S. Deep joint rain detection and removal from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1357–1366. [Google Scholar]
- Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
- Jiang, K.; Wang, Z.; Yi, P.; Chen, C.; Huang, B.; Luo, Y.; Ma, J.; Jiang, J. Multi-scale progressive fusion network for single image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8346–8355. [Google Scholar]
- Fu, X.; Xiao, J.; Zhu, Y.; Liu, A.; Wu, F.; Zha, Z.J. Continual Image Deraining with Hypergraph Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 9534–9551. [Google Scholar] [CrossRef] [PubMed]
- Luo, Y.; Xu, Y.; Ji, H. Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3397–3405. [Google Scholar]
- Li, X.; Wu, J.; Lin, Z.; Liu, H.; Zha, H. Recurrent squeeze-and-excitation context aggregation net for single image deraining. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 254–269. [Google Scholar]
- Ren, D.; Zuo, W.; Hu, Q.; Zhu, P.; Meng, D. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3937–3946. [Google Scholar]
- Wang, H.; Xie, Q.; Zhao, Q.; Meng, D. A model-driven deep neural network for single image rain removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3103–3112. [Google Scholar]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14821–14831. [Google Scholar]
- Fu, X.; Qi, Q.; Zha, Z.J.; Zhu, Y.; Ding, X. Rain streak removal via dual graph convolutional network. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; pp. 1352–1360. [Google Scholar]
- Yi, Q.; Li, J.; Dai, Q.; Fang, F.; Zhang, G.; Zeng, T. Structure-Preserving Deraining with Residue Channel Prior Guidance. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 4238–4247. [Google Scholar]
- Xiao, J.; Fu, X.; Liu, A.; Wu, F.; Zha, Z.J. Image De-raining Transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 12978–12995. [Google Scholar] [CrossRef] [PubMed]
- Chen, X.; Li, H.; Li, M.; Pan, J. Learning a sparse transformer network for effective image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 5896–5905. [Google Scholar]
| Category | Method | Rain200L PSNR | Rain200L SSIM | Rain200H PSNR | Rain200H SSIM | DID-Data PSNR | DID-Data SSIM | DDN-Data PSNR | DDN-Data SSIM | SPA-Data PSNR | SPA-Data SSIM |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Prior-based methods | DSC [51] | 27.16 | 0.8663 | 14.73 | 0.3815 | 24.24 | 0.8279 | 27.31 | 0.8373 | 34.95 | 0.9416 |
| | GMM [4] | 28.66 | 0.8652 | 14.50 | 0.4164 | 25.81 | 0.8344 | 27.55 | 0.8479 | 34.30 | 0.9428 |
| CNN-based methods | DDN [14] | 34.68 | 0.9671 | 26.05 | 0.8056 | 30.97 | 0.9116 | 30.00 | 0.9041 | 36.16 | 0.9457 |
| | RESCAN [52] | 36.09 | 0.9697 | 26.75 | 0.8353 | 33.38 | 0.9417 | 31.94 | 0.9345 | 38.11 | 0.9707 |
| | PReNet [53] | 37.80 | 0.9814 | 29.04 | 0.8991 | 33.17 | 0.9481 | 32.60 | 0.9459 | 40.16 | 0.9816 |
| | MSPFN [49] | 38.58 | 0.9827 | 29.36 | 0.9034 | 33.72 | 0.9550 | 32.99 | 0.9333 | 43.43 | 0.9843 |
| | RCDNet [54] | 39.17 | 0.9885 | 30.24 | 0.9048 | 34.08 | 0.9532 | 33.04 | 0.9472 | 43.36 | 0.9831 |
| | MPRNet [55] | 39.47 | 0.9825 | 30.67 | 0.9110 | 33.99 | 0.9590 | 33.10 | 0.9347 | 43.64 | 0.9844 |
| | DualGCN [56] | 40.73 | 0.9886 | 31.15 | 0.9125 | 34.37 | 0.9620 | 33.01 | 0.9489 | 44.18 | 0.9902 |
| | SPDNet [57] | 40.50 | 0.9875 | 31.28 | 0.9207 | 34.57 | 0.9560 | 33.15 | 0.9457 | 43.20 | 0.9871 |
| Transformer-based methods | Uformer [17] | 40.20 | 0.9860 | 30.80 | 0.9105 | 35.02 | 0.9621 | 33.95 | 0.9545 | 46.13 | 0.9913 |
| | Restormer [18] | 40.99 | 0.9890 | 32.00 | 0.9329 | 35.29 | 0.9641 | 34.20 | 0.9571 | 47.98 | 0.9921 |
| | IDT [58] | 40.74 | 0.9884 | 32.10 | 0.9344 | 34.89 | 0.9623 | 33.84 | 0.9549 | 47.35 | 0.9930 |
| | DRSformer [59] | 41.23 | 0.9894 | 32.18 | 0.9330 | 35.38 | 0.9647 | 34.36 | 0.9590 | 48.53 | 0.9924 |
| | NeRD-Rain-S [19] | 41.30 | 0.9895 | 32.06 | 0.9315 | 35.36 | 0.9647 | 34.25 | 0.9578 | 48.90 | 0.9936 |
| | NeRD-Rain [19] | 41.71 | 0.9903 | 32.40 | 0.9373 | 35.53 | 0.9659 | 34.45 | 0.9596 | 49.58 | 0.9940 |
| | Ours | 41.75 | 0.9906 | 32.42 | 0.9376 | 35.63 | 0.9703 | 34.49 | 0.9653 | 49.70 | 0.9912 |
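The comparisons above, and the ablations below, are reported in PSNR and SSIM. As a minimal reference, the sketch below shows how these two metrics are commonly computed with scikit-image; evaluating directly on RGB with data_range=255 is an assumption here, since many deraining works compute both metrics on the luminance (Y) channel instead.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(restored: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """PSNR/SSIM between a restored image and its ground truth.

    Both inputs are H x W x 3 uint8 arrays in the same color space.
    """
    psnr = peak_signal_noise_ratio(gt, restored, data_range=255)
    ssim = structural_similarity(gt, restored, data_range=255, channel_axis=-1)
    return psnr, ssim
```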
| Configuration | DID-Data PSNR | DID-Data SSIM | DDN-Data PSNR | DDN-Data SSIM |
|---|---|---|---|---|
| Baseline (w/o FDPM, DBPM) | 34.12 | 0.9532 | 33.54 | 0.9487 |
| Baseline + FDPM | 34.85 | 0.9587 | 34.12 | 0.9556 |
| Baseline + DBPM | 35.22 | 0.9604 | 34.25 | 0.9563 |
| Full DBPNet w/o SFA | 35.48 | 0.9651 | 34.38 | 0.9594 |
| Full DBPNet | 35.63 | 0.9703 | 34.49 | 0.9653 |
| Mask Design | Rain200L PSNR | Rain200L SSIM | Rain200H PSNR | Rain200H SSIM |
|---|---|---|---|---|
| Full DBPNet (Original) | 27.83 | 0.8967 | 29.52 | 0.9125 |
| High-Frequency-Only Mask | 26.85 | 0.8751 | 28.72 | 0.9009 |
| Low-Frequency-Only Mask | 27.05 | 0.8773 | 28.46 | 0.9044 |
| Depth Representation | Rain200L PSNR | Rain200L SSIM | Rain200H PSNR | Rain200H SSIM |
|---|---|---|---|---|
| DINOV2 | 27.60 | 0.8925 | 29.22 | 0.9095 |
| No Depth Information | 26.47 | 0.8654 | 28.36 | 0.8913 |
| Depth-Anything | 27.83 | 0.8967 | 29.52 | 0.9125 |
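For context on the Depth-Anything prior compared above, the sketch below extracts a relative depth map with the Hugging Face depth-estimation pipeline. The checkpoint name and the post-processing are assumptions for illustration; the paper does not state how the depth prior is exported or normalized before DBPM fuses it with image features.

```python
from PIL import Image
from transformers import pipeline

# The checkpoint name is an assumption; the paper only states that
# Depth-Anything provides the depth prior.
depth_estimator = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")


def depth_prior(image_path: str) -> Image.Image:
    """Return a relative depth map to serve as the background prior."""
    result = depth_estimator(Image.open(image_path).convert("RGB"))
    return result["depth"]  # PIL image of per-pixel relative depth
```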