GMUD-Net: Global Modulated Unbalanced Dual-Branch Network for Image Restoration in Various Degraded Environments
Abstract
1. Introduction
- We propose GMUD-Net, an unbalanced dual-branch architecture where the CNN branch performs the primary restoration, and the lightweight transformer branch provides filtered global guidance.
- We design FGAM, which leverages the fast Fourier transform to enhance cross-channel interaction in the frequency domain and improve global restoration capability.
- We introduce GLBB to integrate local detail modeling with global context aggregation, improving robustness under complex degradations.
- Extensive experiments on nine widely used benchmarks demonstrate that GMUD-Net achieves state-of-the-art or highly competitive performance on representative restoration tasks, including defocus deblurring, image dehazing, and image desnowing.
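The frequency-domain idea behind FGAM can be illustrated with a short sketch. This is not the paper's exact module — the function name, the plain linear channel mixing, and the shapes are assumptions for illustration — but it shows the core mechanism the bullet describes: moving features into the frequency domain with an FFT, where cross-channel interaction couples every spatial location at once, giving a global receptive field at low cost.

```python
import numpy as np

def frequency_channel_mixing(x, w):
    """Illustrative FGAM-style operation (hypothetical, not the paper's
    exact design): transform each channel to the frequency domain with a
    2-D FFT, mix information across channels there, and transform back.
    x: (C, H, W) feature map; w: (C, C) channel-mixing weights."""
    # Real 2-D FFT over the spatial dimensions of every channel.
    X = np.fft.rfft2(x, axes=(-2, -1))          # (C, H, W//2 + 1), complex
    # Cross-channel interaction in the frequency domain: each output
    # channel is a weighted combination of all input channels' spectra.
    # Because every frequency bin summarises the whole image, this single
    # step mixes global spatial context across channels.
    X_mixed = np.einsum('oc,chw->ohw', w, X)
    # Back to the spatial domain; the output keeps the input's shape.
    return np.fft.irfft2(X_mixed, s=x.shape[-2:], axes=(-2, -1))

# Identity mixing weights leave the features unchanged (up to FFT round-off).
x = np.random.rand(4, 8, 8)
y = frequency_channel_mixing(x, np.eye(4))
print(np.allclose(x, y))  # True
```

In a trained network the mixing weights would be learned (and typically followed by a nonlinearity and a spatial-domain residual connection); the sketch only demonstrates why frequency-domain channel mixing is global by construction.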
2. Related Work
2.1. Single Image Restoration
2.2. Global Modulation
2.3. Dual-Branch Networks
3. Methodology
3.1. Overall Workflow
3.2. Global-Local Hybrid Backbone Block
3.3. Frequency-Based Global Attention Module
3.4. Global Transformer Block
3.5. Global Attention Guidance Block
3.6. Loss Function
4. Experiments
4.1. Implementation Details and Evaluation Protocols
4.2. Datasets
4.3. Image Dehazing Results
4.4. Defocus Image Deblurring Results
4.5. Image Desnowing Results
4.6. Ablation Study
4.6.1. Effectiveness of Individual Components
4.6.2. Analysis of Dual-Branch Configuration
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
- Liu, Y.F.; Jaw, D.W.; Huang, S.C.; Hwang, J.N. DesnowNet: Context-aware deep network for snow removal. IEEE Trans. Image Process. 2018, 27, 3064–3073.
- Qian, R.; Tan, R.T.; Yang, W.; Su, J.; Liu, J. Attentive generative adversarial network for raindrop removal from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2482–2491.
- Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389.
- Abuolaim, A.; Brown, M.S. Defocus deblurring using dual-pixel data. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 111–126.
- Nah, S.; Hyun Kim, T.; Mu Lee, K. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3883–3891.
- Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage Retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 12504–12513.
- Zhang, H.; Xiao, L.; Cao, X.; Foroosh, H. Multiple adverse weather conditions adaptation for object detection via causal intervention. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 46, 1742–1756.
- Huang, S.C.; Le, T.H.; Jaw, D.W. DSNet: Joint semantic learning for object detection in inclement weather conditions. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2623–2633.
- Berman, D.; Treibitz, T.; Avidan, S. Air-light estimation using haze-lines. In Proceedings of the 2017 IEEE International Conference on Computational Photography (ICCP); IEEE: New York, NY, USA, 2017; pp. 1–9.
- Yi, Q.; Li, J.; Dai, Q.; Fang, F.; Zhang, G.; Zeng, T. Structure-preserving deraining with residue channel prior guidance. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 4238–4247.
- Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. AOD-Net: All-in-one dehazing network. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4770–4778.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5728–5739.
- McCartney, E.J. Optics of the Atmosphere: Scattering by Molecules and Particles; Wiley: New York, NY, USA, 1976.
- Tipping, M.E. Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1, 211–244.
- Chen, D.; He, M.; Fan, Q.; Liao, J.; Zhang, L.; Hou, D.; Yuan, L.; Hua, G. Gated context aggregation network for image dehazing and deraining. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV); IEEE: New York, NY, USA, 2019; pp. 1375–1383.
- Chen, Z.; He, Z.; Lu, Z.M. DEA-Net: Single image dehazing based on detail-enhanced convolution and content-guided attention. IEEE Trans. Image Process. 2024, 33, 1002–1015.
- Cui, Y.; Ren, W.; Knoll, A. Omni-kernel modulation for universal image restoration. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 12496–12509.
- Chen, H.; Wang, Y.; Guo, T.; Xu, C.; Deng, Y.; Liu, Z.; Ma, S.; Xu, C.; Xu, C.; Gao, W. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12299–12310.
- Song, Y.; He, Z.; Qian, H.; Du, X. Vision transformers for single image dehazing. IEEE Trans. Image Process. 2023, 32, 1927–1941.
- Guo, C.L.; Yan, Q.; Anwar, S.; Cong, R.; Ren, W.; Li, C. Image dehazing transformer with transmission-aware 3D position embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5812–5820.
- Liu, H.; Li, X.; Tan, T. Interaction-guided two-branch image dehazing network. In Proceedings of the Asian Conference on Computer Vision, Hanoi, Vietnam, 8–12 December 2024; pp. 4069–4084.
- Wang, S.; Li, H.; Liu, L.; Cai, R.; Yin, Z.; Zhu, H. TSFI-fusion: A dual-branch decoupled infrared and visible image fusion network based on transformer and spatial-frequency interaction. Opt. Lasers Eng. 2025, 195, 109287.
- Qiu, Y.; Zhang, K.; Wang, C.; Luo, W.; Li, H.; Jin, Z. MB-TaylorFormer: Multi-branch efficient transformer expanded by Taylor formula for image dehazing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 12802–12813.
- Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 2015, 24, 3522–3533.
- Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. DehazeNet: An end-to-end system for single image haze removal. IEEE Trans. Image Process. 2016, 25, 5187–5198.
- Dong, H.; Pan, J.; Xiang, L.; Hu, Z.; Zhang, X.; Wang, F.; Yang, M.H. Multi-scale boosted dehazing network with dense feature fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2157–2167.
- Qin, X.; Wang, Z.; Bai, Y.; Xie, X.; Jia, H. FFA-Net: Feature fusion attention network for single image dehazing. Proc. AAAI Conf. Artif. Intell. 2020, 34, 11908–11915.
- Cui, Y.; Ren, W.; Cao, X.; Knoll, A. Focal network for image restoration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 13001–13011.
- Chen, L.; Chu, X.; Zhang, X.; Sun, J. Simple baselines for image restoration. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2022; pp. 17–33.
- Ruan, L.; Chen, B.; Li, J.; Lam, M. Learning to deblur using light field generated and real defocus images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 16304–16313.
- Quan, Y.; Wu, Z.; Xu, R.; Ji, H. Deep single image defocus deblurring via Gaussian kernel mixture learning. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 11361–11377.
- Mao, X.; Liu, Y.; Shen, W.; Li, Q.; Wang, Y. Deep residual Fourier transformation for single image deblurring. arXiv 2021, arXiv:2111.11745.
- Nussbaumer, H.J. The fast Fourier transform. In Fast Fourier Transform and Convolution Algorithms; Springer: Berlin/Heidelberg, Germany, 1981; pp. 80–111.
- Cui, Y.; Tao, Y.; Bing, Z.; Ren, W.; Gao, X.; Cao, X.; Huang, K.; Knoll, A. Selective frequency network for image restoration. In Proceedings of the Eleventh International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2023.
- Valanarasu, J.M.J.; Yasarla, R.; Patel, V.M. TransWeather: Transformer-based restoration of images degraded by adverse weather conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2353–2363.
- Cui, Y.; Wang, Q.; Li, C.; Ren, W.; Knoll, A. EENet: An effective and efficient network for single image dehazing. Pattern Recognit. 2025, 158, 111074.
- Zhou, M.; Huang, J.; Guo, C.L.; Li, C. Fourmer: An efficient global modeling paradigm for image restoration. In Proceedings of the International Conference on Machine Learning; PMLR: Cambridge, MA, USA, 2023; pp. 42589–42601.
- Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A general U-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 17683–17693.
- Yu, Z.; Zhao, C.; Wang, Z.; Qin, Y.; Su, Z.; Li, X.; Zhou, F.; Zhao, G. Searching central difference convolutional networks for face anti-spoofing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 5295–5305.
- Su, Z.; Liu, W.; Yu, Z.; Hu, D.; Liao, Q.; Tian, Q.; Pietikäinen, M.; Liu, L. Pixel difference networks for efficient edge detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 5117–5127.
- Hendrycks, D.; Gimpel, K. Gaussian error linear units (GELUs). arXiv 2016, arXiv:1606.08415.
- Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking single-image dehazing and beyond. IEEE Trans. Image Process. 2018, 28, 492–505.
- Ancuti, C.O.; Ancuti, C.; Sbert, M.; Timofte, R. Dense-Haze: A benchmark for image dehazing with dense-haze and haze-free images. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP); IEEE: New York, NY, USA, 2019; pp. 1014–1018.
- Ancuti, C.O.; Ancuti, C.; Timofte, R. NH-HAZE: An image dehazing benchmark with non-homogeneous hazy and haze-free images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 444–445.
- Zhang, J.; Cao, Y.; Zha, Z.J.; Tao, D. Nighttime dehazing with a synthetic benchmark. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 2355–2363.
- Chen, W.T.; Fang, H.Y.; Ding, J.J.; Tsai, C.C.; Kuo, S.Y. JSTASR: Joint size and transparency-aware snow removal algorithm based on modified partial convolution and veiling effect removal. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 754–770.
- Chen, W.T.; Fang, H.Y.; Hsieh, C.L.; Tsai, C.C.; Chen, I.; Ding, J.J.; Kuo, S.Y. All snow removed: Single image desnowing algorithm using hierarchical dual-tree complex wavelet representation and contradict channel loss. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 4196–4205.
- Liu, X.; Ma, Y.; Shi, Z.; Chen, J. GridDehazeNet: Attention-based multi-scale network for image dehazing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7314–7323.
- Wu, H.; Qu, Y.; Lin, S.; Zhou, J.; Qiao, R.; Zhang, Z.; Xie, Y.; Ma, L. Contrastive learning for compact single image dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10551–10560.
- Ye, T.; Zhang, Y.; Jiang, M.; Chen, L.; Liu, Y.; Chen, S.; Chen, E. Perceiving and modeling density for image dehazing. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2022; pp. 130–145.
- Tu, Z.; Talebi, H.; Zhang, H.; Yang, F.; Milanfar, P.; Bovik, A.; Li, Y. MAXIM: Multi-axis MLP for image processing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5769–5780.
- Li, Y.; Tan, R.T.; Brown, M.S. Nighttime haze removal with glow and multiple light colors. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 226–234.
- Zhang, J.; Cao, Y.; Fang, S.; Kang, Y.; Wen Chen, C. Fast haze removal for nighttime image using maximum reflectance prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7418–7426.
- Wang, T.; Tao, G.; Lu, W.; Zhang, K.; Luo, W.; Zhang, X.; Lu, T. Restoring vision in hazy weather with hierarchical contrastive learning. Pattern Recognit. 2024, 145, 109956.
- Cui, Y.; Ren, W.; Cao, X.; Knoll, A. Image restoration via frequency selection. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 46, 1093–1108.
- Karaali, A.; Jung, C.R. Edge-based defocus blur estimation with adaptive scale selection. IEEE Trans. Image Process. 2017, 27, 1126–1137.
- Lee, J.; Lee, S.; Cho, S.; Lee, S. Deep defocus map estimation using domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12222–12230.
- Shi, J.; Xu, L.; Jia, J. Just noticeable defocus blur detection and estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 657–665.
- Son, H.; Lee, J.; Cho, S.; Lee, S. Single image defocus deblurring using kernel-sharing parallel atrous convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 2642–2650.
- Lee, J.; Son, H.; Rim, J.; Cho, S.; Lee, S. Iterative filter adaptive network for single image defocus deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 2034–2042.
- Abuolaim, A.; Afifi, M.; Brown, M.S. Improving single-image defocus deblurring: How dual-pixel images help through multi-task learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 1231–1239.
- Cui, Y.; Ren, W.; Yang, S.; Cao, X.; Knoll, A. IRNeXt: Rethinking convolutional network design for image restoration. In Proceedings of the 40th International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; pp. 6545–6564.
- Engin, D.; Genç, A.; Kemal Ekenel, H. Cycle-Dehaze: Enhanced CycleGAN for single image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 825–833.
- Li, R.; Tan, R.T.; Cheong, L.F. All in one bad weather removal using architectural search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3175–3185.
| Method | SOTS-indoor PSNR | SOTS-indoor SSIM | SOTS-outdoor PSNR | SOTS-outdoor SSIM | Dense-Haze PSNR | Dense-Haze SSIM | NH-Haze PSNR | NH-Haze SSIM | Params (M) | FLOPs (G) | Venue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DehazeNet [26] | 19.82 | 0.8210 | 24.75 | 0.9271 | 13.84 | 0.43 | 16.62 | 0.52 | 0.009 | 0.581 | TIP 2016 |
| GridDehazeNet [49] | 32.16 | 0.9845 | 30.86 | 0.9827 | 13.31 | 0.37 | 13.80 | 0.54 | 0.956 | 21.49 | ICCV 2019 |
| MSBDN [27] | 33.67 | 0.9856 | 33.48 | 0.9824 | 15.37 | 0.49 | 19.23 | 0.71 | 31.35 | 24.44 | CVPR 2020 |
| FFA-Net [28] | 36.39 | 0.9894 | 33.57 | 0.9842 | 14.39 | 0.45 | 19.87 | 0.69 | 4.456 | 287.8 | AAAI 2020 |
| AECR-Net [50] | 37.17 | 0.9901 | - | - | 15.80 | 0.47 | 19.88 | 0.72 | 2.611 | 52.20 | CVPR 2021 |
| PMNet [51] | 38.41 | 0.9900 | 34.74 | 0.9850 | 16.79 | 0.51 | 20.42 | 0.73 | 18.90 | 81.13 | ECCV 2022 |
| MAXIM-2S [52] | 38.11 | 0.9910 | 34.19 | 0.9850 | - | - | - | - | 14.1 | 216 | CVPR 2022 |
| Dehamer [21] | 36.63 | 0.9881 | 35.18 | 0.9860 | 16.62 | 0.56 | - | - | 132.50 | 60.3 | CVPR 2022 |
| Fourmer [38] | 37.32 | 0.9901 | - | - | 15.95 | 0.49 | 19.91 | 0.72 | 1.29 | 20.6 | ICML 2023 |
| Dehazeformer [20] | 38.46 | 0.9940 | 34.29 | 0.9830 | - | - | 19.11 | 0.66 | 4.634 | 48.64 | TIP 2023 |
| MB-TaylorFormer-B [24] | 40.71 | 0.9920 | 37.42 | 0.9890 | 16.66 | 0.56 | - | - | 2.68 | 38.5 | ICCV 2023 |
| DEA-Net-CR [17] | 41.31 | 0.9945 | 36.59 | 0.9897 | - | - | - | - | 3.653 | 32.23 | TIP 2024 |
| Restormer [13] | 38.88 | 0.9910 | - | - | 15.78 | 0.55 | - | - | 25.31 | 87.7 | CVPR 2022 |
| FocalNet [29] | 40.82 | 0.9960 | 37.71 | 0.9950 | 17.07 | 0.63 | 20.43 | 0.79 | 3.74 | 30.63 | ICCV 2023 |
| OKM [18] | 40.79 | 0.9960 | 37.68 | 0.9950 | 16.92 | 0.64 | 20.48 | 0.80 | 4.72 | 39.67 | AAAI 2024 |
| Ours | 41.70 | 0.9966 | 38.57 | 0.9953 | 17.13 | 0.64 | 20.47 | 0.79 | 6.28 | 37.15 | - |
| Method | GS [53] | MRPF [54] | MRP [54] | OSFD [46] | HCD [55] | FSNet-S [56] | Restormer [13] | FocalNet [29] | Ours |
|---|---|---|---|---|---|---|---|---|---|
| PSNR | 17.32 | 16.95 | 19.93 | 21.32 | 23.43 | 24.35 | 25.01 | 25.35 | 25.95 |
| SSIM | 0.629 | 0.667 | 0.777 | 0.804 | 0.953 | 0.965 | 0.967 | 0.969 | 0.973 |
| Method | Indoor PSNR↑ | Indoor SSIM↑ | Indoor MAE↓ | Indoor LPIPS↓ | Outdoor PSNR↑ | Outdoor SSIM↑ | Outdoor MAE↓ | Outdoor LPIPS↓ | Combined PSNR↑ | Combined SSIM↑ | Combined MAE↓ | Combined LPIPS↓ | Params (M) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EBDB [57] | 25.77 | 0.772 | 0.040 | 0.297 | 21.25 | 0.599 | 0.058 | 0.373 | 23.45 | 0.683 | 0.049 | 0.336 | - |
| DMENet [58] | 25.50 | 0.788 | 0.038 | 0.298 | 21.43 | 0.644 | 0.063 | 0.397 | 23.41 | 0.714 | 0.051 | 0.349 | - |
| JNB [59] | 26.73 | 0.828 | 0.031 | 0.273 | 21.10 | 0.608 | 0.064 | 0.355 | 23.84 | 0.715 | 0.048 | 0.315 | - |
| DPDNet [5] | 26.54 | 0.816 | 0.031 | 0.239 | 22.25 | 0.682 | 0.056 | 0.313 | 24.34 | 0.747 | 0.044 | 0.277 | 31.03 |
| KPAC [60] | 27.97 | 0.852 | 0.026 | 0.182 | 22.62 | 0.701 | 0.053 | 0.269 | 25.22 | 0.774 | 0.040 | 0.227 | 2.06 |
| DeepRFT [33] | - | - | - | - | - | - | - | - | 25.71 | 0.801 | 0.039 | 0.218 | 9.60 |
| IFAN [61] | 28.11 | 0.861 | 0.026 | 0.179 | 22.76 | 0.720 | 0.052 | 0.254 | 25.37 | 0.789 | 0.039 | 0.217 | 10.48 |
| MDP [62] | 28.02 | 0.841 | 0.027 | - | 22.82 | 0.690 | 0.052 | - | 25.35 | 0.763 | 0.040 | 0.303 | 46.86 |
| DRBNet [31] | - | - | - | - | - | - | - | - | 25.73 | 0.791 | - | 0.183 | 11.69 |
| Restormer [13] | 28.87 | 0.882 | 0.025 | 0.145 | 23.24 | 0.743 | 0.050 | 0.209 | 25.98 | 0.811 | 0.038 | 0.178 | 26.16 |
| FocalNet [29] | 29.10 | 0.876 | 0.024 | 0.173 | 23.41 | 0.743 | 0.049 | 0.246 | 26.18 | 0.808 | 0.037 | 0.210 | 12.82 |
| Ours | 28.89 | 0.878 | 0.024 | 0.149 | 23.25 | 0.741 | 0.049 | 0.212 | 25.99 | 0.808 | 0.037 | 0.182 | 15.58 |
| Method | CSD PSNR | CSD SSIM | SRRS PSNR | SRRS SSIM | Snow100K PSNR | Snow100K SSIM |
|---|---|---|---|---|---|---|
| DesnowNet [2] | 20.13 | 0.81 | 20.38 | 0.84 | 30.50 | 0.94 |
| CycleGAN [64] | 20.98 | 0.80 | 20.21 | 0.74 | 26.81 | 0.89 |
| All in One [65] | 26.31 | 0.87 | 24.98 | 0.88 | 26.07 | 0.88 |
| JSTASR [47] | 27.96 | 0.88 | 25.82 | 0.89 | 23.12 | 0.86 |
| HDCW-Net [48] | 29.06 | 0.91 | 27.78 | 0.92 | 31.54 | 0.95 |
| TransWeather [36] | 31.76 | 0.93 | 28.29 | 0.92 | 31.82 | 0.93 |
| NAFNet [30] | 33.13 | 0.96 | 29.72 | 0.94 | 32.41 | 0.95 |
| Restormer [13] | 37.07 | 0.99 | 31.12 | 0.97 | 33.51 | 0.95 |
| FocalNet [29] | 37.18 | 0.99 | 31.34 | 0.98 | 33.53 | 0.95 |
| IRNeXt [63] | 37.29 | 0.99 | 31.91 | 0.98 | 33.61 | 0.95 |
| Ours | 37.68 | 0.99 | 32.25 | 0.98 | 33.80 | 0.96 |
| Methods | FA | FGAM | GLBB | GAG | PSNR | SSIM | Params/M | FLOPs/G |
|---|---|---|---|---|---|---|---|---|
| M1 | ✓ |  |  |  | 31.98 | 0.9852 | 2.01 | 17.47 |
| M2 | ✓ | ✓ |  |  | 33.26 | 0.9883 | 2.09 | 17.50 |
| M3 | ✓ |  | ✓ |  | 38.95 | 0.9945 | 3.92 | 21.22 |
| M4 | ✓ | ✓ | ✓ |  | 39.20 | 0.9947 | 4.00 | 21.49 |
| M5 | ✓ | ✓ | ✓ | ✓ | 39.31 | 0.9947 | 4.01 | 21.52 |
| Methods | N | B | PSNR | SSIM | Params/M | FLOPs/G |
|---|---|---|---|---|---|---|
| F1 | 1 | [0, 0, 0, 0, 0] | 39.06 | 0.9946 | 3.47 | 19.47 |
| F2 | 1 | [1, 1, 1, 1, 1] | 39.31 | 0.9947 | 4.01 | 21.52 |
| F3 | 1 | [1, 2, 4, 2, 1] | 39.41 | 0.9947 | 4.95 | 23.85 |
| F4 | 1 | [8, 8, 8, 8, 8] | 39.44 | 0.9947 | 6.41 | 29.47 |
Share and Cite
Wang, S.; Liu, Y.; Zhu, H. GMUD-Net: Global Modulated Unbalanced Dual-Branch Network for Image Restoration in Various Degraded Environments. Appl. Sci. 2026, 16, 2854. https://doi.org/10.3390/app16062854