Symmetry-Guided AB-Dynamic Feature Refinement Network for Weakly Supervised Shadow Removal
Abstract
1. Introduction
2. Related Work
2.1. GAN-Based Shadow Removal
| Method | Supervision Level | Shadow Image | Shadow-Free GT | Shadow Mask | Color Space |
|---|---|---|---|---|---|
| ST-CGAN [23] | Fully Supervised | Yes | Yes | Yes | RGB |
| AngularGAN [24] | Fully Supervised | Yes | Yes | Yes | RGB |
| Mask-ShadowGAN [25] | Unsupervised | Yes | Yes (unpaired) | No (learned) | RGB |
| LG-ShadowNet [27] | Unsupervised | Yes | Yes (unpaired) | No (learned) | Lab |
| Le et al. [21] | Weakly Supervised | Yes | No | Yes | RGB |
| G2R-ShadowNet [22] | Weakly Supervised | Yes | No | Yes (GT/pred.) | Lab |
| HQSS [28] | Weakly Supervised | Yes | No | Yes (GT/pred.) | Lab |
| Ours | Weakly Supervised | Yes | No | Yes (GT/pred.) | Lab |
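Several of the compared methods (and ours) operate in the Lab color space rather than RGB, so that lightness and chrominance can be handled separately. As a self-contained illustration (not the paper's code), a single-pixel sRGB-to-Lab conversion under a D65 white point can be sketched as:

```python
# Illustrative helper: convert one sRGB pixel (0-255 per channel) to CIE Lab
# under a D65 white point. The constants are the standard sRGB/CIE values.
def srgb_to_lab(r, g, b):
    # 1) Undo the sRGB gamma to get linear RGB in [0, 1].
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # 2) Linear RGB -> XYZ (sRGB matrix), normalized by the D65 white point.
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) / 1.00000
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883
    # 3) Nonlinear compression, then the L, a, b coordinates.
    def f(t):
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(x), f(y), f(z)
    L = 116.0 * fy - 16.0          # lightness
    a = 500.0 * (fx - fy)          # green-red chrominance
    b_ = 200.0 * (fy - fz)         # blue-yellow chrominance
    return L, a, b_

# White should map to L close to 100 with near-zero chrominance.
L, a, b = srgb_to_lab(255, 255, 255)
```

Working in this space lets a network (or a loss) touch only the L channel for shadow lightness, or only a/b for color, which is why several entries in the table list Lab.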
2.2. Feature Fusion
2.3. Auxiliary Loss Functions for Shadow Removal
3. Method
3.1. High-Frequency Information Enhancer Module
3.2. Dual-Attention Adaptive Fusion Module
3.3. Color Consistency Loss
3.4. Loss Function
4. Experiment
4.1. Datasets
4.2. Evaluation Metrics
4.3. Implementation Details
4.4. Comparison with State-of-the-Art Methods
4.5. Ablation Study
4.5.1. Ablation on Modules and Loss
4.5.2. Ablation Study on DAAF Module
4.5.3. Ablation Study on HFIE Module
4.5.4. Ablation Study on Chrominance-Only Consistency Loss
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Sarkar, S.; Purkayastha, K.; Palaiahnakote, S.; Pal, U.; Saleem, M.H.; Ghosal, P. A New Multimodal Cross-Domain Network for Classification of Challenging Scene Images. In Proceedings of the International Conference on Document Analysis and Recognition (ICDAR) 2025 Workshops, Wuhan, China, 20–21 September 2025; pp. 108–123.
2. Nadimi, S.; Bhanu, B. Physical models for moving shadow and object detection in video. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1079–1087.
3. Agrawal, S.; Natu, P. ABGS Segmenter: Pixel wise adaptive background subtraction and intensity ratio based shadow removal approach for moving object detection. J. Supercomput. 2023, 79, 7937–7969.
4. Suh, H.K.; Hofstee, J.W.; Van Henten, E.J. Improved vegetation segmentation with ground shadow removal using an HDR camera. Precis. Agric. 2018, 19, 218–237.
5. Zhang, W.; Zhao, X.; Morvan, J.-M.; Chen, L. Improving shadow suppression for illumination robust face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 611–624.
6. Liu, Y.; Hou, A.Z.; Huang, X.; Ren, L.; Liu, X. Blind Removal of Facial Foreign Shadows. In Proceedings of the British Machine Vision Conference (BMVC), London, UK, 21–24 November 2022; Available online: https://api.semanticscholar.org/CorpusID:253820934 (accessed on 1 December 2025).
7. Sanin, A.; Sanderson, C.; Lovell, B.C. Improved Shadow Removal for Robust Person Tracking in Surveillance Scenarios. In Proceedings of the 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010; pp. 141–144.
8. Pradhan, P.K.; Purkayastha, K.; Sharma, A.L.; Baruah, U.; Sen, B.; Ghosal, P. Graphically Residual Attentive Network for tackling aerial image occlusion. Comput. Electr. Eng. 2025, 125, 110429.
9. Finlayson, G.D.; Hordley, S.D.; Lu, C.; Drew, M.S. On the removal of shadows from images. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 28, 59–68.
10. Zhang, L.; Zhang, Q.; Xiao, C. Shadow remover: Image shadow removal based on illumination recovering optimization. IEEE Trans. Image Process. 2015, 24, 4623–4636.
11. Liu, F.; Gleicher, M. Texture-Consistent Shadow Removal. In Proceedings of the 10th European Conference on Computer Vision (ECCV), Marseille, France, 12–18 October 2008; pp. 437–450.
12. Guo, R.; Dai, Q.; Hoiem, D. Single-Image Shadow Detection and Removal Using Paired Regions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 2033–2040.
13. Liu, J.; Wang, Q.; Fan, H.; Tian, J.; Tang, Y. A Shadow Imaging Bilinear Model and Three-Branch Residual Network for Shadow Removal. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 15857–15871.
14. Zhang, X.; Zhao, Y.; Gu, C.; Lu, C.; Zhu, S. SpA-Former: An Effective and Lightweight Transformer for Image Shadow Removal. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia, 18–23 June 2023; pp. 1–8.
15. Vasluianu, F.-A.; Seizinger, T.; Timofte, R. WSRD: A Novel Benchmark for High Resolution Image Shadow Removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 17–24 June 2023; pp. 1826–1835.
16. Wang, Y.; Zhou, W.; Feng, H.; Li, L.; Li, H. Progressive Recurrent Network for Shadow Removal. Comput. Vis. Image Underst. 2024, 238, 103861.
17. Jin, Y.; Ye, W.; Yang, W.; Yuan, Y.; Tan, R.T. DeS3: Adaptive Attention-Driven Self and Soft Shadow Removal Using ViT Similarity. AAAI Conf. Artif. Intell. 2024, 38, 2634–2642.
18. Xiao, J.; Fu, X.; Zhu, Y.; Li, D.; Huang, J.; Zhu, K.; Zha, Z.-J. HomoFormer: Homogenized Transformer for Image Shadow Removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 25617–25626.
19. Guo, L.; Huang, S.; Liu, D.; Cheng, H.; Wen, B. ShadowFormer: Global Context Helps Shadow Removal. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; pp. 710–718.
20. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA, USA, 2014; Volume 27, pp. 2672–2680.
21. Le, H.; Samaras, D. From Shadow Segmentation to Shadow Removal. In Proceedings of the 16th European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 264–281.
22. Liu, Z.; Yin, H.; Wu, X.; Wu, Z.; Mi, Y.; Wang, S. From Shadow Generation to Shadow Removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 4925–4934.
23. Wang, J.; Li, X.; Yang, J. Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 1788–1797.
24. Sidorov, O. Conditional GANs for Multi-Illuminant Color Constancy: Revolution or Yet Another Approach? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 1748–1758.
25. Hu, X.; Jiang, Y.; Fu, C.-W.; Heng, P.-A. Mask-ShadowGAN: Learning to Remove Shadows From Unpaired Data. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 2472–2481.
26. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2223–2232.
27. Liu, Z.; Yin, H.; Mi, Y.; Pu, M.; Wang, S. Shadow Removal by a Lightness-Guided Network With Training on Unpaired Data. IEEE Trans. Image Process. 2021, 30, 1853–1865.
28. Zhong, Y.; You, L.; Zhang, Y.; Chao, F.; Tian, Y.; Ji, R. Shadow Removal by High-Quality Shadow Synthesis. arXiv 2022, arXiv:2212.04108.
29. Hu, X.; Fu, C.-W.; Zhu, L.; Qin, J.; Heng, P.-A. Direction-Aware Spatial Context Features for Shadow Detection and Removal. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2795–2808.
30. Zhao, X.; Wang, Z.; Deng, Z.; Qin, H.; Zhu, Z. Transmission-guided multi-feature fusion Dehaze network. Vis. Comput. 2025, 41, 2285–2297.
31. Yi, W.; Dong, L.; Liu, M.; Hui, M.; Kong, L.; Zhao, Y. MFAF-Net: Image dehazing with multi-level features and adaptive fusion. Vis. Comput. 2024, 40, 2293–2307.
32. Yang, J.; Qiu, P.; Zhang, Y.; Marcus, D.S.; Sotiras, A. D-Net: Dynamic Large Kernel with Dynamic Feature Fusion for Volumetric Medical Image Segmentation. Biomed. Signal Process. Control 2026, 113, 108837.
33. Gatys, L.A.; Ecker, A.S.; Bethge, M. Image Style Transfer Using Convolutional Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2414–2423.
34. Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 694–711.
35. Jin, Y.; Sharma, A.; Tan, R.T. DC-ShadowNet: Single-Image Hard and Soft Shadow Removal Using Unsupervised Domain-Classifier Guided Network. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 5007–5016.
36. Cun, X.; Pun, C.-M.; Shi, C. Towards Ghost-Free Shadow Removal via Dual Hierarchical Aggregation Network and Shadow Matting GAN. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 10680–10687.
37. Vasluianu, F.-A.; Romero, A.; Van Gool, L.; Timofte, R. Shadow Removal with Paired and Unpaired Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 826–835.
38. Wan, J.; Yin, H.; Wu, Z.; Wu, X.; Liu, Y.; Wang, S. Style-Guided Shadow Removal. In Proceedings of the 17th European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022; pp. 361–378.
39. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
40. Chen, G.; Dai, K.; Yang, K.; Hu, T.; Chen, X.; Yang, Y.; Dong, W.; Wu, P.; Zhang, Y.; Yan, Q. Bracketing Image Restoration and Enhancement with High-Low Frequency Decomposition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 17–18 June 2024; pp. 6097–6107.
41. Zhao, H.; Kong, X.; He, J.; Qiao, Y.; Dong, C. Efficient Image Super-Resolution Using Pixel Attention. In Proceedings of the European Conference on Computer Vision Workshops (ECCVW), Glasgow, UK, 23–28 August 2020; pp. 56–72.
42. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
43. Le, H.; Samaras, D. Shadow Removal via Shadow Image Decomposition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8577–8586.
44. Zhu, L.; Deng, Z.; Hu, X.; Fu, C.-W.; Xu, X.; Qin, J.; Heng, P.-A. Bidirectional Feature Pyramid Network with Recurrent Attention Residual Modules for Shadow Detection. In Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 122–137.
45. Qu, L.; Tian, J.; He, S.; Tang, Y.; Lau, R.W.H. DeshadowNet: A Multi-Context Embedding Deep Network for Shadow Removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2308–2316.
46. Liu, Y.; Ke, Z.; Xu, K.; Liu, F.; Wang, Z.; Lau, R.W. Recasting Regional Lighting for Shadow Removal. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; pp. 3810–3818.
47. Hu, X.; Wang, T.; Fu, C.-W.; Jiang, Y.; Wang, Q.; Heng, P.-A. Revisiting shadow detection: A new benchmark dataset for complex world. IEEE Trans. Image Process. 2021, 30, 1925–1934.
48. Guo, L.; Wang, C.; Yang, W.; Wang, Y.; Wen, B. Boundary-Aware Divide and Conquer: A Diffusion-Based Solution for Unsupervised Shadow Removal. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; pp. 12999–13008.
49. Wang, D.; Wang, J.; He, N.; Zhang, J.; Zhang, S.; Liu, S. Enhancing Unsupervised Shadow Removal via Multi-Intensity Shadow Generation and Diffusion Modeling. Vis. Comput. 2025, 41, 5461–5476.
50. Yang, Q.; Tan, K.-H.; Ahuja, N. Shadow Removal Using Bilateral Filtering. IEEE Trans. Image Process. 2012, 21, 4361–4368.
51. Gong, H.; Cosker, D. Interactive Shadow Removal and Ground Truth for Variable Scene Categories. In Proceedings of the British Machine Vision Conference (BMVC), Nottingham, UK, 1–5 September 2014.
52. Huang, Y.; Lu, X.; Quan, Y.; Xu, Y.; Ji, H. Image Shadow Removal via Multi-Scale Deep Retinex Decomposition. Pattern Recognit. 2025, 159, 111126.
53. Vicente, T.F.Y.; Hou, L.; Yu, C.P.; Hoai, M.; Samaras, D. Large-Scale Training of Shadow Detectors with Noisily-Annotated Shadow Examples. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 816–832.
54. Xue, W.; Zhang, L.; Mou, X.; Bovik, A.C. Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index. IEEE Trans. Image Process. 2014, 23, 684–695.

| Model (Subnet) | Operator | Added Module | Input | Output | Training | Testing |
|---|---|---|---|---|---|---|
| GS subnet | Conv 7×7, Down ×2, ResBlocks ×9, Up ×2, Conv 7×7 | HFIE | Non-shadow region | Pseudo-shadow image | √ | × |
| D subnet | PatchGAN | – | Real shadow region/pseudo-shadow | Patch-wise realism map | √ | × |
| SR subnet | Conv 7×7, Down ×2, ResBlocks ×9, Up ×2, Conv 7×7 | DAAF | Pseudo-shadow image | Coarse shadow-free image | √ | √ |
| R subnet | Conv 7×7, Down ×2, ResBlocks ×9, Up ×2, Conv 7×7 | – | Coarse output, shadow mask | Refined shadow-free result | √ | √ |
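The GS, SR, and R subnets in the table above share the same encoder-decoder backbone: a 7×7 convolution, two downsampling stages, nine residual blocks, two upsampling stages, and a final 7×7 convolution. A plain-Python trace (assuming each Down/Up stage exactly halves/doubles the spatial resolution, as in the standard ResNet generator) shows the symmetric feature-map sizes:

```python
# Sketch of the subnet's spatial-size schedule. Assumption (not stated in the
# table): down/up stages are stride-2 convolutions that exactly halve/double
# H and W, and the 7x7 convolutions are padded to preserve size.
def trace_generator(h, w):
    """Return (stage name, (H, W)) after each stage of the subnet."""
    sizes = [("input / Conv 7x7", (h, w))]
    for i in range(2):                      # Down x2: each halves H and W
        h, w = h // 2, w // 2
        sizes.append((f"down {i + 1}", (h, w)))
    sizes.append(("ResBlocks x9", (h, w)))  # residual blocks keep the size
    for i in range(2):                      # Up x2: each doubles H and W
        h, w = h * 2, w * 2
        sizes.append((f"up {i + 1}", (h, w)))
    sizes.append(("output / Conv 7x7", (h, w)))
    return sizes

stages = trace_generator(256, 256)
# The decoder mirrors the encoder, so the output size equals the input size.
```

Because the up path exactly mirrors the down path, a 256×256 input bottlenecks at 64×64 through the residual blocks and returns to 256×256 at the output.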
| Method | Data | Shadow RMSE ↓ | Shadow PSNR ↑ | Shadow SSIM ↑ | Non-Shadow RMSE ↓ | Non-Shadow PSNR ↑ | Non-Shadow SSIM ↑ | All RMSE ↓ | All PSNR ↑ | All SSIM ↑ | LPIPS ↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Yang et al. [50] * | - | 23.2 | 21.57 | 0.878 | 14.2 | 22.25 | 0.782 | 15.9 | 20.26 | 0.706 | - |
| Gong et al. [51] * | - | 13.0 | 30.53 | 0.972 | 2.6 | 36.63 | 0.982 | 4.3 | 28.96 | 0.943 | - |
| DHAN [36] | Paired + Mask | 9.6 | 32.92 | 0.988 | 7.4 | 27.15 | 0.971 | 7.8 | 25.66 | 0.956 | 0.0831 |
| SG-ShadowNet [38] | Paired + Mask | 6.0 | 37.60 | 0.990 | 2.4 | 37.42 | 0.985 | 3.0 | 33.90 | 0.972 | 0.0893 |
| TBRNet [13] | Paired | 6.5 | 36.34 | 0.991 | 3.3 | 35.57 | 0.977 | 3.8 | 31.91 | 0.964 | - |
| DeS3 [17] | Paired | 6.5 | 36.49 | 0.989 | 3.3 | 34.72 | 0.972 | 3.9 | 31.39 | 0.957 | - |
| MSRDNet [52] | Paired | 5.5 | 38.93 | 0.991 | 2.4 | 38.49 | 0.985 | 2.9 | 34.94 | 0.972 | - |
| Mask-ShadowGAN [25] | Unpaired Images | 10.8 | 32.19 | 0.984 | 3.8 | 33.44 | 0.974 | 4.8 | 28.81 | 0.946 | - |
| DC-ShadowNet [35] | Unpaired Images | 10.9 | 32.00 | 0.976 | 3.6 | 33.56 | 0.968 | 4.7 | 28.77 | 0.932 | - |
| LG-ShadowNet [27] | Unpaired Images | 9.9 | 32.45 | 0.982 | 3.2 | 33.73 | 0.975 | 4.3 | 29.22 | 0.947 | 0.125 |
| Wang et al. [49] * | Unpaired Images | 7.3 | 35.56 | 0.987 | 2.4 | 36.71 | 0.983 | 3.2 | 32.48 | 0.958 | - |
| FSS2SR (detect) [21] | Shadow + Mask | 10.4 | 33.09 | 0.983 | 2.8 | 35.35 | 0.978 | 3.9 | 30.15 | 0.951 | 0.101 |
| G2R-ShadowNet (detect) [22] | Shadow + Mask | 8.9 | 33.58 | 0.978 | 2.9 | 35.52 | 0.976 | 3.9 | 30.52 | 0.944 | 0.114 |
| G2R-ShadowNet | Shadow + Mask | 8.6 | 33.98 | 0.978 | 2.4 | 37.41 | 0.985 | 3.4 | 31.81 | 0.953 | 0.109 |
| HQSS (detect) [28] | Shadow + Mask | 8.5 | 33.95 | 0.980 | 2.8 | 35.59 | 0.978 | 3.7 | 30.76 | 0.948 | 0.114 |
| HQSS | Shadow + Mask | 8.2 | 34.52 | 0.980 | 2.4 | 37.41 | 0.985 | 3.4 | 32.08 | 0.956 | 0.108 |
| BCDiff [48] * | Shadow + Mask | 7.6 | 35.91 | 0.986 | 2.4 | 37.27 | 0.984 | 3.3 | 32.73 | 0.962 | - |
| Ours (detect) | Shadow + Mask | 8.4 | 34.53 | 0.981 | 2.8 | 35.61 | 0.977 | 3.7 | 31.12 | 0.950 | 0.110 |
| Ours | Shadow + Mask | 7.9 | 35.18 | 0.982 | 2.4 | 37.41 | 0.985 | 3.3 | 32.56 | 0.958 | 0.104 |
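The RMSE, PSNR, and SSIM columns above compare the restored image against the ground truth, optionally restricted to the shadow or non-shadow region by a mask. An illustrative NumPy version of the first two metrics (not the authors' evaluation script, which may compute RMSE in Lab space per the field's convention):

```python
import numpy as np

# Illustrative metrics on a 0-255 scale; `mask` is an optional boolean array
# selecting the region (e.g., shadow pixels) over which RMSE is computed.
def rmse(pred, gt, mask=None):
    d2 = (pred.astype(np.float64) - gt.astype(np.float64)) ** 2
    if mask is not None:
        d2 = d2[mask]
    return float(np.sqrt(d2.mean()))

def psnr(pred, gt, peak=255.0):
    m = ((pred.astype(np.float64) - gt.astype(np.float64)) ** 2).mean()
    return float("inf") if m == 0 else float(10.0 * np.log10(peak ** 2 / m))

gt = np.zeros((16, 16), dtype=np.uint8)
pred = gt + 10                # a constant error of 10 grey levels everywhere
# rmse(pred, gt) -> 10.0 ; psnr(pred, gt) -> 20*log10(255/10), about 28.13 dB
```

Lower RMSE and higher PSNR both indicate a closer match; the region columns in the table correspond to evaluating with the shadow mask, its complement, or no mask at all.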
| Method | Data | Shadow RMSE ↓ | Shadow PSNR ↑ | Shadow SSIM ↑ | Non-Shadow RMSE ↓ | Non-Shadow PSNR ↑ | Non-Shadow SSIM ↑ | All RMSE ↓ | All PSNR ↑ | All SSIM ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| Mask-ShadowGAN [25] † | Unpaired Images | 10.8 | 29.97 | 0.964 | 4.9 | 32.84 | 0.976 | 6.6 | 27.56 | 0.933 |
| LG-ShadowNet [27] † | Unpaired Images | 11.9 | 29.26 | 0.955 | 4.6 | 31.85 | 0.976 | 7.0 | 26.56 | 0.924 |
| DC-ShadowNet [35] † | Unpaired Images | 8.7 | 32.48 | 0.967 | 4.3 | 34.21 | 0.978 | 5.5 | 29.72 | 0.938 |
| G2R-ShadowNet [22] † | Shadow + Mask | 15.8 | 26.7 | 0.940 | 4.5 | 31.63 | 0.981 | 7.8 | 24.94 | 0.902 |
| HQSS [28] | Shadow + Mask | 15.9 | 26.32 | 0.941 | 4.6 | 31.47 | 0.981 | 7.7 | 24.68 | 0.904 |
| Ours | Shadow + Mask | 13.9 | 27.69 | 0.953 | 4.6 | 31.61 | 0.981 | 7.0 | 25.72 | 0.914 |
| Ours (GT) | Shadow + Mask | 13.0 | 28.59 | 0.950 | 4.1 | 33.57 | 0.983 | 6.7 | 26.99 | 0.924 |
| Method | Data | RMSE ↓ (All) | RMSE ↓ (Shadow) | PSNR ↑ | SSIM ↑ |
|---|---|---|---|---|---|
| SP+M-Net [43] | Paired + Mask | - | 22.2 | - | - |
| Mask-ShadowGAN [25] | Unpaired Images | 22.7 | 19.6 | 20.38 | 0.887 |
| LG-ShadowNet [27] | Unpaired Images | 22.0 | 18.3 | 20.68 | 0.880 |
| FSS2SR [21] | Shadow + Mask | - | 20.9 | - | - |
| G2R-ShadowNet [22] | Shadow + Mask | 21.8 | 18.8 | 21.07 | 0.882 |
| HQSS [28] | Shadow + Mask | 18.95 | 16.82 | 21.89 | 0.888 |
| BCDiff [48] | Shadow + Mask | - | 17.7 | 22.23 | 0.893 |
| Ours | Shadow + Mask | 18.3 | 16.1 | 22.03 | 0.897 |
| Method | SR Subnet Params (M) | R Subnet Params (M) | Total Params (M) | SR Subnet MACs (G) | R Subnet MACs (G) | Total MACs (G) | Inference Time (ms) |
|---|---|---|---|---|---|---|---|
| DC-ShadowNet | - | - | 10.59 | - | - | 246.48 | 35.34 |
| SG-ShadowNet | 2.04 | 4.13 | 6.17 | 83.21 | 85.86 | 169.08 | 56.09 |
| Mask-ShadowGAN | - | - | 11.38 | - | - | 266.77 | 33.82 |
| LG-ShadowNet | - | - | 5.70 | - | - | 67.37 | 24.50 |
| G2R-ShadowNet | 11.38 | 11.38 | 22.76 | 266.77 | 267.73 | 534.5 | 76.18 |
| HQSS | 11.38 | 11.38 | 22.76 | 232.80 | 233.76 | 534.5 | 76.15 |
| Ours | 11.90 | 11.38 | 23.28 | 276.98 | 267.52 | 544.49 | 79.40 |
| Method | Shadow Grad-L1 ↓ | Shadow GMSD ↓ | Shadow GMS-Mean ↑ | Shadow LapEnergyDiff ↓ | Edge Grad-L1 ↓ | Edge GMSD ↓ | Edge GMS-Mean ↑ | Edge LapEnergyDiff ↓ |
|---|---|---|---|---|---|---|---|---|
| LG-ShadowNet | 0.02575 | 0.11834 | 0.91349 | 0.02819 | 0.03600 | 0.15935 | 0.85856 | 0.02139 |
| HQSS | 0.02806 | 0.12690 | 0.89987 | 0.03214 | 0.03728 | 0.16794 | 0.84325 | 0.02448 |
| G2R-ShadowNet | 0.02745 | 0.12684 | 0.90330 | 0.02916 | 0.03772 | 0.17205 | 0.84095 | 0.02349 |
| Ours | 0.02569 | 0.11969 | 0.91356 | 0.02943 | 0.03628 | 0.16650 | 0.85000 | 0.02439 |
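The GMSD and GMS-Mean columns follow Xue et al. [52]: a gradient-magnitude similarity map is computed between result and ground truth, and its mean (higher is better) and standard deviation (GMSD, lower is better) are reported. A minimal NumPy sketch, using Prewitt gradients and the commonly used stabilizing constant c = 170 for a 0-255 intensity range:

```python
import numpy as np

# Prewitt gradient magnitude via explicit shifted slices (no SciPy needed).
def _prewitt_grad(img):
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    gx = ((p[:-2, 2:] + p[1:-1, 2:] + p[2:, 2:])
          - (p[:-2, :-2] + p[1:-1, :-2] + p[2:, :-2])) / 3.0
    gy = ((p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:])
          - (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:])) / 3.0
    return np.hypot(gx, gy)

def gms_stats(pred, gt, c=170.0):
    """Gradient magnitude similarity: return (GMS-Mean, GMSD)."""
    g1, g2 = _prewitt_grad(pred), _prewitt_grad(gt)
    s = (2.0 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
    return float(s.mean()), float(s.std())

img = np.tile(np.arange(32.0), (32, 1))   # a simple horizontal ramp image
mean_s, gmsd = gms_stats(img, img)
# identical images: the similarity map is all ones -> GMS-Mean 1, GMSD 0
```

GMSD thus measures how unevenly the edge structure deviates from the reference, which is why it complements the per-pixel RMSE/PSNR numbers in the earlier tables.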
| Method | PSNR ↑ Mean [95% CI] | SSIM ↑ Mean [95% CI] | Gain PSNR [95% CI] | p (PSNR, t / w) | Gain SSIM [95% CI] | p (SSIM, t / w) |
|---|---|---|---|---|---|---|
| Ours | 35.18 [34.72, 35.64] | 0.9817 [0.9804, 0.9830] | – | – | – | – |
| LG-ShadowNet | 32.45 [32.01, 32.89] | 0.9820 [0.9807, 0.9832] | +2.73 [2.39, 3.07] | | −0.0002 [−0.0009, 0.0005] | |
| G2R-ShadowNet | 33.98 [33.53, 34.42] | 0.9779 [0.9764, 0.9793] | +1.20 [1.06, 1.35] | | +0.0038 [0.0033, 0.0044] | |
| HQSS | 34.52 [34.06, 34.97] | 0.9804 [0.9790, 0.9818] | +0.66 [0.49, 0.83] | | +0.0013 [0.0008, 0.0018] | |
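The intervals above summarize per-image paired differences against each baseline. As a generic illustration with synthetic scores (the paper's per-image results are not reproduced here), a paired-bootstrap 95% confidence interval for a mean PSNR gain can be computed as:

```python
import numpy as np

# Synthetic per-image PSNR gains (placeholder data, NOT the paper's scores):
# mean gain of about 0.66 dB over 540 test images, as in the HQSS row.
rng = np.random.default_rng(0)
gain = rng.normal(loc=0.66, scale=1.0, size=540)

def bootstrap_ci(x, n_boot=2000, alpha=0.05, rng=rng):
    """Percentile bootstrap CI for the mean of paired differences x."""
    means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

lo, hi = bootstrap_ci(gain)
# The interval brackets the sample mean; a gain is significant at the 5%
# level (in this bootstrap sense) when the interval excludes zero.
```

A paired t-test or Wilcoxon signed-rank test on the same per-image differences would yield the p-values indicated in the table's header.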
| Method | DAAF | HFIE | COC Loss | Shadow RMSE ↓ | Shadow PSNR ↑ | Shadow SSIM ↑ | All RMSE ↓ | All PSNR ↑ | All SSIM ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Baseline | × | × | × | 8.6 | 33.98 | 0.978 | 3.4 | 31.81 | 0.953 |
| Ours | √ | × | × | 8.4 | 34.37 | 0.978 | 3.3 | 32.07 | 0.954 |
| Ours | √ | √ | × | 8.1 | 34.90 | 0.982 | 3.3 | 32.40 | 0.958 |
| Ours | √ | × | √ | 8.5 | 33.99 | 0.978 | 3.4 | 31.80 | 0.953 |
| Ours | × | √ | √ | 8.3 | 34.63 | 0.981 | 3.4 | 32.24 | 0.957 |
| Ours | √ | √ | √ | 7.9 | 35.18 | 0.982 | 3.3 | 32.56 | 0.958 |
| Method | DAAF | HFIE | COC Loss | Shadow RMSE ↓ | Shadow PSNR ↑ | Shadow SSIM ↑ | All RMSE ↓ | All PSNR ↑ | All SSIM ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Baseline | × | × | × | 14.2 | 27.89 | 0.936 | 7.2 | 26.43 | 0.910 |
| Ours | √ | √ | √ | 13.0 | 28.59 | 0.950 | 6.7 | 26.99 | 0.924 |
| Ours | √ | √ | × | 13.6 | 28.16 | 0.943 | 6.9 | 26.65 | 0.917 |
| Ours | √ | × | √ | 14.5 | 27.62 | 0.933 | 7.3 | 26.23 | 0.906 |
| Ours | × | √ | √ | 15.4 | 27.87 | 0.928 | 7.7 | 26.41 | 0.902 |
| Method | Grad-L1 ↓ | GMSD ↓ | GMS-Mean ↑ |
|---|---|---|---|
| w/o DAAF | 0.02661 | 0.12346 | 0.90850 |
| Ours | 0.02569 | 0.11969 | 0.91356 |
| Method | Shadow RMSE ↓ | Shadow PSNR ↑ | Shadow SSIM ↑ | All RMSE ↓ | All PSNR ↑ | All SSIM ↑ |
|---|---|---|---|---|---|---|
| w/o PA (pixel attention) | 8.0 | 34.96 | 0.981 | 3.3 | 32.42 | 0.957 |
| w/o CA (channel attention) | 8.0 | 34.92 | 0.980 | 3.3 | 32.40 | 0.956 |
| w/o attention (both) | 8.2 | 34.76 | 0.982 | 3.3 | 32.27 | 0.957 |
| Complete model | 7.9 | 35.18 | 0.982 | 3.3 | 32.56 | 0.958 |
| Method | GS Subnet | SR Subnet | Shadow RMSE ↓ | Shadow PSNR ↑ | Shadow SSIM ↑ | All RMSE ↓ | All PSNR ↑ | All SSIM ↑ |
|---|---|---|---|---|---|---|---|---|
| Ours | √ | √ | 8.5 | 34.43 | 0.979 | 3.4 | 32.04 | 0.955 |
| Ours | × | √ | 9.1 | 33.69 | 0.977 | 3.5 | 31.57 | 0.951 |
| Ours | √ | × | 7.9 | 35.18 | 0.982 | 3.3 | 32.56 | 0.958 |
| Number of Convolutional Layers | Shadow RMSE ↓ | Shadow PSNR ↑ | Shadow SSIM ↑ | All RMSE ↓ | All PSNR ↑ | All SSIM ↑ |
|---|---|---|---|---|---|---|
| 1 | 8.3 | 34.67 | 0.980 | 3.3 | 32.25 | 0.956 |
| 2 | 8.2 | 34.40 | 0.980 | 3.4 | 32.02 | 0.956 |
| 3 | 8.5 | 34.57 | 0.979 | 3.4 | 32.17 | 0.955 |
| 4 | 8.0 | 34.95 | 0.982 | 3.3 | 32.40 | 0.958 |
| 5 | 7.9 | 34.93 | 0.979 | 3.3 | 32.42 | 0.956 |
| 6 | 7.9 | 35.18 | 0.982 | 3.3 | 32.56 | 0.958 |
| 7 | 8.4 | 34.94 | 0.980 | 3.4 | 32.41 | 0.956 |
| 8 | 8.9 | 34.20 | 0.980 | 3.5 | 31.82 | 0.956 |
| Method | Shadow RMSE ↓ | Shadow PSNR ↑ | Shadow SSIM ↑ | All RMSE ↓ | All PSNR ↑ | All SSIM ↑ |
|---|---|---|---|---|---|---|
| HFIE (HF+LF fusion) | 8.4 | 34.76 | 0.981 | 3.4 | 32.28 | 0.957 |
| HFIE (HF-only) | 7.9 | 35.18 | 0.982 | 3.3 | 32.56 | 0.958 |
| Weight | Shadow RMSE ↓ | Shadow PSNR ↑ | Shadow SSIM ↑ | All RMSE ↓ | All PSNR ↑ | All SSIM ↑ |
|---|---|---|---|---|---|---|
| 0.1 | 8.1 | 34.81 | 0.980 | 3.3 | 32.39 | 0.957 |
| 0.3 | 8.7 | 34.49 | 0.980 | 3.4 | 32.11 | 0.956 |
| 0.5 | 7.9 | 35.18 | 0.982 | 3.3 | 32.56 | 0.958 |
| 0.7 | 8.1 | 35.02 | 0.981 | 3.3 | 32.45 | 0.957 |
| 0.9 | 8.4 | 34.88 | 0.979 | 3.4 | 32.43 | 0.955 |
| 1 | 8.0 | 34.54 | 0.979 | 3.3 | 32.17 | 0.955 |
| Method | Shadow RMSE ↓ | Shadow PSNR ↑ | Shadow SSIM ↑ | All RMSE ↓ | All PSNR ↑ | All SSIM ↑ |
|---|---|---|---|---|---|---|
| L2 loss | 8.2 | 34.77 | 0.981 | 3.3 | 32.29 | 0.958 |
| COC loss | 7.9 | 35.18 | 0.982 | 3.3 | 32.56 | 0.958 |
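The ablation above contrasts a plain L2 penalty with the chrominance-only consistency (COC) loss. A hedged sketch, assuming (this is my reading of the section titles, not code from the paper) that the COC loss penalizes only the a and b channels of Lab images, leaving shadow-induced lightness changes unconstrained:

```python
import numpy as np

# Assumption: COC loss = mean absolute difference restricted to the
# chrominance channels (a, b) of H x W x 3 Lab images; the lightness
# channel L (index 0) is deliberately excluded.
def coc_loss(pred_lab, ref_lab):
    return float(np.abs(pred_lab[..., 1:] - ref_lab[..., 1:]).mean())

ref = np.zeros((4, 4, 3))
pred_light = ref.copy()
pred_light[..., 0] += 30.0    # pure lightness shift: a shadow-like change
pred_chroma = ref.copy()
pred_chroma[..., 1] += 6.0    # chrominance shift: a color cast

# coc_loss ignores the lightness shift entirely but penalizes the color
# cast, so the network is free to brighten shadows without being pushed
# to alter the scene's colors.
```

Under this reading, the advantage over L2 in the table is plausible: a full L2 loss would also penalize the very lightness changes that shadow removal must make.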
| Weight Pair | Shadow RMSE ↓ | Shadow PSNR ↑ | Shadow SSIM ↑ | All RMSE ↓ | All PSNR ↑ | All SSIM ↑ |
|---|---|---|---|---|---|---|
| (0.5, 0.5) | 7.9 | 35.18 | 0.983 | 3.3 | 32.55 | 0.958 |
| (0.6, 0.4) | 7.9 | 34.89 | 0.982 | 3.3 | 32.37 | 0.958 |
| (0.7, 0.3) | 8.4 | 34.46 | 0.978 | 3.4 | 32.15 | 0.953 |
| (0.8, 0.2) | 8.0 | 34.79 | 0.980 | 3.3 | 32.37 | 0.956 |
| (0.9, 0.1) | 8.0 | 34.87 | 0.980 | 3.3 | 32.37 | 0.957 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Shao, Y.; Zhang, Z.; Yang, M. Symmetry-Guided AB-Dynamic Feature Refinement Network for Weakly Supervised Shadow Removal. Symmetry 2026, 18, 330. https://doi.org/10.3390/sym18020330
