SAM2-Dehaze: Fusing High-Quality Semantic Priors with Convolutions for Single-Image Dehazing
Abstract
1. Introduction
- We design a Semantic Prior Fusion Block (SPFB), which introduces SAM2-derived semantic information at multiple stages of the U-Net backbone. This semantic fusion mechanism guides the model to highlight structural features in key regions, enhancing its perception and restoration of edges and textures.
- We design a Parallel Detail-enhanced and Compression Convolution (PDCC), which combines standard, difference, and reconstruction convolutions to enable collaborative multi-level feature modeling. This module improves high-frequency detail representation while reducing redundancy.
- We design a Semantic Alignment Block (SAB) in the reconstruction phase, which performs fine-grained semantic alignment to restore colors, textures, and boundaries of key regions, thereby ensuring semantic consistency, visual naturalness, and structural integrity of the dehazed results.
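To make the SPFB idea above concrete, the following is a minimal sketch (not the authors' implementation) of a semantic-attention gate: segmentation-derived channels are projected to a per-pixel attention map that modulates the backbone features before a residual merge. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def semantic_prior_fusion(feat, sem, w_sem, w_out):
    """Illustrative SPFB-style fusion (a sketch, not the paper's exact design).

    feat  : (C, H, W) backbone feature map
    sem   : (S, H, W) semantic prior channels (e.g., SAM2-derived features)
    w_sem : (C, S) projection of semantic channels to an attention map
    w_out : (C, C) output projection of the gated features
    """
    # Project semantic channels to a per-pixel, per-channel attention map.
    attn = sigmoid(np.einsum("cs,shw->chw", w_sem, sem))
    # Gate the backbone features, project, and merge residually so the fusion
    # cannot destroy information when the semantic prior is uninformative.
    gated = np.einsum("dc,chw->dhw", w_out, feat * attn)
    return feat + gated
```

The residual form means that with zero output weights the block reduces to an identity mapping, a common safety property for prior-injection modules.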
2. Related Work
2.1. Traditional Image Dehazing
2.2. Deep Learning-Based Image Dehazing
2.3. Semantic Priors for Image Dehazing
3. Proposed Model
3.1. Overview of SAM2-Dehaze Model
3.2. Semantic Prior Fusion Block
3.3. Parallel Detail-Enhanced and Compression Convolution
3.4. Semantic Alignment Block
3.5. Training Loss
4. Experiments and Results
4.1. Datasets and Evaluation Metrics
4.2. Implementation Details
4.3. Comparison with the State of the Art
4.4. Ablation Study
- (1) Base + SPFB → V1
- (2) Base + HEConv → V2
- (3) Base + SPFB + HEConv → V3
- (4) Base + SPFB + HEConv + SAB → V4
- (1) External fusion strategies
  - SPFB-N1: The SPFB module is removed, and feature fusion is performed via simple element-wise addition.
  - SPFB-N2: The input RGB image is extended to four channels by appending the semantic segmentation mask as an additional channel before it is fed into the dehazing network.
- (2) Internal structural variations
  - SPFB-F1: The fusion with the input feature F is removed, and only the intermediate semantic attention map is used within the fusion function.
  - SPFB-F2: The feature extraction branch responsible for obtaining the semantic priors is removed.
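The two external-fusion baselines can be sketched as follows; this is an illustrative reconstruction under stated assumptions, not the authors' code. SPFB-N1 sums feature maps element-wise, while SPFB-N2 simply stacks the segmentation mask as a fourth input channel.

```python
import numpy as np

def fuse_n1(feat, sem_feat):
    """SPFB-N1 baseline: plain element-wise addition (shapes must match)."""
    return feat + sem_feat

def fuse_n2(rgb, mask):
    """SPFB-N2 baseline: append the segmentation mask as a 4th channel.

    rgb  : (3, H, W) hazy input image
    mask : (H, W) semantic segmentation mask
    """
    return np.concatenate([rgb, mask[None]], axis=0)  # -> (4, H, W)
```

Neither baseline lets the network weight the semantic signal per pixel, which is the gap the attention-based SPFB is designed to close.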
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| SAM | Segment Anything Model |
| SPFB | Semantic Prior Fusion Block |
| PDCC | Parallel Detail-enhanced and Compression Convolution |
| SAB | Semantic Alignment Block |
| ASM | Atmospheric Scattering Model |
| DCP | Dark Channel Prior |
| U-Net | U-shaped convolutional neural network |
| VGG16 | Visual Geometry Group 16-layer network |
| ReLU | Rectified Linear Unit |
| DEConv | Detail Enhancement Convolution |
| CDC | Center Difference Convolution |
| ADC | Angle Difference Convolution |
| HDC | Horizontal Difference Convolution |
| VDC | Vertical Difference Convolution |
| SCConv | Spatial and Channel Reconstruction Convolution |
| SRU | Spatial Reconstruction Unit |
| CRU | Channel Reconstruction Unit |
| RDB | Residual Dense Block |
| PSNR | Peak Signal-to-Noise Ratio |
| SSIM | Structural Similarity Index |
| FADE | Fog Aware Density Evaluation |
| NIQE | Natural Image Quality Evaluator |
| PIQE | Perception-based Image Quality Evaluator |
| BRISQUE | Blind/Referenceless Image Spatial Quality Evaluator |
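For reference, PSNR (one of the full-reference metrics listed above) compares a restored image against its ground truth via mean squared error. A minimal implementation for images scaled to [0, 1]:

```python
import numpy as np

def psnr(restored, reference, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB for images in the range [0, peak]."""
    mse = np.mean((restored.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a uniform error of 0.1 over a [0, 1] image gives an MSE of 0.01 and hence a PSNR of 20 dB.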
References
- Khan, H.; Xiao, B.; Li, W.; Muhammad, N. Recent Advancement in Haze Removal Approaches. Multimed. Syst. 2022, 28, 687–710. [Google Scholar] [CrossRef]
- Ju, M.; Ding, C.; Ren, W.; Yang, Y.; Zhang, D.; Guo, Y.J. IDE: Image Dehazing and Exposure Using an Enhanced Atmospheric Scattering Model. IEEE Trans. Image Process. 2021, 30, 2180–2192. [Google Scholar] [CrossRef]
- Wang, X.; Chen, X.A.; Ren, W.; Han, Z.; Fan, H.; Tang, Y.; Liu, L. Compensation Atmospheric Scattering Model and Two-Branch Network for Single Image Dehazing. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 2880–2896. [Google Scholar] [CrossRef]
- Xin, W.; Xudong, Z.; Jun, Z.; Rui, S. Image Dehazing Algorithm by Combining Light Field Multi-Cues and Atmospheric Scattering Model. Opto-Electron. Eng. 2025, 47, 190634-1. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16×16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Cui, Y.; Wang, Q.; Li, C.; Ren, W.; Knoll, A. EENet: An effective and efficient network for single image dehazing. Pattern Recognit. 2025, 158, 111074. [Google Scholar] [CrossRef]
- Qin, X.; Wang, Z.; Bai, Y.; Xie, X.; Jia, H. FFA-Net: Feature Fusion Attention Network for Single Image Dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), New York, NY, USA, 7–12 February 2020; Volume 34, pp. 11908–11915. [Google Scholar]
- Wang, Y.; Yan, X.; Wang, F.L.; Xie, H.; Yang, W.; Zhang, X.P.; Qin, J.; Wei, M. UCL-Dehaze: Toward real-world image dehazing via unsupervised contrastive learning. IEEE Trans. Image Process. 2024, 33, 1361–1374. [Google Scholar] [CrossRef]
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment Anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 4015–4026. [Google Scholar]
- Ravi, N.; Gabeur, V.; Hu, Y.T.; Hu, R.; Ryali, C.; Ma, T.; Khedr, H.; Rädle, R.; Rolland, C.; Gustafson, L.; et al. SAM 2: Segment anything in images and videos. arXiv 2024, arXiv:2408.00714. [Google Scholar]
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar] [CrossRef]
- Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar] [CrossRef]
- Berman, D.; Avidan, S. Non-local image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682. [Google Scholar]
- Shi, F.; Jia, Z.; Zhou, Y. Zero-Shot Sand–Dust Image Restoration. Sensors 2025, 25, 1889. [Google Scholar] [CrossRef]
- Ren, W.; Pan, J.; Zhang, H.; Cao, X.; Yang, M.H. Single image dehazing via multi-scale convolutional neural networks with holistic edges. Int. J. Comput. Vis. 2020, 128, 240–259. [Google Scholar] [CrossRef]
- Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. AOD-Net: All-in-one dehazing network. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4770–4778. [Google Scholar]
- Zhang, H.; Patel, V.M. Densely connected pyramid dehazing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3194–3203. [Google Scholar]
- Li, T.; Liu, Y.; Ren, W.; Shiri, B.; Lin, W. Single Image Dehazing Using Fuzzy Region Segmentation and Haze Density Decomposition. IEEE Trans. Circuits Syst. Video Technol. 2025; in press. [Google Scholar] [CrossRef]
- Liu, X.; Ma, Y.; Shi, Z.; Chen, J. GridDehazeNet: Attention-based multi-scale network for image dehazing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7314–7323. [Google Scholar]
- Wu, H.; Qu, Y.; Lin, S.; Zhou, J.; Qiao, R.; Zhang, Z.; Xie, Y.; Ma, L. Contrastive learning for compact single image dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10551–10560. [Google Scholar]
- Hong, M.; Liu, J.; Li, C.; Qu, Y. Uncertainty-driven dehazing network. Proc. AAAI Conf. Artif. Intell. 2022, 36, 906–913. [Google Scholar] [CrossRef]
- Chen, Z.; He, Z.; Lu, Z.M. DEA-Net: Single image dehazing based on detail-enhanced convolution and content-guided attention. IEEE Trans. Image Process. 2024, 33, 1002–1015. [Google Scholar] [CrossRef]
- Wang, X.; Yang, G.; Ye, T.; Liu, Y. Dehaze-RetinexGAN: Real-World Image Dehazing via Retinex-based Generative Adversarial Network. Proc. AAAI Conf. Artif. Intell. 2025, 39, 7997–8005. [Google Scholar] [CrossRef]
- Son, D.M.; Huang, J.R.; Lee, S.H. Image Sand–Dust Removal Using Reinforced Multiscale Image Pair Training. Sensors 2025, 25, 1234. [Google Scholar] [CrossRef]
- Zhang, S.; Ren, W.; Tan, X.; Wang, Z.-J.; Liu, Y.; Zhang, J.; Zhang, X.; Cao, X. Semantic-aware dehazing network with adaptive feature fusion. IEEE Trans. Cybern. 2021, 53, 454–467. [Google Scholar] [CrossRef] [PubMed]
- Cheng, Z.; You, S.; Ila, V.; Li, H. Semantic single-image dehazing. arXiv 2018, arXiv:1804.05624. [Google Scholar] [CrossRef]
- Song, Y.; Yang, C.; Shen, Y.; Wang, P.; Huang, Q.; Kuo, C.C.J. SPG-Net: Segmentation prediction and guidance network for image inpainting. arXiv 2018, arXiv:1805.03356. [Google Scholar] [CrossRef]
- Zhang, Q.; Liu, X.; Li, W.; Chen, H.; Liu, J.; Hu, J.; Xiong, Z.; Yuan, C.; Wang, Y. Distilling semantic priors from SAM to efficient image restoration models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 25409–25419. [Google Scholar]
- Li, S.; Liu, M.; Zhang, Y.; Chen, S.; Li, H.; Dou, Z.; Chen, H. SAM-Deblur: Let Segment Anything boost image deblurring. In Proceedings of the ICASSP 2024–IEEE International Conference on Acoustics, Speech and Signal Processing, Seoul, Republic of Korea, 14–19 April 2024; pp. 2445–2449. [Google Scholar]
- Liu, H.; Shao, M.; Wan, Y.; Liu, Y.; Shang, K. SeBIR: Semantic-guided burst image restoration. Neural Netw. 2025, 181, 106834. [Google Scholar] [CrossRef]
- Li, J.; Wen, Y.; He, L. ScConv: Spatial and channel reconstruction convolution for feature redundancy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 6153–6162. [Google Scholar]
- Wang, Y.; Xiong, J.; Yan, X.; Wei, M. USCFormer: Unified transformer with semantically contrastive learning for image dehazing. IEEE Trans. Intell. Transp. Syst. 2023, 24, 11321–11333. [Google Scholar] [CrossRef]
- Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 2016, 3, 47–57. [Google Scholar] [CrossRef]
- Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking single-image dehazing and beyond. IEEE Trans. Image Process. 2018, 28, 492–505. [Google Scholar] [CrossRef]
- Ancuti, C.O.; Ancuti, C.; Sbert, M.; Timofte, R. DENSE-HAZE: A benchmark for image dehazing with dense-haze and haze-free images. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 1014–1018. [Google Scholar]
- Ancuti, C.O.; Ancuti, C.; Timofte, R. NH-HAZE: An image dehazing benchmark with non-homogeneous hazy and haze-free images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 444–445. [Google Scholar]
- Li, L.; Song, S.; Lv, M.; Jia, Z.; Ma, H. Multi-Focus Image Fusion Based on Fractal Dimension and Parameter Adaptive Unit-Linking Dual-Channel PCNN in Curvelet Transform Domain. Fractal Fract. 2025, 9, 157. [Google Scholar] [CrossRef]
- Lv, M.; Song, S.; Jia, Z.; Li, L.; Ma, H. Multi-Focus Image Fusion Based on Dual-Channel Rybak Neural Network and Consistency Verification in NSCT Domain. Fractal Fract. 2025, 9, 432. [Google Scholar] [CrossRef]
- Cao, Z.H.; Liang, Y.J.; Deng, L.J.; Vivone, G. An Efficient Image Fusion Network Exploiting Unifying Language and Mask Guidance. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 9845–9862. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Sharma, G.; Wu, W.; Dalal, E.N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. 2005, 30, 21–30. [Google Scholar] [CrossRef]
- Choi, L.K.; You, J.; Bovik, A.C. Referenceless prediction of perceptual fog density and perceptual image defogging. IEEE Trans. Image Process. 2015, 24, 3888–3901. [Google Scholar] [CrossRef] [PubMed]
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
- Venkatanath, N.; Praneeth, D.; Sumohana, S.C.; Swarup, S.M. Blind image quality evaluation using perception based features. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015; pp. 1–6. [Google Scholar]
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef] [PubMed]
- Guo, C.L.; Yan, Q.; Anwar, S.; Cong, R.; Ren, W.; Li, C. Image dehazing transformer with transmission-aware 3D position embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5812–5820. [Google Scholar]
- Shen, H.; Zhao, Z.Q.; Zhang, Y.; Zhang, Z. Mutual information-driven triple interaction network for efficient image dehazing. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; pp. 7–16. [Google Scholar]
- Wu, R.Q.; Duan, Z.P.; Guo, C.L.; Chai, Z.; Li, C. RIDCP: Revitalizing real image dehazing via high-quality codebook priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 22282–22291. [Google Scholar]
- Zheng, Y.; Zhan, J.; He, S.; Dong, J.; Du, Y. Curricular contrastive regularization for physics-aware single image dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 5785–5794. [Google Scholar]
- Lu, L.; Xiong, Q.; Xu, B.; Chu, D. MixDehazeNet: Mix structure block for image dehazing network. In Proceedings of the 2024 International Joint Conference on Neural Networks (IJCNN), Yokohama, Japan, 30 June–5 July 2024; pp. 1–10. [Google Scholar]
- Fu, J.; Liu, S.; Liu, Z.; Guo, C.L.; Park, H.; Wu, R.; Wang, G.; Li, C. Iterative Predictor-Critic Code Decoding for Real-World Image Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 11–15 June 2025; pp. 12700–12709. [Google Scholar]
| Datasets | Train (GT/Hazy) | Test (GT/Hazy) | Train Epochs | Pre-Trained Weights |
|---|---|---|---|---|
| RESIDE | ITS (1399/13,990) | SOTS-indoor (500/500) | 500 | – |
| | OTS (8970/313,950) | SOTS-outdoor (500/500) | 50 | – |
| | | RTTS (4322) | 50 | – |
| Dense-Haze | Dense-Haze (45/45) | Dense-Haze (5/5) | 5000 | ITS–L |
| NH-Haze | NH-Haze (45/45) | NH-Haze (5/5) | 5000 | ITS–L |

| Model Name | Num. of Blocks | Embedding Dims |
|---|---|---|
| SAM2-Dehaze-S | [2, 2, 4, 2, 2] | [24, 48, 96, 48, 24] |
| SAM2-Dehaze-B | [4, 4, 8, 4, 4] | [24, 48, 96, 48, 24] |
| SAM2-Dehaze-L | [8, 8, 16, 8, 8] | [24, 48, 96, 48, 24] |

| Method | SOTS-Indoor PSNR↑ | SSIM↑ | CIEDE2000↓ | SOTS-Outdoor PSNR↑ | SSIM↑ | CIEDE2000↓ |
|---|---|---|---|---|---|---|
| DCP (TPAMI’10) | 16.61 | 0.855 | 6.7998 | 19.14 | 0.861 | 11.0133 |
| MSCNN (ECCV’16) | 19.84 | 0.833 | 7.6254 | 22.06 | 0.908 | 6.2546 |
| AOD-Net (ICCV’17) | 20.51 | 0.816 | 8.0860 | 24.14 | 0.920 | 8.9375 |
| GridDehazeNet (ICCV’19) | 32.16 | 0.984 | 1.7784 | 30.86 | 0.960 | 2.2747 |
| FFA-Net (AAAI’20) | 36.39 | 0.989 | 1.1645 | 33.38 | 0.984 | 2.4797 |
| AECR-Net (CVPR’21) | 37.17 | 0.990 | 1.1423 | — | — | — |
| Dehamer (CVPR’22) | 36.63 | 0.988 | 0.9881 | 35.18 | 0.986 | 0.9676 |
| MIT-Net (MM’23) | 40.23 | 0.992 | 0.9920 | 35.18 | 0.988 | 0.9800 |
| RIDCP (CVPR’23) | 18.36 | 0.757 | 9.9759 | 21.62 | 0.833 | 7.9011 |
| C2PNet (CVPR’23) | 42.46 | 0.995 | 0.6997 | 36.68 | 0.990 | 0.9762 |
| DEA-Net (TIP’24) | 41.21 | 0.992 | 0.7994 | 36.24 | 0.989 | 0.9771 |
| SAM2-Dehaze-S | 41.41 | 0.996 | 0.7766 | 35.62 | 0.982 | 1.1625 |
| SAM2-Dehaze-B | 41.56 | 0.996 | 0.7526 | 35.69 | 0.985 | 0.9956 |
| SAM2-Dehaze-L | 42.83 | 0.997 | 0.6929 | 36.22 | 0.989 | 0.9823 |

| Method | Param. (M) | MACs (G) | Latency (ms) |
|---|---|---|---|
| GridDehazeNet (ICCV’19) | 0.96 | 21.43 | 39.69 |
| FFA-Net (AAAI’20) | 4.45 | 287.53 | 164.94 |
| AECR-Net (CVPR’21) | 2.61 | 52.20 | 36.26 |
| Dehamer (CVPR’22) | 132.45 | 48.93 | — |
| MIT-Net (MM’23) | 2.73 | 16.54 | 14.57 |
| C2PNet (CVPR’23) | 7.17 | 460.95 | 173.86 |
| DEA-Net (TIP’24) | 3.65 | 34.04 | 16.12 |
| SAM2-Dehaze-S | 5.06 | 43.07 | 97.93 |
| SAM2-Dehaze-B | 9.78 | 71.36 | 242.21 |
| SAM2-Dehaze-L | 19.24 | 127.96 | 519.89 |

| Method | Dense-Haze PSNR↑ | SSIM↑ | CIEDE2000↓ | NH-Haze PSNR↑ | SSIM↑ | CIEDE2000↓ |
|---|---|---|---|---|---|---|
| DCP (TPAMI’10) | 11.06 | 0.417 | 23.5067 | 13.28 | 0.482 | 18.0389 |
| AOD-Net (ICCV’17) | 12.82 | 0.468 | 24.0294 | 15.69 | 0.573 | 19.3886 |
| FFA-Net (AAAI’20) | 16.24 | 0.561 | 13.8080 | 16.29 | 0.562 | 13.1681 |
| Dehamer (CVPR’22) | 16.63 | 0.587 | 12.8506 | 20.66 | 0.686 | 9.1162 |
| MIT-Net (MM’23) | 16.97 | 0.623 | 12.5450 | 21.25 | 0.712 | 8.5118 |
| RIDCP (CVPR’23) | 8.09 | 0.438 | 32.2540 | 12.27 | 0.503 | 20.2104 |
| MixDehazeNet-L (IJCNN’24) | 15.90 | 0.579 | 12.0986 | 21.01 | 0.827 | 9.5122 |
| SAM2-Dehaze-L | 20.61 | 0.725 | 8.5909 | 22.02 | 0.831 | 8.5108 |

| Method | FADE↓ | NIQE↓ | PIQE↓ | BRISQUE↓ |
|---|---|---|---|---|
| GridDehazeNet (ICCV’19) | 1.72 | 4.85 | 23.85 | 29.73 |
| FFA-Net (AAAI’20) | 2.07 | 4.93 | 24.59 | 34.44 |
| Dehamer (CVPR’22) | 1.92 | 4.91 | 23.31 | 34.55 |
| C2PNet (CVPR’23) | 2.06 | 5.03 | 25.05 | 34.80 |
| MIT-Net (MM’23) | 1.97 | 4.92 | 23.25 | 34.37 |
| DEA-Net (TIP’24) | 1.90 | 4.92 | 24.95 | 31.99 |
| IPC-Dehaze (CVPR’25) | 1.15 | 4.08 | 12.34 | 24.79 |
| SAM2-Dehaze (Ours) | 1.71 | 4.75 | 23.23 | 30.09 |

| Variants | Baseline | V1 | V2 | V3 | V4 |
|---|---|---|---|---|---|
| MixDehazeNet-S | ✓ | ✓ | ✓ | ✓ | ✓ |
| SPFB | w/o | ✓ | w/o | ✓ | ✓ |
| HEConv | w/o | w/o | ✓ | ✓ | ✓ |
| SAB | w/o | w/o | w/o | w/o | ✓ |
| PSNR | 39.47 | 41.24 | 40.26 | 41.37 | 41.41 |
| SSIM | 0.995 | 0.996 | 0.995 | 0.996 | 0.996 |

| Variant (SPFB design) | PSNR | SSIM | Location (Block) | PSNR | SSIM |
|---|---|---|---|---|---|
| SPFB-N1 | 38.91 | 0.994 | G-1 | 40.16 | 0.995 |
| SPFB-N2 | 40.26 | 0.995 | G-2 | 40.66 | 0.995 |
| SPFB-F1 | 41.14 | 0.996 | G-3 | 40.89 | 0.995 |
| SPFB-F2 | 41.24 | 0.996 | G-4 | 41.31 | 0.996 |
| SPFB | 41.41 | 0.996 | G-5 | 41.41 | 0.996 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, S.; Wang, J.; Huo, Z. SAM2-Dehaze: Fusing High-Quality Semantic Priors with Convolutions for Single-Image Dehazing. Sensors 2025, 25, 7097. https://doi.org/10.3390/s25227097