WCGNet: A Weather Codebook and Gating Fusion for Robust 3D Detection Under Adverse Conditions
Abstract
1. Introduction
- A robust 3D object detection network built on a Weather Codebook module and a Weather-Aware Gating Fusion module is proposed. Without relying on additional sensors, the method effectively mitigates the impact of fog-induced degradation of LiDAR point clouds, significantly improving the stability and generalization capability of the detector in adverse weather.
- A Weather Codebook module is designed and trained jointly on paired clear and foggy scenes, enabling the network to learn, store, and generalize clear-scene reference features across diverse weather conditions. This mechanism supports the recall and reconstruction of clear-scene representations and strengthens the network’s robustness and representational capacity on foggy inputs (a minimal VQ-style sketch follows this list).
- A Weather-Aware Gating Fusion module is introduced, which adaptively regulates the degree of feature enhancement using clarity-aware weights predicted by a spatial attention module and a gating module. A multi-head cross-attention mechanism is further incorporated to fuse fog-aware reference features with clear-scene information, producing feature representations that are robust to weather-induced variations (see the fusion sketch after this list).
- A synthetic foggy point cloud dataset, nuScenes-fog, is constructed from the nuScenes dataset (a simplified fog-attenuation sketch follows this list). Extensive experiments on both nuScenes and nuScenes-fog demonstrate the proposed method’s superior robustness and detection performance under foggy conditions, and evaluation on the STF multi-weather dataset confirms the model’s strong adaptability across diverse weather scenarios.
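To make these components concrete, the sketches below illustrate them in PyTorch. They are minimal illustrations derived from the descriptions above, not the authors' released implementation; all module names, layer sizes, and hyperparameters (feature dimension, kernel sizes, loss weights) are assumptions.

First, the Weather Codebook can be viewed as a VQ-style lookup in the spirit of VQ-VAE [18]: each spatial position of a (possibly foggy) BEV feature map is replaced by its nearest clear-scene code, trained with the usual codebook and commitment losses and a straight-through estimator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeatherCodebook(nn.Module):
    """VQ-style codebook sketch (illustrative; sizes are assumptions)."""
    def __init__(self, num_codes: int = 4096, dim: int = 256, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment-loss weight, as in VQ-VAE

    def forward(self, feats: torch.Tensor):
        # feats: [B, C, H, W] BEV features; flatten positions to [B*H*W, C]
        B, C, H, W = feats.shape
        flat = feats.permute(0, 2, 3, 1).reshape(-1, C)
        # Nearest codebook entry per spatial position (Euclidean distance)
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        quant = self.codebook(idx).view(B, H, W, C).permute(0, 3, 1, 2)
        # Codebook loss pulls codes toward features; commitment loss the reverse
        loss = F.mse_loss(quant, feats.detach()) + self.beta * F.mse_loss(feats, quant.detach())
        # Straight-through estimator keeps gradients flowing to the encoder
        quant = feats + (quant - feats).detach()
        return quant, loss
```

Next, a sketch of the Weather-Aware Gating Fusion: a spatial attention branch predicts a per-location clarity weight, a gate conditioned on both feature maps modulates the enhancement strength, and multi-head cross-attention fuses the recalled reference features into the current ones. The exact arrangement (query/key assignment, residual form) is our assumption.

```python
# (continues the imports from the sketch above)
class WeatherAwareGatingFusion(nn.Module):
    """Clarity-aware gating + cross-attention fusion (illustrative sketch)."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Spatial attention: one clarity weight in [0, 1] per BEV location
        self.spatial_attn = nn.Sequential(
            nn.Conv2d(dim, 1, kernel_size=7, padding=3), nn.Sigmoid())
        # Channel-wise gate conditioned on both the foggy and reference maps
        self.gate = nn.Sequential(
            nn.Conv2d(2 * dim, dim, kernel_size=1), nn.Sigmoid())
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, foggy: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
        B, C, H, W = foggy.shape
        clarity = self.spatial_attn(foggy)               # [B, 1, H, W]
        g = self.gate(torch.cat([foggy, ref], dim=1))    # [B, C, H, W]
        # Cross-attention over flattened spatial tokens: foggy queries
        # attend to the recalled clear-scene reference features
        q = foggy.flatten(2).transpose(1, 2)             # [B, H*W, C]
        kv = ref.flatten(2).transpose(1, 2)
        fused, _ = self.cross_attn(q, kv, kv)
        fused = fused.transpose(1, 2).reshape(B, C, H, W)
        # Enhance more where predicted clarity is low, scaled by the gate
        return foggy + (1.0 - clarity) * g * fused

# Hypothetical usage on backbone BEV features `bev`:
#   ref, vq_loss = WeatherCodebook()(bev)      # recall clear-scene references
#   out = WeatherAwareGatingFusion()(bev, ref) # weather-aware fusion
```

Finally, a deliberately simplified Beer–Lambert fog model to convey how foggy point clouds can be synthesized; the actual nuScenes-fog data are generated with the physically based simulation of [15], which models the full optical channel rather than this toy attenuation.

```python
import numpy as np

def simulate_fog_toy(points: np.ndarray, beta: float = 0.05,
                     noise_floor: float = 0.03, seed: int = 0) -> np.ndarray:
    """Toy fog corruption for an [N, 4] (x, y, z, intensity in [0, 1]) cloud."""
    rng = np.random.default_rng(seed)
    r = np.linalg.norm(points[:, :3], axis=1)
    intensity = points[:, 3] * np.exp(-2.0 * beta * r)  # two-way attenuation
    keep = intensity > noise_floor                      # weak returns are lost
    out = points[keep].copy()
    out[:, 3] = intensity[keep]
    # A fraction of lost returns re-appear as near-range scatter clutter
    lost = points[~keep]
    n_scatter = int(0.1 * len(lost))
    if n_scatter > 0:
        clutter = lost[rng.choice(len(lost), n_scatter, replace=False)].copy()
        clutter[:, :3] *= rng.uniform(0.05, 0.4, (n_scatter, 1))  # pull inward
        clutter[:, 3] = noise_floor
        out = np.concatenate([out, clutter], axis=0)
    return out
```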
2. Related Work
2.1. Real LiDAR Tests Under Adverse Weather
2.2. LiDAR-Based 3D Object Detection
2.3. Point Cloud Processing Methods Under Adverse Weather
2.4. Feature Quantization
3. Method
3.1. The Weather Codebook Module
3.2. Weather-Aware Gating Fusion
4. Experiments
4.1. Dataset
4.2. Implementation Details
4.3. Quantitative Comparisons
4.4. Visualization Results
4.5. Ablation Studies
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Xu, X.; Kong, L.; Shuai, H.; Pan, L.; Liu, Z.; Liu, Q. LiMoE: Mixture of LiDAR Representation Learners from Automotive Scenes. arXiv 2025, arXiv:2501.04004. [Google Scholar]
- Zhong, S.; Chen, H.; Qi, Y.; Feng, D.; Chen, Z.; Wu, J.; Wen, W.; Liu, M. Colrio: Lidar-ranging-inertial centralized state estimation for robotic swarms. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024; pp. 3920–3926. [Google Scholar]
- Xing, Z.; Ma, G.; Wang, L.; Yang, L.; Guo, X.; Chen, S. Towards visual interaction: Hand segmentation by combining 3D graph deep learning and laser point cloud for intelligent rehabilitation. IEEE Internet Things J. 2025, 12, 21328–21338. [Google Scholar] [CrossRef]
- Ye, H.; Sunderraman, R.; Ji, S. UAV3D: A Large-scale 3D Perception Benchmark for Unmanned Aerial Vehicles. arXiv 2024, arXiv:2410.11125. [Google Scholar]
- Dong, Y.; Kang, C.; Zhang, J.; Zhu, Z.; Wang, Y.; Yang, X.; Su, H.; Wei, X.; Zhu, J. Benchmarking robustness of 3d object detection to common corruptions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 1022–1032. [Google Scholar]
- Shinde, A.; Sharma, G.; Pattanaik, M.; Singh, S.N. Effect of Fog Particle Size Distribution on 3D Object Detection Under Adverse Weather Conditions. arXiv 2024, arXiv:2408.01085. [Google Scholar] [CrossRef]
- Kutila, M.; Pyykönen, P.; Holzhüter, H.; Colomb, M.; Duthon, P. Automotive LiDAR performance verification in fog and rain. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 1695–1701. [Google Scholar]
- Wichmann, M.; Kamil, M.; Frederiksen, A.; Kotzur, S.; Scherl, M. Long-term investigations of weather influence on direct time-of-flight LiDAR at 905 nm. IEEE Sens. J. 2021, 22, 2024–2036. [Google Scholar] [CrossRef]
- Vriesman, D.; Thoresz, B.; Steinhauser, D.; Zimmer, A.; Britto, A.; Brandmeier, T. An experimental analysis of rain interference on detection and ranging sensors. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; pp. 1–5. [Google Scholar]
- Cassanelli, D.; Cattini, S.; Medici, L.; Ferrari, L.; Rovati, L. A simple experimental method to estimate and benchmark automotive LIDARs performance in fog. Acta IMEKO 2024, 13, 1–8. [Google Scholar] [CrossRef]
- Sezgin, F.; Vriesman, D.; Steinhauser, D.; Lugner, R.; Brandmeier, T. Safe autonomous driving in adverse weather: Sensor evaluation and performance monitoring. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023; pp. 1–6. [Google Scholar]
- Charron, N.; Phillips, S.; Waslander, S.L. De-noising of LiDAR point clouds corrupted by snowfall. In Proceedings of the 2018 15th Conference on Computer and Robot Vision (CRV), Toronto, ON, Canada, 8–10 May 2018; pp. 254–261. [Google Scholar]
- Kurup, A.; Bos, J. DSOR: A scalable statistical filter for removing falling snow from LiDAR point clouds in severe winter weather. arXiv 2021, arXiv:2109.07078. [Google Scholar] [CrossRef]
- Kilic, V.; Hegde, D.; Cooper, A.B.; Patel, V.M.; Foster, M. LiDAR light scattering augmentation (LISA): Physics-based simulation of adverse weather conditions for 3D object detection. In Proceedings of the ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 6–11 April 2025; pp. 1–5. [Google Scholar]
- Hahner, M.; Sakaridis, C.; Dai, D.; Van Gool, L. Fog simulation on real LiDAR point clouds for 3D object detection in adverse weather. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 15283–15292. [Google Scholar]
- Yin, T.; Zhou, X.; Krahenbuhl, P. Center-based 3D object detection and tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11784–11793. [Google Scholar]
- Bai, X.; Hu, Z.; Zhu, X.; Huang, Q.; Chen, Y.; Fu, H.; Tai, C.L. TransFusion: Robust LiDAR-camera fusion for 3D object detection with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1090–1099. [Google Scholar]
- Van den Oord, A.; Vinyals, O.; Kavukcuoglu, K. Neural discrete representation learning. arXiv 2017, arXiv:1711.00937. [Google Scholar]
- Zhu, X.; Cheng, D.; Zhang, Z.; Lin, S.; Dai, J. An empirical study of spatial attention mechanisms in deep networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6688–6697. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. arXiv 2017, arXiv:1706.03762. [Google Scholar]
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11621–11631. [Google Scholar]
- Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11682–11692. [Google Scholar]
- Shi, S.; Wang, X.; Li, H. PointRCNN: 3D object proposal generation and detection from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 770–779. [Google Scholar]
- Yang, Z.; Sun, Y.; Liu, S.; Shen, X.; Jia, J. STD: Sparse-to-dense 3D object detector for point cloud. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1951–1960. [Google Scholar]
- Yang, Z.; Sun, Y.; Liu, S.; Jia, J. 3DSSD: Point-based 3D single stage object detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11040–11048. [Google Scholar]
- Yan, Y.; Mao, Y.; Li, B. SECOND: Sparsely embedded convolutional detection. Sensors 2018, 18, 3337. [Google Scholar] [CrossRef] [PubMed]
- Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. PointPillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12697–12705. [Google Scholar]
- Park, J.I.; Park, J.; Kim, K.S. Fast and accurate desnowing algorithm for LiDAR point clouds. IEEE Access 2020, 8, 160202–160212. [Google Scholar] [CrossRef]
- Le, M.H.; Cheng, C.H.; Liu, D.G.; Nguyen, T.T. An adaptive group of density outlier removal filter: Snow particle removal from lidar data. Electronics 2022, 11, 2993. [Google Scholar] [CrossRef]
- Wang, W.; You, X.; Chen, L.; Tian, J.; Tang, F.; Zhang, L. A scalable and accurate de-snowing algorithm for LiDAR point clouds in winter. Remote Sens. 2022, 14, 1468. [Google Scholar] [CrossRef]
- Han, S.J.; Lee, D.; Min, K.W.; Choi, J. RGOR: De-noising of LiDAR point clouds with reflectance restoration in adverse weather. In Proceedings of the 2023 14th International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 11–13 October 2023; pp. 1844–1849. [Google Scholar]
- Yu, M.Y.; Vasudevan, R.; Johnson-Roberson, M. LiSnowNet: Real-time snow removal for LiDAR point clouds. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 6820–6826. [Google Scholar]
- Seppänen, A.; Ojala, R.; Tammi, K. 4DenoiseNet: Adverse weather denoising from adjacent point clouds. IEEE Robot. Autom. Lett. 2022, 8, 456–463. [Google Scholar] [CrossRef]
- Hahner, M.; Sakaridis, C.; Bijelic, M.; Heide, F.; Yu, F.; Dai, D.; Van Gool, L. LiDAR snowfall simulation for robust 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 16364–16374. [Google Scholar]
- Fang, J.; Zuo, X.; Zhou, D.; Jin, S.; Wang, S.; Zhang, L. LiDAR-Aug: A general rendering-based augmentation framework for 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4710–4720. [Google Scholar]
- Zhan, J.; Liu, T.; Li, R.; Zhang, J.; Zhang, Z.; Chen, Y. Real-Aug: Realistic scene synthesis for LiDAR augmentation in 3D object detection. arXiv 2023, arXiv:2305.12853. [Google Scholar]
- Kingma, D.P.; Welling, M. Auto-encoding Variational Bayes. arXiv 2013, arXiv:1312.6114. [Google Scholar]
- Razavi, A.; Van den Oord, A.; Vinyals, O. Generating diverse high-fidelity images with VQ-VAE-2. arXiv 2019, arXiv:1906.00446. [Google Scholar] [CrossRef]
- Vahdat, A.; Andriyash, E.; Macready, W. DVAE#: Discrete variational autoencoders with relaxed Boltzmann priors. arXiv 2018, arXiv:1805.07445. [Google Scholar] [CrossRef]
- Zhou, S.; Li, L.; Zhang, X.; Zhang, B.; Bai, S.; Sun, M.; Zhao, Z.; Lu, X.; Chu, X. LiDAR-PTQ: Post-training quantization for point cloud 3D object detection. arXiv 2024, arXiv:2401.15865. [Google Scholar]
- Zhu, B.; Jiang, Z.; Zhou, X.; Li, Z.; Yu, G. Class-balanced grouping and sampling for point cloud 3D object detection. arXiv 2019, arXiv:1908.09492. [Google Scholar] [CrossRef]
- Erabati, G.K.; Araujo, H. DDet3D: Embracing 3D object detector with diffusion. Appl. Intell. 2025, 55, 283. [Google Scholar] [CrossRef]
- Chen, Y.; Liu, J.; Zhang, X.; Qi, X.; Jia, J. LargeKernel3D: Scaling up kernels in 3D sparse CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 13488–13498. [Google Scholar]
Table 1. Comparison with representative 3D detectors on nuScenes. mAP and NDS are given in %; the remaining columns are per-class AP (%). C.V.: construction vehicle; Ped.: pedestrian; Mot.: motorcycle; Byc.: bicycle; T.C.: traffic cone; Bar.: barrier.

| Methods | mAP (%) | NDS (%) | Car | Truck | Bus | Trailer | C.V. | Ped. | Mot. | Byc. | T.C. | Bar. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PointPillars [27] | 30.5 | 45.3 | 68.4 | 23.0 | 28.2 | 23.4 | 4.1 | 59.7 | 27.4 | 1.1 | 30.8 | 38.9 |
| 3DSSD [25] | 42.6 | 56.4 | 81.2 | 47.2 | 61.4 | 30.5 | 12.6 | 70.2 | 36.0 | 8.6 | 31.1 | 47.9 |
| CBGS [41] | 52.8 | 63.3 | 81.1 | 48.5 | 54.9 | 42.9 | 10.5 | 80.1 | 51.5 | 22.3 | 70.9 | 65.7 |
| CenterPoint [16] | 58.0 | 65.5 | 84.6 | 51.0 | 60.2 | 53.2 | 17.5 | 83.4 | 53.7 | 28.7 | 76.7 | 70.9 |
| DDet3D [42] | 63.2 | 66.6 | - | - | - | - | - | - | - | - | - | - |
| LargeKernel3D [43] | 65.4 | 70.6 | 85.5 | 53.8 | 64.4 | 59.5 | 29.7 | 85.9 | 72.7 | 46.8 | 79.9 | 75.5 |
| TransFusion-L [17] | 65.5 | 70.2 | 86.2 | 56.7 | 66.3 | 58.8 | 28.2 | 86.1 | 68.3 | 44.2 | 82.0 | 78.2 |
| WCGNet (ours) | 65.7 | 70.5 | 86.4 | 57.0 | 66.6 | 58.5 | 27.8 | 85.7 | 68.8 | 45.3 | 82.4 | 78.4 |
Table 2. Performance on nuScenes (clear weather) and nuScenes-fog. Values in parentheses are the drop relative to clear weather.

| Method | nuScenes mAP (%) | nuScenes NDS (%) | nuScenes-fog mAP (%) | nuScenes-fog NDS (%) |
|---|---|---|---|---|
| PointPillars [27] | 30.5 | 45.3 | 23.4 (−7.1) | 39.5 (−5.8) |
| 3DSSD [25] | 42.6 | 56.4 | 36.9 (−5.7) | 51.0 (−5.4) |
| CBGS [41] | 52.8 | 63.3 | 47.3 (−5.5) | 58.4 (−4.9) |
| CenterPoint [16] | 58.0 | 65.5 | 53.2 (−4.8) | 61.3 (−4.2) |
| DDet3D [42] | 63.2 | 66.6 | 59.3 (−3.9) | 62.8 (−3.8) |
| LargeKernel3D [43] | 65.4 | 70.6 | 62.4 (−3.0) | 67.8 (−2.8) |
| TransFusion-L [17] | 65.5 | 70.2 | 62.1 (−3.4) | 67.5 (−2.7) |
| WCGNet (ours) | 65.7 | 70.5 | 63.1 (−2.6) | 68.2 (−2.3) |
Table 3. Detection results (%) on the STF multi-weather dataset for three baseline detectors combined with different weather-robust point cloud processing methods.

| Baseline | Method | Clear | Light Fog | Dense Fog | Light Snowfall | Heavy Snowfall |
|---|---|---|---|---|---|---|
| SECOND [26] | None | 42.10 | 38.02 | 33.89 | 37.77 | 36.08 |
| SECOND [26] | Fog [15] | 42.34 | 38.85 | 34.03 | 37.31 | 36.08 |
| SECOND [26] | DROR [12] | 38.96 | 34.56 | 31.13 | 35.09 | 35.04 |
| SECOND [26] | LISA [14] | 41.75 | 38.14 | 33.93 | 38.07 | 35.90 |
| SECOND [26] | Snowfall [34] | 42.31 | 38.25 | 34.12 | 37.11 | 37.03 |
| SECOND [26] | WCGNet (ours) | 42.35 | 38.97 | 34.26 | 37.25 | 36.23 |
| PointPillars [27] | None | 38.24 | 34.32 | 29.77 | 34.09 | 30.85 |
| PointPillars [27] | Fog [15] | 37.74 | 35.29 | 30.28 | 35.38 | 30.39 |
| PointPillars [27] | DROR [12] | 34.72 | 30.67 | 28.31 | 30.99 | 29.32 |
| PointPillars [27] | LISA [14] | 37.92 | 33.48 | 28.46 | 33.87 | 28.70 |
| PointPillars [27] | Snowfall [34] | 38.17 | 35.41 | 29.83 | 34.18 | 31.38 |
| PointPillars [27] | WCGNet (ours) | 38.09 | 35.43 | 30.52 | 35.43 | 30.49 |
| CenterPoint [16] | None | 44.11 | 41.34 | 37.93 | 40.91 | 38.68 |
| CenterPoint [16] | Fog [15] | 43.79 | 41.73 | 38.22 | 40.82 | 38.82 |
| CenterPoint [16] | DROR [12] | 40.80 | 39.19 | 37.03 | 38.69 | 38.42 |
| CenterPoint [16] | LISA [14] | 44.70 | 41.28 | 37.36 | 40.26 | 38.11 |
| CenterPoint [16] | Snowfall [34] | 44.33 | 42.25 | 39.19 | 41.23 | 40.14 |
| CenterPoint [16] | WCGNet (ours) | 44.15 | 42.37 | 38.95 | 41.45 | 39.23 |
Table 4. Ablation study of WCGNet components. mAP (%) is reported on nuScenes and mAP* (%) on nuScenes-fog; parentheses give the change relative to the baseline (a).

| Method | WCM | Gate | MA | Fusion | mAP (%) | mAP* (%) |
|---|---|---|---|---|---|---|
| (a) |  |  |  |  | 65.5 | 62.1 |
| (b) | ✓ |  |  |  | 64.9 (−0.6) | 62.5 (+0.4) |
| (c) | ✓ | ✓ |  |  | 65.8 (+0.3) | 62.3 (+0.2) |
| (d) | ✓ |  | ✓ |  | 65.3 (−0.2) | 62.5 (+0.4) |
| (e) | ✓ | ✓ | ✓ |  | 65.6 (+0.1) | 62.9 (+0.8) |
| (f) | ✓ | ✓ |  | ✓ | 65.5 (+0.0) | 62.6 (+0.5) |
| (g) | ✓ | ✓ | ✓ | ✓ | 65.7 (+0.2) | 63.1 (+1.0) |
Table 5. Effect of the codebook size N. mAP (%) is reported on nuScenes and mAP* (%) on nuScenes-fog; parentheses give the change relative to N = 4096.

| N | 2048 | 3072 | 4096 | 5120 |
|---|---|---|---|---|
| mAP (%) | 58.1 (−7.6) | 63.4 (−2.3) | 65.7 | 65.3 (−0.4) |
| mAP* (%) | 56.5 (−6.6) | 61.2 (−1.9) | 63.1 | 62.7 (−0.4) |