SCOPE: Spatial Context-Aware Pointcloud Encoder for Denoising Under the Adverse Weather Conditions
Abstract
1. Introduction
- We introduce a network that effectively captures the spatial characteristics of points within a voxel, allowing it to discern geometric relationships between points even under sparse conditions. In addition, by emphasizing the differences between clusters through contrastive learning, we promote effective segmentation learning (a minimal sketch of such a contrastive objective follows this list).
- To facilitate the acquisition of point-wise annotated data under adverse weather conditions, we propose a noise point acquisition and labeling strategy. Leveraging this method, we collect point cloud scenes captured in real-world adverse weather environments—including snow, rain, and fog—and construct a dataset comprising over 800 scenes with fine-grained point-wise annotations.
- We train and evaluate SCOPE on the proposed dataset to validate its effectiveness. The experimental results demonstrate that our network successfully detects noise points across various challenging weather scenarios, highlighting its robustness and generalization capability in adverse environments.
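As a concrete illustration of the cluster-level contrastive idea in the first contribution, the following is a minimal sketch of an InfoNCE-style, cluster-supervised contrastive loss. It assumes PyTorch; the function name, temperature value, and masking scheme are illustrative assumptions, not SCOPE's actual implementation.

```python
# Hypothetical sketch: pull embeddings with the same cluster id together,
# push different clusters apart. Names and the temperature are assumptions.
import torch
import torch.nn.functional as F

def cluster_contrastive_loss(embeddings: torch.Tensor,
                             cluster_ids: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, D) per-point features; cluster_ids: (N,) integer labels."""
    z = F.normalize(embeddings, dim=1)            # unit-length features
    sim = z @ z.t() / temperature                 # (N, N) scaled cosine similarity
    n = z.size(0)
    diag = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(diag, float('-inf'))    # exclude self-pairs
    # Positives: pairs sharing a cluster id (self excluded by the diagonal).
    pos = (cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)) & ~diag
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos.sum(dim=1)
    valid = pos_count > 0                         # anchors with >= 1 positive
    # Mean log-likelihood of positive pairs per valid anchor.
    pos_log_prob = log_prob.masked_fill(~pos, 0.0).sum(dim=1)
    return -(pos_log_prob[valid] / pos_count[valid]).mean()
```

In practice a term like this would be added to the segmentation loss with a weighting coefficient; anchors whose cluster has no other member are skipped.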
2. Related Works
2.1. Adverse Weather on LiDAR Point Cloud
2.2. Point Cloud Semantic Segmentation
3. Point-Wise Labeling Strategy for Adverse Weather Dataset
3.1. Background
3.2. Data Collection
3.3. Data Annotation
4. Methodology
4.1. Problem Formulation
4.2. Voxel Feature Extractor
4.3. Spatial Attentive Pooling
4.4. Embedding Aggregation
5. Experiments
5.1. Data Construction
5.2. Implementation Details
- Random Rotation: applied around the Z-axis to simulate changes in heading direction.
- Anisotropic Scaling: independent scaling along each axis to emulate distortions from LiDAR sensor calibration drift.
- Random Flipping: performed in the XY-plane by randomly mirroring the point cloud along the X- and/or Y-axis, introducing further diversity in spatial configurations (all three augmentations are sketched below).
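A minimal sketch of the three augmentations above, assuming the input is an (N, 3) NumPy array of XYZ coordinates; the scaling range and flip probability are illustrative, since the exact hyperparameters are not reproduced here.

```python
import numpy as np

def augment(points: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply random Z-rotation, anisotropic scaling, and random flipping."""
    # Random rotation around the Z-axis to vary heading direction.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot_z = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    points = points @ rot_z.T

    # Anisotropic scaling: an independent factor per axis (range assumed).
    points = points * rng.uniform(0.95, 1.05, size=3)

    # Random flipping: mirror the X- and/or Y-axis with probability 0.5 each.
    if rng.random() < 0.5:
        points = points * np.array([-1.0, 1.0, 1.0])
    if rng.random() < 0.5:
        points = points * np.array([1.0, -1.0, 1.0])
    return points
```

For example, `augmented = augment(cloud, np.random.default_rng(0))` applies one random draw of all three transforms to `cloud`.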
5.3. Evaluation Metrics
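Assuming the standard convention for per-point binary noise segmentation, with noise points as the positive class, the precision (P), recall (R), F1, and IoU reported in Section 6 are defined as:

$$
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 P R}{P + R}, \qquad
\mathrm{IoU} = \frac{TP}{TP + FP + FN},
$$

where TP, FP, and FN count true-positive, false-positive, and false-negative points, and mIoU averages IoU over the noise and non-noise classes.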
6. Results
6.1. Qualitative Results
6.2. Quantitative Results
6.3. Ablation Study
7. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Tang, J.; Tian, F.-P.; Feng, W.; Li, J.; Tan, P. Learning guided convolutional network for depth completion. IEEE Trans. Image Process. 2021, 30, 1116–1129.
2. Gao, Y.; Li, W.; Wang, J.; Zhang, M.; Tao, R. Relationship learning from multisource images via spatial-spectral perception network. IEEE Trans. Image Process. 2024, 33, 3271–3284.
3. Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177.
4. Zhao, X.; Wen, C.; Prakhya, S.M.; Yin, H.; Zhou, R.; Sun, Y.; Xu, J.; Bai, H.; Wang, Y. Multi-modal features and accurate place recognition with robust optimization for LiDAR-visual-inertial SLAM. IEEE Trans. Instrum. Meas. 2024, 73, 1–14.
5. Eisl, N.; Halperin, D. Point cloud based scene segmentation: A survey. Comput. Graph. Forum 2024, 43, e14234.
6. Park, J.; Kim, C.; Jo, K. PCSCNet: Fast 3D semantic segmentation of LiDAR point cloud for autonomous car using point convolution and sparse convolution network. Eng. Appl. Artif. Intell. 2022, 113, 104983.
7. Montalvo, J.; Carballeira, P.; García-Martín, A. SynthmanticLiDAR: A synthetic dataset for semantic segmentation on LiDAR imaging. Comput. Vis. Image Underst. 2024, 239, 103897.
8. Godfrey, J.; Kumar, V.; Subramanian, S.C. Evaluation of flash LiDAR in adverse weather conditions towards active road vehicle safety. IEEE Sens. J. 2023, 23, 17234–17242.
9. Sezgin, F.; Vriesman, D.; Steinhauser, D.; Lugner, R.; Brandmeier, T. Safe autonomous driving in adverse weather: Sensor evaluation and performance monitoring. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023; pp. 1–6.
10. Li, S.; Wang, Z.; Juefei-Xu, F.; Guo, Q.; Li, X.; Ma, L. Common corruption robustness of point cloud detectors: Benchmark and enhancement. IEEE Trans. Multimed. 2023, 25, 8520–8532.
11. Vattem, T.; Sebastian, G.; Lukic, L. Rethinking LiDAR object detection in adverse weather conditions. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 5093–5099.
12. Wang, J.; Wu, Z.; Liang, Y.; Tang, J.; Chen, H. Perception methods for adverse weather based on vehicle infrastructure cooperation system: A review. Sensors 2024, 24, 374.
13. Xiao, A.; Huang, J.; Xuan, W.; Ren, R.; Liu, K.; Guan, D.; Saddik, A.E.; Lu, S.; Xing, E.P. 3D semantic segmentation in the wild: Learning generalized models for adverse-condition point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 9382–9392.
14. Teufel, S.; Volk, G.; Bernuth, A.V.; Bringmann, O. Simulating realistic rain, snow, and fog variations for comprehensive performance characterization of LiDAR perception. In Proceedings of the IEEE 95th Vehicular Technology Conference (VTC2022-Spring), Helsinki, Finland, 19–22 June 2022; pp. 1–7.
15. Xie, Y.; Tian, J.; Zhu, X.X. Linking points with labels in 3D: A review of point cloud semantic segmentation. IEEE Geosci. Remote Sens. Mag. 2020, 8, 38–59.
16. Sarker, S.; Rana, M.M.; Hasan, M.N. A comprehensive overview of deep learning techniques for 3D point cloud classification and semantic segmentation. Mach. Learn. Appl. 2022, 9, 100362.
17. Hegde, S.; Gangisetty, S. PIG-Net: Inception based deep learning architecture for 3D point cloud segmentation. IEEE Access 2021, 9, 107771–107781.
18. Hahner, M.; Sakaridis, C.; Dai, D.; Van Gool, L. Fog simulation on real LiDAR point clouds for 3D object detection in adverse weather. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 15283–15292.
19. Lee, J.; Kim, S.; Park, H.; Lee, Y. GAN-based LiDAR translation between sunny and adverse weather conditions for autonomous driving. Sensors 2022, 22, 5287.
20. Kim, I.I.; McArthur, B.; Korevaar, E.J. Comparison of laser beam propagation at 785 nm and 1550 nm in fog and haze for optical wireless communications. Proc. SPIE 2001, 4214, 26–37.
21. Rasshofer, R.H.; Spies, M.; Spies, H. Influences of weather phenomena on automotive laser radar systems. Adv. Radio Sci. 2011, 9, 49–60.
22. Park, J.; Jang, J.; Kim, Y.; Jo, K. No thing, nothing: Highlighting safety-critical classes for robust LiDAR semantic segmentation in adverse weather. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 12345–12354.
23. Yang, H.; Wang, S.; Liu, Z.; Chen, C. Rethinking range-view LiDAR segmentation in adverse weather. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 5432–5447.
24. Chae, H.; Lee, C.; Park, S.; Kim, J. Towards robust 3D object detection with LiDAR and 4D radar. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 8765–8774.
25. Carballo, A.; Lambert, J.; Monrroy, A.; Wong, D.; Narksri, P.; Kitsukawa, Y.; Takeuchi, A.; Kato, S.; Takeda, K. LIBRE: The multiple 3D LiDAR dataset. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1094–1101.
26. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30, 5099–5108.
27. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 652–660.
28. Miao, Y.; Zhang, L.; Wang, Q.; Chen, H. An efficient point cloud semantic segmentation network based on multiscale super-patch transformer. Sci. Rep. 2024, 14, 4567.
29. Chen, Z.; Xu, W.; Zhang, X.; Liu, Y. PointDC: Unsupervised semantic segmentation of 3D point clouds via cross-modal distillation and super-voxel clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; pp. 8432–8441.
30. Thomas, H.; Qi, C.R.; Deschaud, J.-E.; Marcotegui, B.; Goulette, F.; Guibas, L.J. KPConv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6411–6420.
31. Yan; Xu, C.; Cui, Z.; Zong, Y.; Yang, J. STPC: Spatially transformed point convolution for 3D point cloud learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
32. Wu, B.; Wan, A.; Yue, X.; Keutzer, K. SqueezeSeg: Convolutional neural networks with recurrent CRF for real-time road-object segmentation from 3D LiDAR point cloud. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1887–1893.
33. Zhang, Y.; Zhou, Z.; David, P.; Yue, X.; Xi, Z.; Gong, B.; Foroosh, H. PolarNet: An improved grid representation for online LiDAR point clouds semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9601–9610.
34. Xu, C.; Wu, B.; Wang, Z.; Zhan, W.; Vajda, P.; Keutzer, K.; Tomizuka, M. Spatially adaptive convolution for efficient 3D point cloud segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
35. Wang, L.; Huang, Y.; Hou, Y.; Zhang, S.; Shan, J. Graph attention convolution for point cloud semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
36. Wang, L.; Xu, C.; Zhao, Y.; Zhang, M. UniPre3D: Unified pre-training of 3D point cloud models with cross-modal fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 15678–15687.
37. Lin, H.; Chen, S.; Wang, M.; Liu, J. HiLoTs: High-low temporal sensitive representation learning for semi-supervised LiDAR segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 18234–18243.
38. Choy, C.; Gwak, J.; Savarese, S. 4D spatio-temporal ConvNets: Minkowski convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3075–3084.
39. Liu, Z.; Tang, H.; Zhao, S.; Shao, K.; Han, S. Point-voxel neural architecture search for 3D deep learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021.
40. Zhu, X.; Zhou, H.; Wang, T.; Hong, F.; Ma, Y.; Li, W.; Li, H.; Lin, D. Cylindrical and asymmetrical 3D convolution networks for LiDAR segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 9939–9948.
41. Li, Y.; Bu, R.; Sun, M.; Wu, W.; Di, X.; Chen, B. PointCNN: Convolution on X-transformed points. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada, 3–8 December 2018; pp. 820–830.
42. Wu, W.; Qi, Z.; Fuxin, L. PointConv: Deep convolutional networks on 3D point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 9621–9630.
43. Eldar, Y.; Lindenbaum, M.; Porat, M.; Zeevi, Y.Y. The farthest point strategy for progressive image sampling. IEEE Trans. Image Process. 1997, 6, 1305–1315.
44. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
45. Zhao, X.; Wen, C.; Wang, Y.; Bai, H.; Dou, W. TripleMixer: A 3D point cloud denoising model for adverse weather. arXiv 2024, arXiv:2408.13802.
46. Raisuddin, A.M.; Cortinhal, T.; Holmblad, J.; Aksoy, E.E. Learning to denoise raw mobile LiDAR data for robust localization. In Proceedings of the 2024 IEEE Intelligent Vehicles Symposium (IV), Jeju Island, Republic of Korea, 2–5 June 2024; pp. 2862–2868.
47. Charron, N.; Phillips, S.; Waslander, S.L. De-noising of LiDAR point clouds corrupted by snowfall. In Proceedings of the 2018 15th Conference on Computer and Robot Vision (CRV), Toronto, ON, Canada, 8–10 May 2018; pp. 254–261.
48. Cortinhal, T.; Tzelepis, G.; Aksoy, E.E. SalsaNext: Fast, uncertainty-aware semantic segmentation of LiDAR point clouds. In Advances in Visual Computing; Springer: Berlin/Heidelberg, Germany, 2020; pp. 207–222.
49. Heinzler, R.; Piewak, F.; Schindler, P.; Stork, W. CNN-based LiDAR point cloud de-noising in adverse weather. IEEE Robot. Autom. Lett. 2020, 5, 2514–2521.
50. Seppänen, A.; Ojala, R.; Tammi, K. 4DenoiseNet: Adverse weather denoising from adjacent point clouds. IEEE Robot. Autom. Lett. 2023, 8, 456–463.
Denoising performance comparison under snow, rain, and fog scenarios (P = precision, R = recall; all values in %).

| Method | P (Snow) | R (Snow) | F1 (Snow) | mIoU (Snow) | P (Rain) | R (Rain) | F1 (Rain) | mIoU (Rain) | P (Fog) | R (Fog) | F1 (Fog) | mIoU (Fog) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DROR [47] | 40.21 | 95.24 | 56.55 | 39.42 | 53.72 | 97.52 | 69.28 | 53.00 | 41.25 | 92.55 | 57.06 | 39.92 |
| SalsaNext [48] | 94.02 | 92.78 | 93.39 | 87.61 | 95.88 | 89.95 | 92.82 | 86.60 | 94.39 | 88.45 | 91.32 | 84.82 |
| 4DenoiseNet [50] | 90.15 | 93.48 | 91.78 | 84.82 | 89.98 | 92.54 | 91.78 | 84.82 | 88.78 | 93.01 | 90.84 | 83.23 |
| WeatherNet [49] | 86.22 | 94.21 | 90.04 | 81.88 | 79.87 | 94.44 | 86.55 | 76.28 | 76.88 | 89.45 | 82.69 | 70.49 |
| SCOPE (Ours) | 94.44 | 93.54 | 93.99 | 88.66 | 96.51 | 95.52 | 96.01 | 92.33 | 95.26 | 92.87 | 94.05 | 88.77 |
Ablation study (F1 and mIoU in %); each ✓ marks an enabled configuration.

|  |  |  | F1 (Snow) | mIoU (Snow) | F1 (Rain) | mIoU (Rain) | F1 (Fog) | mIoU (Fog) |
|---|---|---|---|---|---|---|---|---|
| ✓ |  |  | 92.05 | 86.09 | 95.29 | 90.70 | 92.66 | 86.90 |
| ✓ | ✓ |  | 93.61 | 88.42 | 95.88 | 92.02 | 93.83 | 88.07 |
| ✓ | ✓ | ✓ | 93.99 | 88.66 | 96.01 | 92.33 | 94.05 | 88.77 |
Ablation on the value of K (F1 and mIoU in %).

| K | F1 (Snow) | mIoU (Snow) | F1 (Rain) | mIoU (Rain) | F1 (Fog) | mIoU (Fog) |
|---|---|---|---|---|---|---|
| 4 | 91.09 | 85.78 | 91.11 | 89.70 | 90.79 | 82.80 |
| 8 | 92.55 | 86.75 | 94.67 | 90.48 | 93.41 | 85.18 |
| 16 | 93.99 | 88.66 | 96.01 | 92.33 | 94.05 | 88.77 |
| 32 | 93.85 | 88.71 | 96.00 | 92.30 | 94.06 | 88.62 |