REHEARSE-3D: A Multi-Modal Emulated Rain Dataset for 3D Point Cloud De-Raining
Abstract
1. Introduction
- We introduce REHEARSE-3D, a new large-scale multi-modal emulated rain dataset with 9.2 billion annotated points, logged under various rain intensities in both daytime and nighttime conditions in a controlled weather environment.
- Leveraging REHEARSE-3D, we benchmark various state-of-the-art denoising algorithms to de-rain the early-fused LiDAR and RADAR point clouds.
- We use the point cloud from clean weather and statistically generate synthetic raindrops to study the emulated-to-simulated domain gap.
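The paper's statistical raindrop generator is not detailed in this outline, but the basic idea of injecting synthetic raindrops into a clean-weather cloud can be sketched as follows. The sampling ranges, fixed intensity value, and function name below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def add_synthetic_raindrops(points, n_drops=500, r_range=(1.0, 20.0),
                            intensity=0.02, rng=None):
    """Append uniformly sampled 'raindrop' points (x, y, z, intensity) to a
    clean-weather cloud. A naive placeholder, not the paper's rain model."""
    rng = np.random.default_rng(rng)
    r = rng.uniform(*r_range, n_drops)                # range from sensor
    azim = rng.uniform(-np.pi, np.pi, n_drops)        # full azimuth sweep
    elev = rng.uniform(-0.3, 0.1, n_drops)            # mostly near the horizon
    drops = np.stack([r * np.cos(elev) * np.cos(azim),
                      r * np.cos(elev) * np.sin(azim),
                      r * np.sin(elev),
                      np.full(n_drops, intensity)], axis=1)
    return np.vstack([points, drops])
```

A generator like this produces the "simulated" side of the emulated-to-simulated comparison: the injected points carry a raindrop label, while all original points keep their clean-weather labels.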
2. Related Work
3. The REHEARSE-3D Dataset
3.1. Background
3.2. Data Annotation
- The unlabeled raw data are extracted from the dataset and stored in (x, y, z, intensity) format, resulting in 143 sequences of roughly 300 dense LiDAR point clouds each.
- We then estimate the road plane using the RANSAC algorithm and keep it for later usage.
- We create bounding boxes for the rain sprinklers and label them accordingly. Points that are below the estimated road plane are omitted.
- Next, we create bounding boxes for the other objects on the road surface (e.g., pedestrians, bicyclists, cars, target objects) and label them accordingly.
- A 2D polygon bounding box for the road is then created.
- Any points above the road plane, within the 2D polygon bounding box, that do not belong to the previously labeled classes are then labeled as raindrops.
- The remaining points on the 2D polygon are further labeled as a part of the road.
- All the remaining points are labeled as background.
- Since the object bounding boxes may vary slightly from sequence to sequence due to noise in the sensor readings, we manually correct the bounding boxes for each sequence and repeat the process from steps 2 to 8.
- The corresponding sparse 4D RADAR point clouds are then annotated automatically by transferring labels from the k = 5 nearest LiDAR points.
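The road-plane estimation and RADAR label-transfer steps above can be sketched as a minimal NumPy/SciPy illustration. It assumes point clouds are (N, 3) arrays; the function names, iteration count, and distance threshold are ours, not the dataset toolkit's:

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_road_plane_ransac(points, n_iters=200, dist_thresh=0.05, rng=None):
    """RANSAC fit of a plane n.p + d = 0 to the road; returns (normal, d)."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        # Sample three points and form the candidate plane normal.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (near-collinear) sample
        normal /= norm
        d = -normal @ p1
        # Count points within the inlier distance band.
        inliers = int(np.sum(np.abs(points @ normal + d) < dist_thresh))
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model

def transfer_labels_knn(lidar_xyz, lidar_labels, radar_xyz, k=5):
    """Label each RADAR point by majority vote over its k nearest LiDAR points."""
    tree = cKDTree(lidar_xyz)
    _, idx = tree.query(radar_xyz, k=k)          # idx: (N_radar, k)
    neighbor_labels = lidar_labels[idx]
    return np.array([np.bincount(row).argmax() for row in neighbor_labels])
```

In this sketch, points with a signed distance `points @ normal + d` below zero would be the "below the road plane" points the annotation pipeline discards.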
3.3. Data Statistics
3.4. Data Illustration
3.5. Rain Simulation
Algorithm 1. Pre-processing to retrieve unreturned LiDAR beams. Inputs: the 3D Cartesian coordinates, intensity values, calibrated elevation and azimuth angles, and the maximum sensor range.
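Algorithm 1 itself is not reproduced in this outline, but its underlying idea — marking every calibrated (elevation, azimuth) beam that produced no return and re-inserting it as a point at the maximum sensor range — might be sketched as follows (array and function names are ours):

```python
import numpy as np

def recover_unreturned_beams(returned_elev_idx, returned_azim_idx,
                             elev_angles, azim_angles, r_max):
    """For every beam in the calibrated (elevation, azimuth) grid that has no
    return, synthesize a placeholder point at the maximum sensor range."""
    # Boolean occupancy grid over all calibrated beam directions.
    hit = np.zeros((len(elev_angles), len(azim_angles)), dtype=bool)
    hit[returned_elev_idx, returned_azim_idx] = True
    miss_e, miss_a = np.nonzero(~hit)
    elev = elev_angles[miss_e]
    azim = azim_angles[miss_a]
    # Spherical -> Cartesian at the maximum range (no-return placeholder).
    x = r_max * np.cos(elev) * np.cos(azim)
    y = r_max * np.cos(elev) * np.sin(azim)
    z = r_max * np.sin(elev)
    return np.stack([x, y, z], axis=1)
```

This kind of pre-processing matters for rain simulation because beams absorbed or scattered by raindrops leave no return at all, and a simulator needs to know which directions those were.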
4. Benchmark on 3D Point Cloud De-Raining
4.1. Benchmark Setup
4.2. Benchmark Results
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Ahmed, M.M.; Ghasemzadeh, A. The impacts of heavy rain on speed and headway behaviors: An investigation using the SHRP2 naturalistic driving study data. Transp. Res. Part C Emerg. Technol. 2018, 91, 371–384.
- Zhao, X.; Wen, C.; Wang, Y.; Bai, H.; Dou, W. TripleMixer: A 3D Point Cloud Denoising Model for Adverse Weather. arXiv 2024, arXiv:2408.13802.
- Raisuddin, A.M.; Cortinhal, T.; Holmblad, J.; Aksoy, E.E. 3D-OutDet: A Fast and Memory Efficient Outlier Detector for 3D LiDAR Point Clouds in Adverse Weather. In 2024 IEEE Intelligent Vehicles Symposium (IV); IEEE: Piscataway, NJ, USA, 2024; pp. 2862–2868.
- Piroli, A.; Dallabetta, V.; Kopp, J.; Walessa, M.; Meissner, D.; Dietmayer, K. Energy-based Detection of Adverse Weather Effects in LiDAR Data. IEEE Robot. Autom. Lett. 2023, 8, 4322–4329.
- Heinzler, R.; Piewak, F.; Schindler, P.; Stork, W. CNN-based LiDAR point cloud de-noising in adverse weather. IEEE Robot. Autom. Lett. 2020, 5, 2514–2521.
- Zhang, C.; Huang, Z.; Guo, H.; Qin, L.; Ang, M.H.; Rus, D. Smart-Rain: A degradation evaluation dataset for autonomous driving in rain. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); IEEE: Piscataway, NJ, USA, 2023; pp. 9691–9698.
- Kurup, A.; Bos, J. DSOR: A scalable statistical filter for removing falling snow from LiDAR point clouds in severe winter weather. arXiv 2021, arXiv:2109.07078.
- Behley, J.; Garbade, M.; Milioto, A.; Quenzel, J.; Behnke, S.; Gall, J.; Stachniss, C. Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: The SemanticKITTI Dataset. Int. J. Robot. Res. 2021, 40, 959–967.
- Seppanen, A.; Ojala, R.; Tammi, K. 4DenoiseNet: Adverse Weather Denoising from Adjacent Point Clouds. IEEE Robot. Autom. Lett. 2022, 8, 456–463.
- Poledna, Y.; Drechsler, M.F.; Donzella, V.; Chan, P.H.; Duthon, P.; Huber, W. REHEARSE: AdveRse wEatHEr datAset for sensoRy noiSe modEls. In 2024 IEEE Intelligent Vehicles Symposium (IV); IEEE: Piscataway, NJ, USA, 2024; pp. 2451–2457.
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237.
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2020; pp. 11621–11631.
- Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B.; et al. Scalability in perception for autonomous driving: Waymo Open Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2020; pp. 2446–2454.
- Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2020; pp. 2636–2645.
- Armanious, K.; Quach, M.; Ulrich, M.; Winterling, T.; Friesen, J.; Braun, S.; Jenet, D.; Feldman, Y.; Kosman, E.; Rapp, P.; et al. Bosch Street Dataset: A Multi-Modal Dataset with Imaging Radar for Automated Driving. arXiv 2024, arXiv:2407.12803.
- Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177.
- Li, H.; Li, Y.; Wang, H.; Zeng, J.; Cai, P.; Xu, H.; Lin, D.; Yan, J.; Xu, F.; Xiong, L.; et al. Open-sourced data ecosystem in autonomous driving: The present and future. arXiv 2023, arXiv:2312.03408.
- Xiao, A.; Huang, J.; Xuan, W.; Ren, R.; Liu, K.; Guan, D.; El Saddik, A.; Lu, S.; Xing, E.P. 3D semantic segmentation in the wild: Learning generalized models for adverse-condition point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2023; pp. 9382–9392.
- Gadd, M.; De Martini, D.; Bartlett, O.; Murcutt, P.; Towlson, M.; Widojo, M.; Muşat, V.; Robinson, L.; Panagiotaki, E.; Pramatarov, G.; et al. OORD: The Oxford Offroad Radar Dataset. arXiv 2024, arXiv:2403.02845.
- Pitropov, M.; Garcia, D.E.; Rebello, J.; Smart, M.; Wang, C.; Czarnecki, K.; Waslander, S. Canadian adverse driving conditions dataset. Int. J. Robot. Res. 2021, 40, 681–690.
- Kent, D.; Alyaqoub, M.; Lu, X.; Khatounabadi, H.; Sung, K.; Scheller, C.; Dalat, A.; bin Thabit, A.; Whitley, R.; Radha, H. MSU-4S: The Michigan State University Four Seasons Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2024; pp. 22658–22667.
- Fent, F.; Kuttenreich, F.; Ruch, F.; Rizwin, F.; Juergens, S.; Lechermann, L.; Nissler, C.; Perl, A.; Voll, U.; Yan, M.; et al. MAN TruckScenes: A multimodal dataset for autonomous trucking in diverse conditions. arXiv 2024, arXiv:2407.07462.
- Brödermann, T.; Bruggemann, D.; Sakaridis, C.; Ta, K.; Liagouris, O.; Corkill, J.; Van Gool, L. MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty. arXiv 2024, arXiv:2401.12761.
- Kong, L.; Liu, Y.; Li, X.; Chen, R.; Zhang, W.; Ren, J.; Pan, L.; Chen, K.; Liu, Z. Robo3D: Towards robust and reliable 3D perception against corruptions. In Proceedings of the IEEE/CVF International Conference on Computer Vision; IEEE: Piscataway, NJ, USA, 2023; pp. 19994–20006.
- Beemelmanns, T.; Zhang, Q.; Geller, C.; Eckstein, L. MultiCorrupt: A multi-modal robustness dataset and benchmark of LiDAR-camera fusion for 3D object detection. In 2024 IEEE Intelligent Vehicles Symposium (IV); IEEE: Piscataway, NJ, USA, 2024; pp. 3255–3261.
- Kong, L.; Xie, S.; Hu, H.; Niu, Y.; Ooi, W.T.; Cottereau, B.R.; Ng, L.X.; Ma, Y.; Zhang, W.; Pan, L.; et al. The RoboDrive challenge: Drive anytime anywhere in any condition. arXiv 2024, arXiv:2405.08816.
- Neto, L.N.; Reway, F.; Poledna, Y.; Drechsler, M.F.; Ribeiro, E.P.; Huber, W.; Icking, C. TWICE Dataset: Digital Twin of Test Scenarios in a Controlled Environment. arXiv 2023, arXiv:2310.03895.
- Linnhoff, C.; Elster, L.; Rosenberger, P.; Winner, H. Road Spray in Lidar and Radar Data for Individual Moving Objects. 2022. Available online: https://tudatalib.ulb.tu-darmstadt.de/items/2894756f-1316-4a4e-a25e-92db5f0d5b8e (accessed on 20 January 2026).
- Espineira, J.P.; Robinson, J.; Groenewald, J.; Chan, P.H.; Donzella, V. Realistic LiDAR With Noise Model for Real-Time Testing of Automated Vehicles in a Virtual Environment. IEEE Sens. J. 2021, 21, 9919–9926.
- Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In 2011 IEEE International Conference on Robotics and Automation; IEEE: Piscataway, NJ, USA, 2011; pp. 1–4.
- Charron, N.; Phillips, S.; Waslander, S.L. De-noising of LiDAR point clouds corrupted by snowfall. In 2018 15th Conference on Computer and Robot Vision (CRV); IEEE: Piscataway, NJ, USA, 2018; pp. 254–261.
- Cortinhal, T.; Tzelepis, G.; Erdal Aksoy, E. SalsaNext: Fast, uncertainty-aware semantic segmentation of LiDAR point clouds. In International Symposium on Visual Computing; Springer: Cham, Switzerland, 2020; pp. 207–222.
- Yu, M.Y.; Vasudevan, R.; Johnson-Roberson, M. LiSnowNet: Real-time Snow Removal for LiDAR Point Clouds. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); IEEE: Piscataway, NJ, USA, 2022; pp. 6820–6826.
- Berman, M.; Triki, A.R.; Blaschko, M.B. The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2018; pp. 4413–4421.








| Dataset | #Points (M) | #Classes | Modality | Annotation | Sequential | Weather | Rain Characteristics | Day/Night | Environment |
|---|---|---|---|---|---|---|---|---|---|
| SemanticKITTI [8] | 4549 | 28 | LiDAR-64 | Point-wise | ✓ | Clean | - | - | Real |
| SnowyKITTI [9] | 3940 | 2 | LiDAR-64 | Point-wise | ✓ | Snow | - | - | Simulated |
| WADS [7] | 387 | 22 | LiDAR-64 | Point-wise | ✓ | Snow | - | - | Real |
| SemanticSpray [4] | 526 | 3 | LiDAR-32 | Point-wise | ✓ | Rain | - | - | Real |
| WeatherNet [5] | 1700 | 3 | LiDAR-32 | Point-wise | ✓ | Rain/Fog | ✓ | - | Real |
| REHEARSE-3D (Ours) | 9200 | 8 | LiDAR-256 ★ and 4D RADAR | Point-wise | ✓ | Rain/Clean | ✓ | ✓ | Real |
| Model | Type | Precision ↑ (Val) | Recall ↑ (Val) | F1 ↑ (Val) | IoU ↑ (Val) | Precision ↑ (Test) | Recall ↑ (Test) | F1 ↑ (Test) | IoU ↑ (Test) | Time (ms) ↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| 3D-OutDet [3] | Deep Learning—S | 96.35 | 98.48 | 97.40 | 94.94 | 97.23 | 96.61 | 96.92 | 94.03 | 82 |
| SalsaNext [32] | Deep Learning—S | 95.17 | 99.41 | 97.24 | 94.61 | 95.81 | 99.07 | 97.41 | 94.92 | 97 |
| LiSnowNet-L1 [33] | Deep Learning—US | 14.42 | 31.94 | 19.55 | 12.44 | 21.22 | 37.70 | 26.07 | 16.86 | 131 |
| DSOR [7] | Statistical—US | 15.50 | 38.18 | 22.05 | 12.39 | 25.73 | 49.33 | 33.82 | 20.35 | 253 |
| DROR [31] | Statistical—US | 5.88 | 76.32 | 10.91 | 5.77 | 10.78 | 73.48 | 18.81 | 10.38 | 199 |
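The point-wise scores in the benchmark tables follow the standard binary definitions over the positive (raindrop) class. A minimal sketch of their computation from predictions and ground truth (the function name is ours):

```python
import numpy as np

def deraining_metrics(pred, gt):
    """Precision, recall, F1 and IoU for the positive (raindrop) class,
    given boolean per-point predictions and ground-truth labels."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    tp = np.sum(pred & gt)       # raindrops correctly flagged
    fp = np.sum(pred & ~gt)      # clean points wrongly flagged
    fn = np.sum(~pred & gt)      # raindrops missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou
```

Note that IoU for a binary class equals TP / (TP + FP + FN), which is why it is always the strictest of the four scores in the tables.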
| Model | Rain Density | Precision ↑ (WMG Sim.) | Recall ↑ (WMG Sim.) | F1 ↑ (WMG Sim.) | IoU ↑ (WMG Sim.) | Precision ↑ (REHEARSE-3D) | Recall ↑ (REHEARSE-3D) | F1 ↑ (REHEARSE-3D) | IoU ↑ (REHEARSE-3D) |
|---|---|---|---|---|---|---|---|---|---|
| 3D-OutDet | Heavy | 99.99 | 99.99 | 99.99 | 99.99 | 98.30 | 96.43 | 97.35 | 94.85 |
| 3D-OutDet | Medium | 99.99 | 99.99 | 99.99 | 99.99 | 97.88 | 95.89 | 96.87 | 93.93 |
| 3D-OutDet | Light | 99.99 | 99.99 | 99.99 | 99.99 | 98.08 | 98.67 | 98.38 | 96.81 |
| DSOR | Heavy | 78.58 | 99.59 | 87.85 | 78.33 | 27.08 | 39.46 | 32.12 | 19.13 |
| DSOR | Medium | 69.00 | 99.81 | 91.59 | 68.91 | 38.24 | 56.27 | 45.54 | 29.48 |
| DSOR | Light | 55.03 | 94.34 | 70.97 | 55.00 | 23.35 | 35.95 | 28.31 | 16.49 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Raisuddin, A.M.; Holmblad, J.; Haghighi, H.; Poledna, Y.; Drechsler, M.F.; Donzella, V.; Aksoy, E.E. REHEARSE-3D: A Multi-Modal Emulated Rain Dataset for 3D Point Cloud De-Raining. Sensors 2026, 26, 728. https://doi.org/10.3390/s26020728

