Article

RaSS: 4D mm-Wave Radar Point Cloud Semantic Segmentation with Cross-Modal Knowledge Distillation

1
College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
2
Zhejiang Provincial Key Laboratory of Multi-Modal Communication Networks and Intelligent Information Processing, Zhejiang University, Hangzhou 310027, China
3
ChinaNorth Artificial Intelligence & Innovation Research Institute, Beijing 100072, China
*
Author to whom correspondence should be addressed.
Sensors 2025, 25(17), 5345; https://doi.org/10.3390/s25175345
Submission received: 21 July 2025 / Revised: 15 August 2025 / Accepted: 27 August 2025 / Published: 28 August 2025
(This article belongs to the Special Issue AI-Driven Sensor Technologies for Next-Generation Electric Vehicles)

Abstract

Environmental perception is an essential task for autonomous driving and is typically based on LiDAR or camera sensors. In recent years, 4D mm-Wave radar, which acquires a 3D point cloud together with point-wise Doppler velocities, has drawn substantial attention owing to its robust performance under adverse weather conditions. Nonetheless, due to the high sparsity and substantial noise inherent in radar measurements, most radar perception studies are limited to object-level tasks, with point-level tasks such as semantic segmentation remaining largely underexplored. This paper explores the feasibility of semantic segmentation with 4D radar. We build the ZJUSSet dataset, which contains accurate point-wise class labels for both radar and LiDAR. We then propose RaSS, a cross-modal knowledge distillation framework, to accomplish the task. An adaptive Doppler compensation module is also designed to facilitate the segmentation. Experimental results on the ZJUSSet and VoD datasets demonstrate that our RaSS model significantly outperforms the baselines and competitors. Code and dataset will be made available upon paper acceptance.
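The abstract describes distilling knowledge from a LiDAR-based teacher into a radar-based student for point-wise segmentation. The paper's exact loss formulation is not given on this page; the sketch below shows only the standard temperature-scaled distillation loss commonly used in such cross-modal frameworks, applied point-wise to per-point class logits. All function and variable names here are illustrative, not taken from the RaSS implementation.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last (class) axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Point-wise KL(teacher || student) with temperature T, averaged
    over all points; the classic Hinton-style distillation loss.
    Both inputs have shape (num_points, num_classes)."""
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    kl = (p_t * (np.log(p_t + 1e-12) - log_p_s)).sum(axis=-1)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures
    return float(T * T * kl.mean())

# Toy example: 4 radar points, 3 semantic classes.
rng = np.random.default_rng(0)
student = rng.normal(size=(4, 3))   # radar-branch logits (student)
teacher = rng.normal(size=(4, 3))   # LiDAR-branch logits (teacher)
loss = kd_loss(student, teacher)
```

In a cross-modal setting like the one the abstract outlines, the teacher logits would come from a segmentation network trained on LiDAR points and transferred to the spatially corresponding radar points, with this term added to the student's ordinary supervised segmentation loss.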
Keywords: radar; semantic segmentation; knowledge distillation

Share and Cite

MDPI and ACS Style

Zhang, C.; Xiang, Z.; Xu, R.; Shan, H.; Zhao, X.; Dang, R. RaSS: 4D mm-Wave Radar Point Cloud Semantic Segmentation with Cross-Modal Knowledge Distillation. Sensors 2025, 25, 5345. https://doi.org/10.3390/s25175345

AMA Style

Zhang C, Xiang Z, Xu R, Shan H, Zhao X, Dang R. RaSS: 4D mm-Wave Radar Point Cloud Semantic Segmentation with Cross-Modal Knowledge Distillation. Sensors. 2025; 25(17):5345. https://doi.org/10.3390/s25175345

Chicago/Turabian Style

Zhang, Chenwei, Zhiyu Xiang, Ruoyu Xu, Hangguan Shan, Xijun Zhao, and Ruina Dang. 2025. "RaSS: 4D mm-Wave Radar Point Cloud Semantic Segmentation with Cross-Modal Knowledge Distillation" Sensors 25, no. 17: 5345. https://doi.org/10.3390/s25175345

APA Style

Zhang, C., Xiang, Z., Xu, R., Shan, H., Zhao, X., & Dang, R. (2025). RaSS: 4D mm-Wave Radar Point Cloud Semantic Segmentation with Cross-Modal Knowledge Distillation. Sensors, 25(17), 5345. https://doi.org/10.3390/s25175345

