Article

KuRALS: Ku-Band Radar Datasets for Multi-Scene Long-Range Surveillance with Baselines and Loss Design

1 Intelligent Science and Technology Academy, China Aerospace Science and Industry Corporation, Beijing 100043, China
2 Shenzhen International Graduate School, Tsinghua University, Shenzhen 518000, China
3 Alibaba (China) Network Technology Co., Ltd., Hangzhou 310000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(1), 173; https://doi.org/10.3390/rs18010173
Submission received: 14 September 2025 / Revised: 29 December 2025 / Accepted: 29 December 2025 / Published: 5 January 2026

Highlights

What are the main findings?
  • KuRALS is a radar surveillance dataset covering aerial, land and maritime scenarios, with two subsets (KuRALS-CW and KuRALS-PD) collected by long-range Ku-band radars.
  • A lightweight baseline model (KuRALS-Net) and novel noisy-background suppression loss functions significantly improve radar semantic segmentation performance under interference.
What are the implications of the main findings?
  • KuRALS enables investigation and fair comparison of deep learning algorithms for long-range radar surveillance, facilitating progress beyond autonomous driving applications.
  • The proposed methods enhance the robustness and reliability of radar-based monitoring across diverse real-world environments.

Abstract

Compared to cameras and LiDAR, radar provides superior robustness under adverse conditions, as well as extended sensing range and inherent velocity measurement, making it critical for surveillance applications. To advance research in deep learning-based radar perception, several radar datasets have been publicly released. However, most of these datasets are designed for autonomous driving, and existing radar surveillance datasets suffer from limited scene and target diversity. To address this gap, we introduce KuRALS, a range–Doppler (RD)-level radar surveillance dataset designed for learning-based long-range detection of moving targets. The dataset covers aerial (unmanned aerial vehicles), land (pedestrians and cars) and maritime (boats) scenarios. KuRALS comprises real measurements from two Kurz-under (Ku) band radars and contains two subsets (KuRALS-CW and KuRALS-PD). It consists of RD spectrograms with pixel-wise annotations of category, velocity and range coordinates; azimuth and elevation angles are also provided. To benchmark performance, we develop a lightweight radar semantic segmentation (RSS) baseline model and further investigate various perception modules within this framework. In addition, we propose a novel interference-suppression loss function to enhance robustness against background interference. Extensive experimental results demonstrate that our proposed solution significantly outperforms existing approaches, with improvements of 10.0% in mIoU on the KuRALS-CW dataset and 9.4% on the KuRALS-PD dataset.

1. Introduction

As an active detection device, radar demonstrates greater robustness compared to cameras and LiDAR in extreme weather conditions (such as rain, snow and fog) and challenging lighting environments (e.g., excessively bright or low light). In addition, it inherently offers long-range detection capabilities and provides both the location and radial velocity of targets. Due to these advantages, radar has been extensively used in systems such as Advanced Driver Assistance Systems (ADASs) and surveillance systems.
With advances in deep learning technologies, radar-based target perception has gained increasing attention in autonomous driving [1,2,3,4,5,6], but its development in the radar surveillance field [7,8,9,10] has been much slower. This gap can be primarily attributed to two factors: (i) Deep learning algorithms rely on large-scale datasets for data-driven optimization. Yet, due to confidentiality concerns, publicly available radar surveillance datasets remain scarce. Moreover, existing datasets [11,12,13,14] suffer from limited scene and target diversity and a narrow detection range, which restrict their applicability in complex monitoring scenarios. (ii) Radar surveillance tasks differ substantially from autonomous driving, making detection algorithms trained on autonomous driving datasets [1,15,16,17] unsuitable for direct deployment in surveillance applications. To address these issues, it is essential to collect long-range surveillance radar datasets that cover various monitoring scenarios, along with the development of effective target surveillance algorithms.
To this end, we construct KuRALS, a radar surveillance dataset collected with commonly deployed Kurz-under (Ku) band (12–18 GHz) surveillance radars. To ensure broad applicability, the dataset covers aerial, land and sea scenarios across both urban and rural environments. The observed targets include typical objects of interest in these settings, such as unmanned aerial vehicles (UAVs), pedestrians and cars on the ground and boats on the water, as illustrated in Figure 1a. Since estimating target distance, velocity and category is far more challenging than directly deriving azimuth and elevation from radar directional beam angles, KuRALS provides 2D range–Doppler (RD) spectrograms, where range and Doppler dimensions correspond to radial distance and velocity, respectively. These spectrograms are processed from raw radar echoes and serve as the input to target detection algorithms for distance, velocity and category estimation. To monitor the targets across a wide range of distances and velocities, we employ long-range radars (≤6.4 km) and test two widely used operating modes: continuous-wave (CW) and pulse-Doppler (PD). Specifically, the CW radar eliminates range blind zones and provides higher velocity resolution, making it suitable for continuous tracking and motion analysis of targets. In contrast, the PD radar offers a higher signal-to-noise ratio (SNR) and stronger target detection capability, which is advantageous in cluttered or long-distance scenes. Since both CW and PD radars are widely used in real-world surveillance systems [11,12,13,14], including both modalities enables us to evaluate model generalization under diverse sensing conditions and to support potential studies on cross-domain transfer. For detection algorithm optimization and validation, KuRALS provides high-quality annotations obtained through automated labeling followed by algorithmic correction. The aforementioned process pipeline is shown in Figure 1b.
Building on the proposed dataset, we further investigate effective detection algorithms for radar surveillance. Conventional solutions typically apply the constant false alarm rate (CFAR) algorithm [18] to RD spectrograms for estimating target distance and velocity. However, such methods suffer from critical issues, including heavy reliance on handcrafted designs, high false alarm rates and difficulty in identifying target categories. To address these issues, we advocate data-driven deep learning-based solutions that extract target information from RD data. Specifically, we first establish a radar semantic segmentation (RSS) baseline model that takes RD spectrograms as input and outputs pixel-level segmentation results, enabling precise distance, velocity and category estimation. Building upon this baseline, we further explore radar detection algorithms from multiple perspectives, including model architecture, loss function design and comprehensive performance evaluation, ultimately providing effective solutions for radar-based target surveillance.
Our main contributions are summarized as follows:
  • We introduce KuRALS, an RD-level multi-scene and multi-target radar surveillance dataset. The dataset is collected using multiple Ku-band radars operating in CW or PD mode, thereby consisting of two subsets, KuRALS-CW and KuRALS-PD. It covers diverse surveillance scenarios including aerial UAV monitoring, ground-based pedestrian and car observation, and maritime ship detection. Additionally, it provides high-quality annotations with target 3D coordinates, velocity and category information, enabling research across various long-range surveillance applications.
  • We develop KuRALS-Net, a lightweight RSS baseline model that achieves a superior balance between accuracy and parameter efficiency compared with both classical image segmentation networks and advanced RSS models.
  • We propose two novel noisy-background suppression (NBS) loss functions, which substantially enhance the robustness of RSS models under strong background interference, and conduct an in-depth analysis of their performance.
  • We perform comprehensive evaluations of various segmentation models, perception modules and loss functions on KuRALS, establishing a replicable benchmark that facilitates fair comparison and accelerates research in long-range radar-based surveillance.
The rest of this paper is structured as follows. Section 2 reviews related works on radar datasets and radar semantic segmentation. Section 3 details the proposed KuRALS dataset, including the data processing pipeline and dataset distribution. Section 4 introduces the proposed target segmentation algorithms, comprising a baseline model and novel loss functions. Section 5 reports experimental results and analyses that demonstrate the effectiveness of our approach. Finally, Section 6 concludes this paper and outlines potential directions for future research.

2. Related Work

2.1. Radar Datasets

Large-scale radar datasets play a vital role in advancing deep learning-based radar perception [15,19,20]. However, perception tasks vary significantly across different application scenarios, leading to distinct preferences for radar data. Most public radar datasets are tailored for autonomous driving [15,17,19,21,22,23,24,25], where the focus lies on road targets such as vehicles and pedestrians. These datasets typically employ 76–81 GHz frequency modulated continuous-wave (FMCW) radars with a maximum detection range of up to 200 m, aiming to provide near-field, high-resolution sensing for collision avoidance. They often utilize 3D range–azimuth–Doppler (RAD) maps [15] or even 4D radar point clouds [17,24,25] to support fine-grained localization and motion estimation. In contrast, radar surveillance tasks differ from autonomous driving in three key aspects: (i) Target and scene diversity. Surveillance systems monitor aerial, ground and maritime targets, far beyond road-centric objects. (ii) Detection range requirements. Rather than near-field sensing for collision avoidance, surveillance tasks demand long-range early warning, probably extending over several kilometers. (iii) Data modality. Instead of RAD maps or 4D point clouds, surveillance algorithms often rely on RD spectrograms for distance and velocity estimation, as azimuth and elevation can be directly derived from directional beam patterns. These fundamental differences make autonomous driving datasets unsuitable for advancing radar surveillance research, highlighting the need for dedicated surveillance datasets.
To this end, several radar surveillance datasets have been introduced. LSS-FMCWR-1.0 [11] and DTDAT [12] are designed for UAV monitoring. LSS-FMCWR-1.0 aims to build near-range UAV classification models, observing hovering UAVs within 10 m and using Ku-band and L-band FMCW radars to generate RD maps for classification. DTDAT collects RD data as well and focuses on aerial UAV tracking, with a broader dynamic observation range of up to 600 m using a Kurz-above (Ka) band PD radar. IPIX [13] and SDRDSP [14] concentrate on maritime target observation and both collect RD maps using X-band PD radars. IPIX focuses on the observation of floating styrofoam blocks and small boats, with a dynamic detection range of up to 900 m and a distance resolution of 30 m. SDRDSP extends its observation range to monitor floating buoys and boats. Compared with these datasets, our proposed KuRALS dataset spans ground, aerial and maritime scenarios, emphasizing moving objects rather than static ones. It provides surveillance coverage up to 6.4 km, enabling long-range early warning across diverse targets. These characteristics make KuRALS better aligned with real-world surveillance needs and fill an essential gap in current radar perception research.

2.2. Radar Semantic Segmentation

Compared to bounding box-based object detection, RSS delivers category predictions at the pixel level, thereby offering finer-grained location and more accurate velocity estimation of targets. RSS has seen rapid development recently, yet most existing models have been validated exclusively in autonomous driving scenarios. MVRSS [2] benchmarks classical image segmentation networks, including FCN [26], U-Net [27] and DeepLabv3+ [28], on radar data. RSS-Net [29] extends this line of work with an encoder–decoder architecture enhanced by atrous spatial pyramid pooling (ASPP) [30] to capture multi-scale target information. TMVA-Net [2] further leverages multi-view data to achieve more accurate segmentation in autonomous driving contexts. More recently, Transformer-based approaches, such as TC-Radar [31], TransRSS [5] and TransRadar [6], adopt attention mechanisms to perform global information fusion. Meanwhile, PKC [32] and AdaPKC [33], motivated by classical CFAR methods [18,34], introduce peak convolution operators customized for radar signal perception. Despite these advances, the unique requirements of radar surveillance scenarios remain underexplored. Unlike autonomous driving, surveillance applications demand lightweight and efficient RSS models that can operate over long ranges and handle diverse targets across complex environments. Model training in this context also presents unique challenges: prior RSS works [2,32,33] typically rely on weighted cross-entropy (WCE) loss or Dice loss [35], yet such loss functions fail to sufficiently address optimization under strong background interference. These limitations underscore the necessity of revisiting model design and loss functions to ensure effectiveness and robustness in practical surveillance settings.

3. KuRALS Dataset

In this section, we first introduce the principle of generating RD data from radar echoes. We then describe the radar settings used during target observation, followed by the procedure for transforming raw echoes into RD maps and the subsequent preprocessing pipeline. Next, we detail the annotation process applied to the RD data, including distance, velocity and category labels. Finally, we provide an overview of the dataset distribution.

3.1. Generation Principle of Range–Doppler Map

We begin by introducing the principles of how the CW and PD radars are used to acquire the RD map data and highlighting their key differences. It should be noted that the adopted CW radar is a widely used FMCW radar. To collect radar data, radar sensors emit electromagnetic waves through transmitting antennas and receive target echoes through receiving antennas. By comparing the differences between the transmitted wave and the received echo, we can calculate the radial distance (range) and Doppler velocity of the target relative to the radar sensor, thereby generating RD map data.
The segment of the signal that radars transmit is called a chirp, where the frequency of the chirp linearly increases from the carrier frequency, and the relationship between modulated frequency and time is as follows:
f = f_0 + (B/T_c) t, where t ∈ [0, T_c],
where f_0 is the carrier frequency, B is the signal bandwidth and T_c is the chirp duration. The phase of the transmitted signal is expressed as
ϕ_T(t) = 2π (f_0 t + (1/2) μ t²),
where μ = B/T_c denotes the modulation slope. Assuming there is a target at a distance R, moving towards the radar device with a velocity v, the round-trip time for the transmitted signal is given by τ = 2(R − vt)/c, where c is the speed of the wave and typically R ≫ vt. Then, the phase of the received signal is
ϕ_R(t) = 2π [f_0 (t − τ) + (1/2) μ (t − τ)²].
To calculate the target distance, the PD radar measures the time delay τ of the received signal, with the range resolution proportional to the pulse width T_c. To improve the resolution, the PD radar applies pulse compression to the received signal via matched filtering. In contrast, the FMCW radar calculates the target distance by performing the fast Fourier transform (FFT) on the fast-time dimension (sampling time within the chirp). Specifically, the received signal is first multiplied by the transmitted signal and then low-pass filtered, which yields a so-called intermediate frequency (IF) signal. The IF signal can be approximated as a single-frequency signal, with its phase given by
ϕ_IF(t) = 2π [(2μR/c − 2f_0 v/c) t + 2f_0 R/c].
The frequency of the IF signal consists of two components: the range-dependent frequency f_r = 2μR/c and the Doppler shift f_d = 2f_0 v/c, where typically f_r ≫ f_d. By performing an FFT on the IF signal, the target distance R can be estimated.
To measure target velocity, both FMCW and PD radars utilize multiple chirps. The IF signals from different chirps exhibit a phase variation related to the chirp index n and the Doppler shift f_d:
φ[n] = 2π f_d n T_r, where 0 ≤ n ≤ N − 1,
where T_r represents the time interval between consecutive transmitted chirps and N is the total number of chirps. By applying the FFT along the slow-time dimension (chirp index n), the radial velocity of the target can be calculated.
In summary, the RD data from FMCW radar is obtained through two cascaded FFTs, while that from PD radar is generated by a pulse compression operation along the fast-time dimension followed by an FFT along the slow-time dimension. Beyond the data processing pipeline, FMCW and PD radars also differ in their operating principles: (i) FMCW radar transmits and receives electromagnetic waves simultaneously, whereas PD radar alternates between transmission and reception. On the one hand, this design enables the PD radar to use fewer antennas and eliminate interference from the transmitted signal on the received signal, but on the other hand, it leads to a detection blind spot at close range for the PD radar. (ii) PD radar chirps are much narrower compared to those of FMCW radar, allowing PD radar to transmit higher-power electromagnetic waves. As a result, PD radar achieves stronger interference resistance and a higher signal-to-noise ratio in the received signals.
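As a concrete illustration, the two processing chains summarized above can be sketched in NumPy. This is a minimal sketch with illustrative array shapes; real pipelines additionally apply windowing, calibration and thresholding, and the function names are ours:

```python
import numpy as np

def fmcw_rd_map(if_samples):
    """if_samples: complex IF data, shape (num_chirps, samples_per_chirp).
    Range FFT along fast time, then Doppler FFT along slow time."""
    range_profiles = np.fft.fft(if_samples, axis=1)   # fast-time FFT -> range bins
    rd = np.fft.fft(range_profiles, axis=0)           # slow-time FFT -> Doppler bins
    return np.abs(np.fft.fftshift(rd, axes=0))        # center zero Doppler

def pd_rd_map(echoes, ref_pulse):
    """echoes: received pulses, shape (num_pulses, samples_per_pulse).
    Pulse compression via matched filtering, then slow-time FFT."""
    mf = np.conj(ref_pulse[::-1])                     # matched filter: time-reversed conjugate
    compressed = np.array([np.convolve(p, mf, mode="same") for p in echoes])
    rd = np.fft.fft(compressed, axis=0)
    return np.abs(np.fft.fftshift(rd, axes=0))
```

A stationary target whose IF tone sits in a single fast-time frequency bin produces a peak at that range bin and at the zero-Doppler bin after the shift, matching the derivation above.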

3.2. Radar Settings

We next illustrate the radar settings during raw radar echo acquisition. The parameter configurations of the employed CW radar and PD radar are summarized in Table 1 and Table 2, respectively. Notably, the PD radar utilizes pulse waves with different pulse widths to mitigate the near-field blind zone issue introduced in Section 3.1, which is further elaborated in the subsequent data processing section, i.e., Section 3.3. During target echo acquisition, the radar locations remain fixed. Both radars detect targets of interest in either a scanning search mode or a directional tracking mode. The CW radar employs electronic beam steering in the horizontal plane to observe targets and obtain azimuth information. Combined with the measured range, this yields two-dimensional target coordinates. In contrast, the PD radar adopts mechanical scanning in the horizontal direction and electronic scanning in the vertical direction. In this process, both the azimuth and elevation angles are directly derived from the directional beam orientations of the radar. Together with the range information, the PD radar provides three-dimensional target coordinates.

3.3. Range–Doppler Data Processing

Compared with acquiring azimuth and elevation angles, directly extracting distance, velocity and category information from raw echoes is substantially more challenging. To address this, we transform the radar echoes into RD spectrograms following the principles outlined in Section 3.1. This representation provides a more direct correspondence to the target’s distance and velocity information. To facilitate computer processing, we further take the magnitude of the complex RD map and store it in a 2D matrix format.
To further reduce the burden on detection algorithms in extracting target information from RD maps, we perform a series of preprocessing steps on them. Compared with CW radar, PD radar data requires more extensive preprocessing operations. Its overall preprocessing pipeline is illustrated in Figure 2, and the representative intermediate results are presented in Figure 3. Specifically, the PD radar collects both narrow and wide pulse wave data, with the narrow pulse wave used for near-field detection and the wide pulse wave for far-field detection, as illustrated in Table 3. To enable comprehensive target perception by the detection algorithms, we combine the two representations to form a unified single-frame PD radar data. Given that the signal strength of both the target and the interference differs between the two representations, we first normalize the narrow and wide pulse wave data before combining:
X̃ = X / max(X),
where X denotes the original radar map, max(·) returns the maximum value of the entire map, and X̃ is the normalized radar map. For the range-overlapping regions, we take the mean magnitude of the two signals to ensure smooth integration. To maintain consistent range dimensions across all data frames and facilitate model training, we zero-pad each PD radar data frame to a fixed length of 800, yielding 2D RD maps of size 800 × 64. Moreover, to reduce noise interference, we apply zero-frequency elimination to both CW and PD radar data. Specifically, for the CW radar maps, the Doppler bins corresponding to velocities of −0.198 m/s, 0 m/s, 0.198 m/s and 0.396 m/s are removed, reducing the map size from 2048 × 128 to 2048 × 124. For the PD radar maps, the Doppler bin corresponding to 0 m/s is set to zero, while all other bins remain unchanged. Note that zero-frequency elimination for PD radar data is performed prior to the normalization of the narrow and wide pulse wave data. For the preprocessed CW radar data, the mapping relationships between range R_CW and range bin r_CW ∈ [1, 2048], as well as velocity V_CW and Doppler bin d_CW ∈ [1, 124], are expressed as follows:
R_CW = 3.1128 × (r_CW − 1),
V_CW = −0.198 × (d_CW + 1), if 1 ≤ d_CW ≤ 62,
V_CW = 0.198 × (127 − d_CW), if 63 ≤ d_CW ≤ 124.
Similarly, for the PD radar map after preprocessing, the mappings between the physical parameters (R_PD and V_PD) and their corresponding bins (r_PD ∈ [1, 800] and d_PD ∈ [1, 64]) are defined as
R_PD = 7.5 × (r_PD − 1),  V_PD = 1.769 × (d_PD − 1) − 56.61.
Figure 4 and Figure 5 present the processed data frames of different target categories collected by the CW radar and the PD radar. The results clearly show that different targets exhibit distinct characteristics, and it can also be observed that the level of background interference in the PD radar data is noticeably lower compared to that in the CW radar data, which empirically validates the theoretical comparison between the two radars presented in Section 3.1.
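The PD preprocessing pipeline described in this section can be sketched as follows. The input shapes and the size of the overlapping region are hypothetical; the paper's actual bin layout is given in Table 3:

```python
import numpy as np

def preprocess_pd_frame(narrow, wide, overlap, out_len=800):
    """Fuse narrow-pulse (near-field) and wide-pulse (far-field) PD magnitude
    maps of shape (range_bins, 64). `overlap` is the number of shared range
    bins between the two representations (a hypothetical layout)."""
    narrow, wide = narrow.copy(), wide.copy()
    zero_bin = 32                                   # Doppler bin of 0 m/s (0-indexed)
    narrow[:, zero_bin] = 0.0                       # zero-frequency elimination,
    wide[:, zero_bin] = 0.0                         # done before normalization
    narrow /= narrow.max()                          # per-map max normalization, Eq. (6)
    wide /= wide.max()
    merged = np.concatenate([
        narrow[:-overlap],                          # near-field part
        0.5 * (narrow[-overlap:] + wide[:overlap]), # mean over overlapping region
        wide[overlap:],                             # far-field part
    ], axis=0)
    out = np.zeros((out_len, merged.shape[1]))      # zero-pad range axis to 800
    out[:merged.shape[0]] = merged
    return out
```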

3.4. Range–Doppler Data Annotation

To optimize and evaluate the ability of target detection algorithms to extract target information including distance, velocity and category from radar RD maps, we need to provide the true coordinates and category annotations of the targets in the maps, i.e., their true radial distance, Doppler velocity and category information.
For distance and velocity information, previous works [15] project object detection results from optical images onto the radar RD map to serve as target annotations. However, this indirect annotation approach is not applicable to our radar target monitoring scenarios. In these scenarios, the targets are at great distances, and the lighting and weather conditions are complex and highly variable, far exceeding the effective observation range of optical cameras. Moreover, there are inherent errors in both detecting targets from optical images and mapping their coordinates to the radar RD map. These challenges render the indirect annotation approach unreliable for providing accurate target annotations.
To address this, we propose a more reliable target annotation method that utilizes Global Positioning System (GPS) devices combined with algorithmic correction, as illustrated in Figure 6. Specifically, we mount a GPS device on the object to be observed, as shown in Figure 1a, which provides real-world coordinates of the target in real time, denoted as c_tgt = (X_tgt, Y_tgt, Z_tgt). We define the radar's real-world coordinates as c_0 = (X_0, Y_0, Z_0); then, the radial distance between the radar and the target is calculated by
R = ‖c_tgt − c_0‖_2 = √[(X_tgt − X_0)² + (Y_tgt − Y_0)² + (Z_tgt − Z_0)²].
With a given time interval Δt, we compute the target's Doppler velocity at time t as
V = [(c_tgt^t − c_tgt^(t−Δt)) · (c_0 − c_tgt^t)] / (‖c_0 − c_tgt^t‖_2 · Δt),
where c_tgt^t and c_tgt^(t−Δt) represent the real-world coordinates of the target at times t and t − Δt, respectively.
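The Doppler velocity above is simply the displacement over Δt projected onto the radar line of sight. A small sketch with synthetic coordinates (function name ours; positive velocity means the target approaches the radar):

```python
import numpy as np

def doppler_velocity(c_tgt_t, c_tgt_prev, c_0, dt):
    """Radial (Doppler) velocity of the target toward the radar,
    per the projection formula above."""
    los = c_0 - c_tgt_t                 # line-of-sight vector, target -> radar
    disp = c_tgt_t - c_tgt_prev         # displacement over dt
    return float(disp @ los / (np.linalg.norm(los) * dt))
```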
We then transform the target's radial distance and velocity into the radar's range–Doppler coordinate system. The range and Doppler coordinates are calculated as r = [R / r_res] and d = [V / d_res], respectively, where [·] denotes the rounding operation, and r_res and d_res represent the FFT range and velocity resolutions. However, transformation errors caused by the resolution limits of the GPS device and of the radar's range–Doppler coordinate system are likely to impact the optimization of target detection algorithms.
To mitigate this issue, we correct the calculated target position p = (r, d) within the RD map by retaining the target peak region as its corrected position, in accordance with classical radar peak detection principles [18,32]. Specifically, we first apply the CA-CFAR algorithm [18] to extract the potential peak regions for each target from the radar RD map. For this 2D CA-CFAR implementation, both the guard cell and reference cell sizes are set to 1, and the threshold scaling factor is 1.2. Next, we perform a non-maximum suppression (NMS) operation to obtain the target centers, which retains only local peak points within a window size of 3, denoted as Ω. We then search within the peak point set Ω for the point p_peak = (r_peak, d_peak) closest to the calculated target position p and consider it the reliable target position. Finally, we expand p_peak along the range and Doppler dimensions with a half-length of l to define the target's potential extent. Therefore, the target position on the radar RD map is annotated by the rectangular region {(r_peak − l, d_peak − l), …, (r_peak, d_peak), …, (r_peak + l, d_peak + l)}. For the KuRALS-CW data, l is set to 3 for both dimensions. For the KuRALS-PD data, l remains 3 along the range axis but is extended to 5 along the Doppler axis to better encompass the target's spectral spread. To assess the consistency between CFAR detections and GPS-projected positions, we compute the median, interquartile range (IQR) and 95th percentile of their differences in both range and Doppler bins. The median, IQR and 95th percentile in the range direction are 0, 0 and 3 bins, respectively; in the Doppler direction, all three statistics are 0. These results indicate that the GPS projection is highly accurate and that the CFAR correction only adjusts a small number of outliers.
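For concreteness, the peak-snapping correction can be sketched as follows. This is a simplified, unoptimized sketch: the helper name is ours, and the real pipeline uses the dataset-specific bin mappings and per-axis half-lengths before expanding the peak into its rectangular annotation:

```python
import numpy as np

def snap_to_peak(rd, r0, d0, alpha=1.2, win=3):
    """Snap a GPS-projected position (r0, d0) on the RD map `rd` to the
    nearest CA-CFAR peak (guard and reference cell sizes of 1, threshold
    scaling factor alpha), following the correction step described above."""
    H, W = rd.shape
    det = np.zeros((H, W), dtype=bool)
    for i in range(2, H - 2):                     # CA-CFAR over interior cells
        rows = rd[i-2:i+3, :]
        for j in range(2, W - 2):
            # Reference ring = 5x5 neighborhood minus the 3x3 guard region.
            ring = rows[:, j-2:j+3].sum() - rd[i-1:i+2, j-1:j+2].sum()
            det[i, j] = rd[i, j] > alpha * (ring / 16.0)  # 16 reference cells
    # NMS: keep only local maxima within a win x win window.
    half = win // 2
    peaks = [(i, j) for i, j in zip(*np.nonzero(det))
             if rd[i, j] == rd[i-half:i+half+1, j-half:j+half+1].max()]
    if not peaks:                                 # fall back to the GPS projection
        return r0, d0
    return min(peaks, key=lambda p: (p[0] - r0) ** 2 + (p[1] - d0) ** 2)
```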
For the category information of the target, since the target is a collaborative object, we can directly obtain its category. Specifically, utilizing the aforementioned GPS-based localization method, we projected the cooperative target’s range and Doppler values onto the RD map. Based on this projection, each target in the RD map is assigned a class label according to its known platform identity (e.g., UAV, pedestrian, car or boat). This strategy ensures reliable category labeling even when the target occupies only a few pixels in the RD map, avoiding the ambiguity and subjectivity of manual annotation. To facilitate algorithm optimization, we provide the annotations in both dictionary file format and 3D matrix format. The dimensions of the 3D matrix are C × H × W , and each channel corresponds to a binary map defining the locations of a specific target category, with C being the number of categories and H and W representing the height and width of the radar RD map, respectively.
To verify the effectiveness of the automatic labeling method, we randomly select representative frames from KuRALS-CW and KuRALS-PD datasets and visualize their RD maps together with the corresponding automatically generated pixel-wise masks, as shown in Figure 7. It can be clearly observed that the automatically generated masks accurately cover the target regions, demonstrating the reliability of the proposed labeling approach.

3.5. Dataset Distribution

We organize the processed data and annotations into the KuRALS dataset, which is divided into the KuRALS-CW and KuRALS-PD datasets based on the radar types used for data collection. The KuRALS dataset contains radar data from various scenarios, and the scenario statistics are summarized in Table 4. We retain more than 18k labeled frames, with each labeled instance including its category, 2D or 3D spatial coordinates and velocity information.
The category distributions for the KuRALS-CW and KuRALS-PD datasets are shown in Figure 8 and Figure 9, respectively. We also present the range and velocity distributions for different categories in Table 5. The KuRALS-CW dataset focuses on detecting UAV, pedestrian and car targets, while the KuRALS-PD dataset is designed to detect UAV, pedestrian, car and boat. Given that radar target detection tasks typically demand a false alarm rate (FAR) below 1 × 10⁻⁵, we split the datasets into training, validation and test sets at a ratio of 8:1:1 to ensure that the validation and test sets achieve a FAR resolution of 1 × 10⁻⁷. For clarity, we calculate the FAR resolution of the test sets of both the KuRALS-CW dataset and the KuRALS-PD dataset. The total number of pixels in the test set is denoted by N, where N_t represents the number of target pixels and N_b is the number of background pixels. Because targets are small relative to the detection range, they occupy only a sparse portion of the RD map, leading to N ≫ N_t and thus N_b = N − N_t ≈ N. We define the number of false-positive target pixels as N_fp, with resolution ΔN_fp = 1. Consequently, the resolution of the FAR in the test set of the KuRALS-CW dataset is given by
ΔN_fp / N_b ≈ 1/N = 1 / (2586 × 0.1 × 124 × 2048) ≈ 1.5 × 10⁻⁸.
Similarly, for the test set of the KuRALS-PD dataset, we can calculate its FAR resolution as follows:
ΔN_fp / N_b ≈ 1 / (15655 × 0.1 × 64 × 800) ≈ 1.2 × 10⁻⁸.
Both FAR resolutions are well below the required resolution threshold of 1 × 10⁻⁷, thereby ensuring sufficient precision in evaluating the FAR.
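The two resolution bounds are easy to verify numerically, using the frame counts from the derivation above:

```python
# FAR resolution = one false-alarm pixel over the background-pixel count,
# approximated as 1 / (total frames x test ratio x range bins x Doppler bins).
far_res_cw = 1 / (2586 * 0.1 * 124 * 2048)   # KuRALS-CW test set
far_res_pd = 1 / (15655 * 0.1 * 64 * 800)    # KuRALS-PD test set
```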

4. Method

4.1. KuRALS-Net

To accurately retrieve target information from radar RD maps in the KuRALS dataset, we need to design an automatic target detection framework. This framework predicts target category and estimates radial distance and velocity from detected target locations. By further integrating azimuth and elevation information derived from directional beam angles, it yields comprehensive target descriptions that can be directly applied for downstream decision-making tasks.
To achieve high-resolution target detection while ensuring efficient deployment on resource-constrained devices, we propose a lightweight RSS baseline model named KuRALS-Net, as illustrated in Figure 10. KuRALS-Net performs pixel-level localization and classification of targets on the radar RD map. It takes a 2D RD map as input and employs a Convolutional Automatic Encoder–Decoder (CAED) framework to process the data, outputting pixel-wise classification results with dimensions of C × H × W . Specifically, KuRALS-Net leverages an encoder composed of multiple 2D convolutional layers to extract high-level representations and applies spatial max-pooling layers to reduce the size of feature maps. The encoded features are then passed through a convolution block and an ASPP module to integrate multi-scale contextual information. In the decoder, the features are upsampled and refined through a series of convolutional layers, ultimately restoring the original input resolution and generating pixel-wise class predictions via final convolutional layers.
The KuRALS-Net versions for the KuRALS-CW and KuRALS-PD datasets have only 1.2 million and 1.1 million parameters, respectively. Their difference in parameter count arises from an additional downsampling and upsampling of the Doppler dimension in KuRALS-CW data to match the Doppler dimension of the KuRALS-PD data.
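As a rough illustration of the CAED layout described above, the following PyTorch sketch wires an encoder, a reduced ASPP-style multi-scale block and a decoder. All layer widths, depths and dilation rates are placeholders of our own, not the published KuRALS-Net configuration:

```python
import torch
import torch.nn as nn

class TinyCAED(nn.Module):
    """Illustrative convolutional encoder-decoder in the spirit of KuRALS-Net.
    Channel widths and depths are placeholders; the ASPP module is reduced to
    three parallel dilated-convolution branches fused by a 1x1 convolution."""
    def __init__(self, in_ch: int = 1, n_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # halve H and W
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # ASPP-like multi-scale context aggregation
        self.aspp = nn.ModuleList([
            nn.Conv2d(32, 32, 3, padding=d, dilation=d) for d in (1, 2, 4)
        ])
        self.fuse = nn.Conv2d(32 * 3, 32, 1)
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(16, n_classes, 3, padding=1),  # pixel-wise class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)
        z = self.fuse(torch.cat([b(z) for b in self.aspp], dim=1))
        return self.decoder(z)  # logits restored to the input resolution

logits = TinyCAED()(torch.randn(1, 1, 64, 800))  # a KuRALS-PD-sized RD map
print(logits.shape)  # torch.Size([1, 4, 64, 800])
```

The dilated branches keep the spatial size while enlarging the receptive field, which is the role ASPP plays in the full model.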

4.2. Noisy Background Suppression Loss

As analyzed in Section 3.1 and Section 3.3, CW radar data suffers from much stronger background interference compared with PD radar data. To better understand this phenomenon, we conduct a quantitative analysis of background interference across the two datasets. Specifically, we evaluate interference from two complementary perspectives.
(i) Input data statistics. Let $\{x_b^{(i)}\}_{i=1}^{N_b}$ and $\{x_f^{(j)}\}_{j=1}^{N_f}$ denote the sets of background and foreground pixels, respectively. Their average magnitudes are
$$\bar{x}_b = \frac{1}{N_b}\sum_{i=1}^{N_b} x_b^{(i)}, \qquad \bar{x}_f = \frac{1}{N_f}\sum_{j=1}^{N_f} x_f^{(j)}.$$
We define the signal-to-noise ratio (SNR) as
$$\mathrm{SNR} = \frac{\bar{x}_f}{\bar{x}_b}.$$
When the magnitude of a background sample exceeds the threshold $t = (\bar{x}_b + \bar{x}_f)/2$, we regard it as a noisy sample. Then we calculate the ratio of noisy background samples to foreground samples as
$$r_{\mathrm{number}} = \frac{\sum_{i=1}^{N_b} \mathbb{I}\left(x_b^{(i)} > t\right)}{N_f}.$$
Finally, we analyze the SNR and $r_{\mathrm{number}}$ of the two datasets to compare their background interference intensity. As shown in Table 6, the SNR of KuRALS-CW is significantly lower than that of KuRALS-PD, and its $r_{\mathrm{number}}$ is much higher. This indicates that the background interference intensity in the KuRALS-CW dataset is significantly higher than that in the KuRALS-PD dataset.
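As a concrete sketch of metric (i), the NumPy snippet below computes the SNR, the threshold $t$ and $r_{\mathrm{number}}$ on a toy RD magnitude map; the synthetic data and the function name are our own illustration, not the KuRALS pipeline:

```python
import numpy as np

def interference_stats(rd_map: np.ndarray, fg_mask: np.ndarray):
    """SNR and r_number as defined in the text, from one RD magnitude map
    and its boolean foreground mask (names are our own)."""
    fg = rd_map[fg_mask]
    bg = rd_map[~fg_mask]
    x_bar_f, x_bar_b = fg.mean(), bg.mean()
    snr = x_bar_f / x_bar_b
    t = (x_bar_b + x_bar_f) / 2            # noisy-sample threshold
    r_number = (bg > t).sum() / fg.size    # noisy background per foreground pixel
    return snr, r_number

# Toy RD map: Rayleigh background, a small strong target, some strong clutter.
rng = np.random.default_rng(0)
rd = rng.rayleigh(1.0, size=(64, 80))
mask = np.zeros_like(rd, dtype=bool)
mask[10:12, 20:22] = True
rd[mask] = 20.0                            # strong, reliable target pixels
rd[0, :5] = 30.0                           # clutter exceeding the threshold

snr, r_num = interference_stats(rd, mask)
print(f"SNR={snr:.1f}, r_number={r_num:.2f}")
```

A low SNR together with a large $r_{\mathrm{number}}$, as in this toy clutter case, mirrors the KuRALS-CW situation reported in Table 6.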
(ii) Training loss distribution. We train the KuRALS-Net model with a standard cross-entropy (CE) loss function and then calculate the ratio of accumulated background loss to accumulated foreground loss after training convergence. This ratio indicates the relative optimization difficulty between background and foreground classes, and it is denoted as
$$r_{\mathrm{loss}} = \frac{\sum_{i=1}^{N_b} \mathcal{L}\left(\psi(x_b^{(i)}; \boldsymbol{W})\right)}{\sum_{j=1}^{N_f} \mathcal{L}\left(\psi(x_f^{(j)}; \boldsymbol{W})\right)},$$
where $\psi(\cdot\,; \boldsymbol{W})$ denotes the KuRALS model parameterized by weights $\boldsymbol{W}$, and $\mathcal{L}(\cdot)$ represents the CE loss function with a softmax operation. As shown in Table 6, the background loss in the KuRALS-CW dataset is significantly higher than the foreground loss, while in the KuRALS-PD dataset, the background loss constitutes a much smaller proportion. This further confirms that CW radar data suffers from more severe background interference than PD radar. Since the background interference often resembles targets of interest, detection models tend to misclassify noisy background samples as foreground, which in turn degrades their ability to recognize the desired targets. Unfortunately, conventional loss functions, such as CE loss, do not incorporate mechanisms to explicitly address this issue.
Motivated by these observations, we propose the NBS loss. Empirically, most noisy background samples originate from strong clutter or non-focused foreground objects, while all annotated foreground targets have been manually verified and are highly reliable. Therefore, NBS loss suppresses the contribution of overly difficult or unreliable background samples, allowing the model to prioritize the optimization of foreground targets. This design reduces the impact of noisy background samples on gradient updates and improves overall detection capability.
Formally, we utilize the detection model’s prediction probability for the background class as an indicator of the interference degree, where background samples with strong interference generally have low prediction probabilities. To mitigate their impact, we introduce a probability threshold τ to filter out such samples and suppress their corresponding training loss. The proposed NBS loss is defined as
$$\mathrm{NBS}(\boldsymbol{p}, \boldsymbol{y}) = \begin{cases} -\sum_{i=1}^{C} y_i \cdot f(p_i), & \text{if } y_1 = 1 \text{ and } p_1 < \tau, \\ -\sum_{i=1}^{C} y_i \cdot \log p_i, & \text{otherwise}, \end{cases}$$
where $\boldsymbol{p}$ is the prediction probability vector; $f(\cdot)$ acts as a recalibration function in place of $\log(\cdot)$; $\boldsymbol{y} = [y_1, y_2, \ldots, y_i, \ldots, y_C]^{\mathrm{T}}$ is the one-hot label; $C$ is the number of classes; and the index of the background class is 1. To allow different suppression strengths, we define two variants of the recalibration function $f(\cdot)$:
$$f_1(p_i) = \frac{p_i}{\tau} + \log(\tau) - 1, \qquad f_2(p_i) = \frac{p_i^2}{2\tau^2} + \log(\tau) - \frac{1}{2},$$
corresponding to the first-order ($\mathrm{NBS}_1$) and second-order ($\mathrm{NBS}_2$) suppression schemes, respectively.
An important property of NBS loss is that it preserves both value continuity and gradient continuity at the boundary $y_1 = 1$, $p_1 = \tau$. Specifically, for the loss value:
$$-\sum_{i=1}^{C} y_i \cdot \log p_i \,\Big|_{y_1=1,\, p_1=\tau} = -\log\tau, \qquad -\sum_{i=1}^{C} y_i \cdot f_1(p_i) \,\Big|_{y_1=1,\, p_1=\tau} = -\log\tau, \qquad -\sum_{i=1}^{C} y_i \cdot f_2(p_i) \,\Big|_{y_1=1,\, p_1=\tau} = -\log\tau.$$
For the gradient with respect to $p_1$:
$$\frac{\partial}{\partial p_1}\left(-\sum_{i=1}^{C} y_i \cdot \log p_i\right)\Big|_{y_1=1,\, p_1=\tau} = -\frac{1}{\tau}, \qquad \frac{\partial}{\partial p_1}\left(-\sum_{i=1}^{C} y_i \cdot f_1(p_i)\right)\Big|_{y_1=1,\, p_1=\tau} = -\frac{1}{\tau}, \qquad \frac{\partial}{\partial p_1}\left(-\sum_{i=1}^{C} y_i \cdot f_2(p_i)\right)\Big|_{y_1=1,\, p_1=\tau} = -\frac{1}{\tau}.$$
Therefore, NBS loss ensures smooth transitions between suppressed and unsuppressed states, avoiding gradient instability during training.
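The piecewise definition and its continuity can be exercised numerically. Below is a minimal NumPy sketch of NBS loss for a single pixel, assuming a 0-based background index and an illustrative threshold $\tau = 0.3$ (the paper tunes $\tau$ per dataset):

```python
import numpy as np

TAU = 0.3  # illustrative threshold; tuned per dataset in the paper

def f1(p):
    """First-order recalibration (NBS1)."""
    return p / TAU + np.log(TAU) - 1.0

def f2(p):
    """Second-order recalibration (NBS2)."""
    return p**2 / (2.0 * TAU**2) + np.log(TAU) - 0.5

def nbs_loss(p, y, f=f1):
    """NBS loss for one pixel: p is the softmax probability vector, y the
    one-hot label; index 0 plays the role of the background class."""
    if y[0] == 1 and p[0] < TAU:           # suppressed hard-background branch
        return -np.sum(y * f(p))
    return -np.sum(y * np.log(p))          # plain cross-entropy branch

y_bg = np.array([1.0, 0.0, 0.0])           # background pixel, three classes

# Value continuity: both branches give -log(tau) at the boundary p_1 = tau.
p_edge = np.array([TAU, 0.35, 0.35])
assert np.isclose(-np.sum(y_bg * f1(p_edge)), -np.log(TAU))
assert np.isclose(-np.sum(y_bg * f2(p_edge)), -np.log(TAU))

# Suppression: a hard background pixel (p_1 = 0.1 < tau) incurs less loss than CE.
p_hard = np.array([0.1, 0.45, 0.45])
print(nbs_loss(p_hard, y_bg, f1), -np.log(0.1))  # suppressed loss < CE loss
```

The same check with a finite difference confirms the gradient continuity of $f_1$ at $p_1 = \tau$.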
In Figure 11, we illustrate the visual comparison between NBS loss and CE loss in terms of the loss curves for background samples. Compared to CE loss, NBS loss exerts a suppression effect on the loss of noisy (low-probability) background samples, thereby facilitating model optimization and enhancing target detection performance.

5. Experiments

To explore effective target segmentation frameworks for the KuRALS dataset, we evaluate a range of representative RSS models, including our proposed KuRALS-Net. In addition, we assess advanced radar perception modules when integrated with KuRALS-Net. The performance of these algorithms is examined from multiple perspectives, including segmentation accuracy, parameter count, computational complexity, memory consumption and inference time. Finally, we compare commonly used RSS loss functions with our proposed NBS loss.

5.1. Experimental Setup

Implementation details. All models are trained using the Adam optimizer [36] with an initial learning rate of $1 \times 10^{-4}$, which decays to a minimum of $1 \times 10^{-7}$ following a cosine annealing schedule. All experiments are conducted on an NVIDIA RTX 3090 GPU with 24 GB memory and an Intel Xeon E5-2620 v4 CPU, using PyTorch 1.10.1 and CUDA 11.1. We train all models for 300 epochs with a batch size of 6. The inference batch size is set to 1 to emulate a real-time deployment scenario, and mixed precision is not employed. Unless otherwise specified, the CE loss serves as the default optimization objective. To mitigate overfitting, we employ data augmentation techniques including random horizontal and vertical flipping of RD maps.
Evaluation metrics. We adopt Intersection over Union (IoU) and Dice score as the primary evaluation metrics for target segmentation performance. The IoU for a given class is defined as
$$\mathrm{IoU} = \frac{TP}{TP + FP + FN},$$
where T P , F P and F N denote the number of true-positive, false-positive and false-negative pixels, respectively. Similarly, the Dice score is defined as
$$\mathrm{Dice} = \frac{2\,TP}{2\,TP + FP + FN}.$$
For overall evaluation, we report mean IoU (mIoU) and mean Dice score (mDice), computed by averaging the respective metrics across all categories in the test set. In addition to segmentation accuracy, we analyze the parameter count and the computational complexity in terms of multiply–accumulate operations (MACs), as well as the inference speed measured by frames per second (FPS). These complementary metrics provide a holistic view of both the accuracy and the efficiency of the evaluated models.
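For reference, both metrics follow directly from the pixel counts; the trivial helper below (our own sketch) also illustrates the identity $\mathrm{Dice} = 2\,\mathrm{IoU}/(1+\mathrm{IoU})$:

```python
def iou_dice(tp: int, fp: int, fn: int) -> tuple:
    """Per-class IoU and Dice score from pixel-level counts."""
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return iou, dice

# Worked example with made-up counts for one class:
iou, dice = iou_dice(tp=80, fp=10, fn=10)
print(iou, dice)  # 0.8 and ~0.889; note Dice = 2*IoU / (1 + IoU)
```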

5.2. Comparison of Different Models

We first benchmark KuRALS-Net against a wide range of semantic segmentation models on the KuRALS dataset, including CNN-based architectures (FCN8s [26], U-Net [27], DeepLabv3+ [28], HRNet [37]), Transformer-based models (Swin Transformer [38], SegFormer [39]) and radar-specific semantic segmentation models (RSS-Net [29]). Quantitative results are summarized in Table 7 and Table 8. On the KuRALS-PD dataset, KuRALS-Net establishes a new state of the art (SoTA), surpassing HRNet (the strongest baseline) by 5.8% in mIoU and 5.9% in mDice while maintaining a significantly smaller model size. On the KuRALS-CW dataset, RSS-Net achieves the best accuracy, but KuRALS-Net delivers highly competitive results using only 11.9% of RSS-Net's parameters, highlighting its efficiency advantage. Visual comparisons in Figure 12 further confirm these findings: KuRALS-Net produces more precise radar target localization and class discrimination, particularly for challenging small or distant objects in the KuRALS-PD dataset. In addition, we observe that most models achieve low IoU and Dice scores for the pedestrian and car classes on the KuRALS-CW dataset. Two factors explain this. First, pedestrians have weak radar reflections and a low radar cross-section, making them difficult to distinguish from background clutter or other objects. This problem becomes more severe in CW mode because of its lower instantaneous transmit power and weaker detection capability compared with PD radar. Second, for vehicles, the sample size in KuRALS-CW is relatively small, and the strong background noise further leads to confusion with other classes. After introducing the proposed NBS loss, presented in Section 5.4, the detection accuracy for vehicles improves markedly, confirming the effectiveness of the loss design.
To evaluate the potential influence of randomness on model performance, we conduct three independent experiments for KuRALS-Net with different random seeds and report the mean and standard deviation of the results in Table 9. The results show that KuRALS-Net exhibits minor performance fluctuations across different runs, where the variance of both mIoU and mDice is no more than 0.2, indicating that the model achieves stable segmentation performance.
To further assess efficiency, Table 10 reports the parameter count, computational complexity (MACs), memory consumption and inference speed across both datasets. KuRALS-Net consistently achieves a favorable trade-off, offering competitive runtime efficiency with a fraction of the parameter count compared to heavy-weight CNN or Transformer-based architectures. In particular, KuRALS-Net achieves 181 FPS on KuRALS-PD and 53 FPS on KuRALS-CW. The former reaches a speed close to the 188 FPS requirement for strict real-time inference, while the latter clearly exceeds the 21 FPS threshold, demonstrating the model’s strong efficiency across both datasets.
While the above analysis focuses on pure model inference speed, a complete evaluation of deployment efficiency should also account for the time spent on data generation and preprocessing. Therefore, we further report the latency of these stages in Table 11. Compared with KuRALS-PD, the KuRALS-CW data have larger dimensions, which require additional computation during FFT-based radar frequency map generation. Conversely, KuRALS-PD involves an additional fusion operation during preprocessing, leading to slightly higher latency in this stage. Overall, the total time cost of data generation and preprocessing remains relatively small. Moreover, these operations can be significantly reduced through parallelization or hardware acceleration, indicating that the model inference itself continues to meet real-time deployment requirements.
To evaluate the effectiveness of KuRALS-Net for surveillance tasks, we measure its probability of detection (PD) and FAR, as summarized in Table 12. We report both class-wise and overall PD and FAR values for foreground targets. On the KuRALS-PD dataset, KuRALS-Net achieves a PD of 54.8% at a FAR of $8.0 \times 10^{-6}$, while on the KuRALS-CW dataset, it achieves a PD of 76.6% at a FAR of $1.0 \times 10^{-5}$. For comparison, we also present the detection performance of the classical radar detection algorithm 2D CA-CFAR, whose receiver operating characteristic (ROC) curves on the KuRALS-PD and KuRALS-CW datasets are shown in Figure 13. When the FAR is below $1 \times 10^{-5}$, CA-CFAR exhibits a significantly lower detection probability than KuRALS-Net, demonstrating the superior potential of our proposed deep learning-based approach for surveillance tasks.
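At pixel level, PD and FAR follow directly from the confusion counts. The snippet below is a sketch with made-up numbers, not the paper's measurements:

```python
def pd_far(tp: int, fn: int, fp: int, n_background: int) -> tuple:
    """Pixel-level detection metrics: PD = TP / (TP + FN) over target pixels,
    FAR = FP / N_b over background pixels."""
    return tp / (tp + fn), fp / n_background

# Illustrative counts (our own, not from the paper): 766 of 1000 target
# pixels detected, 800 false alarms over 8e7 background pixels.
pd, far = pd_far(tp=766, fn=234, fp=800, n_background=80_000_000)
print(f"PD={pd:.1%}, FAR={far:.1e}")  # PD=76.6%, FAR=1.0e-05
```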

5.3. Investigation of Module Optimization

To further evaluate the contribution of key components in KuRALS-Net, we conduct a set of module-level investigations, and the results are summarized in Table 13 and Table 14. The analysis is divided into two parts.
First, we analyze the role of the ASPP module, which aggregates multi-scale contextual information and is central to KuRALS-Net’s segmentation capability. Two variants are considered: (i) KuRALS-Net without ASPP and (ii) KuRALS-Net with ASPP replaced by Adaptive-Directional Attention (ADA) [6]. The results show that removing ASPP from KuRALS-Net leads to a substantial decrease in segmentation performance, highlighting its indispensability. Replacing ASPP with ADA yields lower accuracy and slower inference, suggesting that although ADA serves a similar function, it is less effective than ASPP in this setting.
Second, we explore radar-specific convolutional designs in the encoder, as the encoder is responsible for extracting discriminative high-level features. Specifically, the last two convolutional blocks in the encoder are replaced with PeakConv (PKC) [32] or AdaPKC ($\mathrm{AdaPKC}^{\theta}$ and $\mathrm{AdaPKC}^{\xi}$) [33]. As shown in the results, both PKC and AdaPKC consistently enhance segmentation performance. Notably, KuRALS-Net equipped with $\mathrm{AdaPKC}^{\xi}$ achieves improvements of 9.4% mIoU and 9.0% mDice over HRNet on the KuRALS-PD dataset. On KuRALS-CW, KuRALS-Net with $\mathrm{AdaPKC}^{\theta}$ significantly outperforms RSS-Net on both metrics. Nevertheless, these radar-specific convolutions incur additional computational overhead and lead to reduced inference speed due to the absence of dedicated acceleration strategies.
In summary, the current configuration of KuRALS-Net offers a well-balanced trade-off between segmentation accuracy and efficiency, while the integration of radar-specific modules demonstrates clear potential for further performance enhancement.

5.4. Analysis of Loss Functions

Using KuRALS-Net as the base model, we evaluate the effectiveness of both the proposed NBS loss and commonly adopted loss functions on the KuRALS dataset, including pixel-level losses (CE loss, WCE loss and Focal loss [40]) and region-level losses (Dice loss and generalized Dice (GDice) loss [41]). The results are summarized in Table 15 and Table 16.
Region-level losses are consistent with evaluation metrics such as mDice. However, they often introduce training instability, which hampers model optimization and leads to inferior target segmentation accuracy.
For pixel-level losses, WCE loss aims to address the severe class imbalance in radar data [2], as illustrated in the first row of Table 17, where background pixels overwhelmingly dominate foreground pixels. Nonetheless, its performance markedly declines compared with CE loss. The underlying reason is that WCE loss reweights CE loss using the inverse frequency of each class, equalizing class contributions by averaging their per-class loss. In radar target detection, however, most background pixels correspond to clean or weak-signal regions that are trivially classified, yielding negligible loss values. This results in a much lower average background loss compared to the foreground class, as shown in the second row of Table 17. Consequently, WCE loss indiscriminately suppresses the background contribution, leading to an overemphasis on the foreground class and biased optimization. As depicted in Figure 14, KuRALS-Net trained with WCE loss exhibits severe foreground overfitting, leading to frequent misclassification of background pixels and ultimately degrading segmentation performance. Beyond the WCE loss, the Focal loss adaptively adjusts pixel-wise weighting according to the prediction difficulty, allowing the model to better capture both challenging foreground and background samples. This mechanism mitigates the overfitting issue of WCE loss, which tends to emphasize foreground classes while neglecting background pixels. As shown in Table 15 and Table 16, Focal loss achieves better segmentation performance than WCE loss. However, its adaptive reweighting may also lead to excessive emphasis on hard background samples. In the KuRALS dataset, many of these difficult background samples correspond to strong clutter, and such overemphasis can hinder the model's optimization process.
In contrast, the proposed NBS loss explicitly introduces a suppression mechanism for hard background interference and demonstrates consistent performance improvements. On the KuRALS-CW dataset, it achieves significant gains over other loss functions, while on KuRALS-PD it yields moderate yet stable benefits. The key mechanism lies in its ability to attenuate the influence of noisy background samples, thereby preventing the model from overfocusing on unreliable regions and enabling more effective foreground target recognition.
We further analyze the dataset-specific behavior of NBS loss. As shown in Figure 15 and Figure 16, the effect of the threshold parameter $\tau$ differs between KuRALS-CW and KuRALS-PD. KuRALS-CW requires a larger $\tau$ and benefits more from stronger suppression ($\mathrm{NBS}_2$), reflecting the higher level of background interference and the greater need to suppress noisy samples. By contrast, KuRALS-PD, with relatively clean backgrounds, favors a smaller $\tau$ and exhibits improvements primarily under weak suppression ($\mathrm{NBS}_1$), leading to a narrower performance margin from NBS loss. This observation reinforces our analysis in Section 4.2 and further highlights that the benefit of NBS loss correlates with the degree of background interference in radar perception tasks.

6. Conclusions

In this paper, we introduce real-measured multi-scene and multi-target Ku-band surveillance RD datasets, KuRALS-CW and KuRALS-PD. The datasets cover a wide range of application scenarios and are accompanied by high-quality annotations. They address the limitations of existing surveillance radar datasets, including the lack of diversity in scenes and target types, the predominance of static targets that do not reflect real-world surveillance needs and the limited dynamic range of detection distances. In addition, to explore effective target detection models for this radar surveillance data, we propose a lightweight RSS baseline framework named KuRALS-Net, which achieves a better balance of segmentation accuracy and parameter efficiency on the datasets than other advanced models. With the integration of radar-specific convolution modules, KuRALS-Net achieves SoTA segmentation performance across all these datasets. Additionally, we present a feasible method for analyzing the level of background interference across different datasets, and we propose the NBS loss, specifically designed for radar data with strong background interference. Compared to commonly used RSS loss functions, NBS loss enhances the model’s anti-interference capabilities, achieving superior segmentation performance on both datasets. Overall, the KuRALS dataset, the KuRALS-Net framework, the NBS loss function and the comprehensive evaluation of models, modules and loss functions collectively establish a new RD-level benchmark in the field of radar target surveillance.
For future work, we plan to first expand the KuRALS dataset by collecting more radar surveillance sequences across diverse environments and sensing conditions. This will allow us to adopt a full sequence-level split for training, validation and testing, thereby providing a more rigorous evaluation of model generalization and minimizing any potential near-duplicate leakage. In addition, the expanded data collection will improve the coverage of maritime targets, addressing the current imbalance in sea–surface scenarios.
Beyond data expansion, we aim to investigate multi-modal fusion with complementary sensors such as cameras and LiDAR, aiming to exploit cross-domain information for more robust perception in complex and cluttered environments. Furthermore, we intend to explore advanced training paradigms such as self-supervised pretraining to better leverage large-scale unlabeled radar data. Additionally, we will investigate cross-domain transfer strategies across different radar types and observation scenarios to further enhance the model’s adaptability and generalization.
Finally, we will focus on the real-time deployment of radar-specific modules by developing efficient acceleration strategies to further enhance the practicality of KuRALS-Net.

Author Contributions

Conceptualization, T.L. and L.Z.; methodology, T.L., L.Z. and Q.L.; software, Y.Z. and X.Z.; validation, T.L., Y.Z. and X.Z.; formal analysis, L.Z. and Z.L.; investigation, T.L.; resources, Q.L.; data curation, Z.L.; writing—original draft preparation, T.L. and X.Z.; writing—review and editing, L.Z., Y.Z., Z.L. and Q.L.; visualization, T.L.; supervision, Q.L.; project administration, L.Z. and Q.L.; funding acquisition, L.Z. and Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Young Science Foundation of National Natural Science Foundation of China (No. 62206258) and in part by the National Natural Science Foundation of China (No. U23B2030).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets and code are publicly available at https://github.com/lihua199710/KuRALS (accessed on 28 December 2025). The post-FFT complex RD data and the derived KuRALS dataset are publicly available. Due to proprietary constraints, the raw radar echoes, the raw in-phase and quadrature (IQ) data in the fast-time × slow-time domain and most of the processing parameters used to transform the raw radar echoes or the IQ data into complex RD representations cannot be publicly released. All UAV, car and boat operations, as well as radar usage, complied with relevant regulations. Geolocations have been anonymized where necessary, and the data release has been approved by CASIC. Users of this dataset should cite this publication and acknowledge that the dataset was collected by the Intelligent Science and Technology Academy of CASIC.

Acknowledgments

This work was conducted during Teng Li’s internship at the Intelligent Science and Technology Academy of CASIC, with project leadership provided by Liwen Zhang.

Conflicts of Interest

Authors Youcheng Zhang and Liwen Zhang were employed by the company Intelligent Science and Technology Academy of CASIC. Author Xinyan Zhang was employed by the company Alibaba (China) Network Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Wang, Y.; Jiang, Z.; Li, Y.; Hwang, J.-N.; Xing, G.; Liu, H. RODNet: A Real-Time Radar Object Detection Network Cross-Supervised by Camera-Radar Fused Object 3D Localization. IEEE J. Sel. Top. Signal Process. 2021, 15, 954–967. [Google Scholar] [CrossRef]
  2. Ouaknine, A.; Newson, A.; Perez, P.; Tupin, F.; Rebut, J. Multi-View Radar Semantic Segmentation. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 15651–15660. [Google Scholar] [CrossRef]
  3. Fang, S.; Zhu, H.; Bisla, D.; Choromanska, A.; Ravindran, S.; Ren, D.; Wu, R. ERASE-Net: Efficient Segmentation Networks for Automotive Radar Signals. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 9331–9337. [Google Scholar] [CrossRef]
  4. Yan, H.; Li, Y.; Wang, L.; Chen, S. Learning Omni-Dimensional Spatio-Temporal Dependencies for Millimeter-Wave Radar Perception. Remote Sens. 2024, 16, 4256. [Google Scholar] [CrossRef]
  5. Zou, H.; Xie, Z.; Ou, J.; Gao, Y. TransRSS: Transformer-Based Radar Semantic Segmentation. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 6965–6972. [Google Scholar] [CrossRef]
  6. Dalbah, Y.; Lahoud, J.; Cholakkal, H. TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation. In Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2024; pp. 352–361. [Google Scholar] [CrossRef]
  7. Zhang, L.W.; Pan, J.; Zhang, Y.C.; Chen, Y.P.; Ma, Z.; Huang, X.H.; Sun, K.W. Capturing temporal-dependence in radar echo for spatial-temporal sparse target detection. J. Radars 2023, 12, 356–375. [Google Scholar] [CrossRef]
  8. Qu, Q.; Liu, W.; Wang, J.; Li, B.; Liu, N.; Wang, Y.-L. Enhanced CNN-Based Small Target Detection in Sea Clutter With Controllable False Alarm. IEEE Sens. J. 2023, 23, 10193–10205. [Google Scholar] [CrossRef]
  9. Jiang, W.; Liu, Z.; Wang, Y.; Lin, Y.; Li, Y.; Bi, F. Realizing Small UAV Targets Recognition via Multi-Dimensional Feature Fusion of High-Resolution Radar. Remote Sens. 2024, 16, 2710. [Google Scholar] [CrossRef]
  10. Wu, K.; Zhang, Z.; Chen, Z.; Liu, G. Object-Enhanced YOLO Networks for Synthetic Aperture Radar Ship Detection. Remote Sens. 2024, 16, 1001. [Google Scholar] [CrossRef]
  11. Chen, X.; Yuan, W.; Du, X.; Yu, G.; He, X.; Guan, J.; Wang, X. Multiband FMCW radar LSS-target detection dataset (LSS-FMCWR-1.0) and high-resolution micromotion feature extraction method. J. Radars 2024, 13, 539–553. [Google Scholar] [CrossRef]
  12. Song, Z.; Hui, B.; Fan, H.; Zhou, J.; Zhu, Y.; Da, K.; Zhang, X.; Su, H.; Jin, W.; Zhang, Y.; et al. A Dataset for Detection and Tracking of Dim Aircraft Targets through Radar Echo Sequences. China Sci. Data 2020, 5, 272–285. [Google Scholar] [CrossRef]
  13. McMaster University, IPIX Radar Database. Available online: http://soma.mcmaster.ca/ipix.php (accessed on 12 September 2025).
  14. Liu, N.; Dong, Y.; Wang, G.; Ding, H.; Huang, Y.; Jian, G.; Chen, X.; He, Y. Sea-detecting X-band Radar and Data Acquisition Program. J. Radars 2019, 8, 656–667. [Google Scholar] [CrossRef]
  15. Ouaknine, A.; Newson, A.; Rebut, J.; Tupin, F.; Pérez, P. CARRADA Dataset: Camera and Automotive Radar with Range-Angle-Doppler Annotations. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 5068–5075. [Google Scholar] [CrossRef]
  16. Madani, S.; Guan, J.; Ahmed, W.; Gupta, S.; Hassanieh, H. Radatron: Accurate Detection Using Multi-Resolution Cascaded MIMO Radar. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 160–178. [Google Scholar] [CrossRef]
  17. Paek, D.H.; Kong, S.H.; Wijaya, K.T. K-radar: 4D radar object detection for autonomous driving in various weather conditions. In Proceedings of the Advances in Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022; pp. 3819–3829. [Google Scholar]
  18. Richards, M.A. Fundamentals of Radar Signal Processing, 2nd ed.; McGraw-Hill: New York, NY, USA, 2005. [Google Scholar]
  19. Wang, Y.; Jiang, Z.; Gao, X.; Hwang, J.-N.; Xing, G.; Liu, H. RODNet: Radar Object Detection Using Cross-Modal Supervision. In Proceedings of the 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), Virtual, 5–9 January 2021; pp. 504–513. [Google Scholar] [CrossRef]
  20. Wang, Z.; Hu, G.; Zhao, S.; Wang, R.; Kang, H.; Luo, F. Local Pyramid Vision Transformer: Millimeter-Wave Radar Gesture Recognition Based on Transformer with Integrated Local and Global Awareness. Remote Sens. 2024, 16, 4602. [Google Scholar] [CrossRef]
  21. Sheeny, M.; De Pellegrin, E.; Mukherjee, S.; Ahrabian, A.; Wang, S.; Wallace, A. RADIATE: A Radar Dataset for Automotive Perception in Bad Weather. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 1–7. [Google Scholar] [CrossRef]
  22. Barnes, D.; Gadd, M.; Murcutt, P.; Newman, P.; Posner, I. The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6433–6438. [Google Scholar] [CrossRef]
  23. Rebut, J.; Ouaknine, A.; Malik, W.; Pérez, P. Raw High-Definition Radar for Multi-Task Learning. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 17000–17009. [Google Scholar] [CrossRef]
  24. Palffy, A.; Pool, E.; Baratam, S.; Kooij, J.F.P.; Gavrila, D.M. Multi-Class Road User Detection With 3+1D Radar in the View-of-Delft Dataset. IEEE Robot. Autom. Lett. 2022, 7, 4961–4968. [Google Scholar] [CrossRef]
  25. Zhang, X.; Wang, L.; Chen, J.; Fang, C.; Yang, G.; Wang, Y.; Yang, L.; Song, Z.; Liu, L.; Zhang, X.; et al. Dual Radar: A Multi-modal Dataset with Dual 4D Radar for Autonomous Driving. Sci. Data 2025, 12, 439. [Google Scholar] [CrossRef] [PubMed]
  26. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar] [CrossRef]
  27. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar] [CrossRef]
  28. Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar] [CrossRef]
  29. Kaul, P.; de Martini, D.; Gadd, M.; Newman, P. RSS-Net: Weakly-Supervised Multi-Class Semantic Segmentation with FMCW Radar. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 431–436. [Google Scholar] [CrossRef]
  30. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
  31. Jia, F.; Li, C.; Bi, S.; Qian, J.; Wei, L.; Sun, G. TC-Radar: Transformer-CNN Hybrid Network for Millimeter-Wave Radar Object Detection. Remote Sens. 2024, 16, 2881. [Google Scholar] [CrossRef]
  32. Zhang, L.; Zhang, X.; Zhang, Y.; Guo, Y.; Chen, Y.; Huang, X.; Ma, Z. PeakConv: Learning Peak Receptive Field for Radar Semantic Segmentation. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 17577–17586. [Google Scholar] [CrossRef]
  33. Li, T.; Zhang, L.; Zhang, Y.; Hu, Z.; Pi, P.; Lu, Z.; Liao, Q.; Ma, Z. AdaPKC: PeakConv with Adaptive Peak Receptive Field for Radar Semantic Segmentation. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 10–15 December 2024; pp. 136545–136575. [Google Scholar]
  34. Rohling, H. Radar CFAR Thresholding in Clutter and Multiple Target Situations. IEEE Trans. Aerosp. Electron. Syst. 1983, 19, 608–621. [Google Scholar] [CrossRef]
  35. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar] [CrossRef]
  36. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2017, arXiv:1412.6980. [Google Scholar] [CrossRef]
  37. Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep High-Resolution Representation Learning for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3349–3364. [Google Scholar] [CrossRef] [PubMed]
  38. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 9992–10002. [Google Scholar] [CrossRef]
39. Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J.M.; Luo, P. SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. In Proceedings of the Advances in Neural Information Processing Systems, Virtual, 6–14 December 2021; pp. 12077–12090. [Google Scholar]
40. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 318–327. [Google Scholar] [CrossRef] [PubMed]
41. Sudre, C.H.; Li, W.; Vercauteren, T.; Ourselin, S.; Jorge Cardoso, M. Generalised Dice Overlap as a Deep Learning Loss Function for Highly Unbalanced Segmentations. In Proceedings of the International Workshop on Deep Learning in Medical Image Analysis, Québec City, QC, Canada, 14 September 2017; pp. 240–248. [Google Scholar] [CrossRef]
Figure 1. Illustration of the radar surveillance process in the KuRALS dataset. (a) The radar system is capable of detecting targets across a wide range of scenarios, including aerial UAVs, terrestrial pedestrians and cars, and marine ships. (b) To support detection algorithm optimization and evaluation, we provide precise annotations of target status through a combination of Global Positioning System (GPS) auto-labeling and algorithmic correction. By analyzing the radar range–Doppler (RD) spectrogram, the detection algorithm retrieves target distance, velocity and category information. Additionally, we calculate the azimuth and elevation of each target by utilizing the directional beam angle of the transmitted signal.
Figure 2. Illustration of the preprocessing pipeline for PD radar data. In this process, narrow pulse wave and wide pulse wave data are integrated into a unified single-frame RD map.
Figure 3. Representative intermediate results during preprocessing of PD radar data: (a) original narrow pulse RD map, (b) original wide pulse RD map, (c) narrow pulse RD map after zero-frequency elimination, (d) wide pulse RD map after zero-frequency elimination and (e) fused RD map. The zero-frequency elimination effectively suppresses strong background clutter, while the fusion of narrow and wide pulse data provides comprehensive range coverage. Please note that the target’s peak position is indicated by a red asterisk.
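A minimal sketch of the zero-frequency elimination and narrow/wide-pulse fusion illustrated in Figures 2 and 3. It assumes an fftshifted RD map (zero Doppler at the center column, range along the first axis) and a simple range-gated fusion rule; the `guard_bins` width and the `crossover_bin` parameter are hypothetical, not the paper's exact implementation.

```python
import numpy as np

def zero_frequency_elimination(rd_map, guard_bins=2):
    """Zero out the columns around zero Doppler, where stationary clutter
    concentrates. Assumes an fftshifted RD map of shape
    (range_bins, doppler_bins) with zero Doppler at the center column."""
    out = rd_map.copy()
    center = out.shape[1] // 2
    out[:, center - guard_bins : center + guard_bins + 1] = 0.0
    return out

def fuse_narrow_wide(rd_narrow, rd_wide, crossover_bin):
    """Range-gated fusion: take near-range rows from the narrow-pulse map
    and far-range rows from the wide-pulse map (range along axis 0)."""
    fused = rd_wide.copy()
    fused[:crossover_bin, :] = rd_narrow[:crossover_bin, :]
    return fused
```

In practice the crossover range would be chosen from the per-scenario detection ranges reported in Table 3.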
Figure 4. Comparison of processed RD data frames containing different target categories collected by the CW radar and the PD radar in the range–Doppler 2D representation. RD data frames collected by the CW radar: (a) UAV, (b) pedestrian, (c) car. RD data frames collected by the PD radar: (d) UAV, (e) pedestrian, (f) car, (g) boat. To facilitate observation, we provide zoomed-in images of the targets outlined in red.
Figure 5. Comparison of processed RD data frames containing different target categories collected by the CW radar and the PD radar in the range–Doppler–amplitude 3D representation. RD data frames collected by the CW radar: (a) UAV, (b) pedestrian, (c) car. RD data frames collected by the PD radar: (d) UAV, (e) pedestrian, (f) car, (g) boat. To facilitate observation, we provide zoomed-in images of the targets outlined in red, and the coordinates and amplitude of the target’s peak position are also presented. Please note that the target’s peak position is indicated by a red asterisk.
Figure 6. Illustration of the RD map annotation pipeline. We first compute the real-world coordinates of the observed targets and transform them into the RD coordinate system. Subsequently, the peak responses in the RD map are leveraged to refine the target positions, which are then expanded into robust rectangular regions for annotation. Note that NMS denotes the non-maximum suppression operation.
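The peak-refinement and box-expansion steps of the annotation pipeline in Figure 6 can be sketched as follows. The search radius and box size are illustrative assumptions, and the GPS-to-RD coordinate transform and NMS steps are omitted.

```python
import numpy as np

def refine_and_box(rd_map, coarse_r, coarse_d, search=5, box=(3, 7)):
    """Refine a GPS-derived coarse position (range bin, Doppler bin) to the
    strongest RD response in a local search window, then expand the peak
    into a rectangular annotation region (r0, r1, d0, d1)."""
    r0 = max(coarse_r - search, 0)
    d0 = max(coarse_d - search, 0)
    window = rd_map[r0 : coarse_r + search + 1, d0 : coarse_d + search + 1]
    pr, pd = np.unravel_index(np.argmax(window), window.shape)
    pr, pd = pr + r0, pd + d0          # peak position in full-map coordinates
    hr, hd = box[0] // 2, box[1] // 2  # half-extents of the annotation box
    return (max(pr - hr, 0), pr + hr + 1, max(pd - hd, 0), pd + hd + 1)
```

A wider Doppler extent than range extent (as in `box=(3, 7)`) reflects that micro-Doppler spread typically occupies more Doppler bins than range bins.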
Figure 7. Visualization of automatically annotated samples. The first and second rows correspond to one frame from the KuRALS-CW and KuRALS-PD datasets, respectively. First row: (a) RD map, (b) pixel-wise mask. Second row: (c) RD map, (d) pixel-wise mask. In the pixel-wise mask, different colors denote different classes. Black: background, red: UAV, yellow: pedestrian, cyan: car, green: boat. For better visualization, zoomed-in views of the targets outlined in white are provided.
Figure 8. Category distribution across the KuRALS-CW dataset.
Figure 9. Category distribution across the KuRALS-PD dataset.
Figure 10. Framework of KuRALS-Net. k and d are the kernel size and dilation rate of 2D convolution, respectively, and C is the number of classes. We indicate the channel numbers of input and output feature maps for each module above and below the module.
Figure 11. Comparison between NBS loss and CE loss for background samples. p₁ represents the prediction probability for the background class. Superscripts in NBS¹ and NBS² indicate different settings of NBS loss, as explained in the main text. Compared to CE loss, NBS loss applies a suppression effect on the loss of noisy (low-probability) background samples.
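The suppression idea in Figure 11 can be illustrated numerically. The published NBS variants may use a different suppression function; the sketch below is only a hedged illustration of thresholded background suppression, with the `softmax` helper, the τ value and the zeroing rule as assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nbs_loss(logits, targets, tau=0.2, bkg_class=0):
    """Per-sample cross-entropy, except that background samples whose
    predicted background probability falls below tau are treated as noisy
    and contribute zero loss (the suppression effect sketched in Figure 11)."""
    probs = softmax(logits, axis=-1)              # (N, C) class probabilities
    idx = np.arange(len(targets))
    ce = -np.log(probs[idx, targets] + 1e-12)     # standard CE per sample
    noisy_bkg = (targets == bkg_class) & (probs[:, bkg_class] < tau)
    ce[noisy_bkg] = 0.0                           # suppress noisy background
    return ce.mean()
```

With τ = 0 no background sample is ever suppressed and the loss reduces to plain CE, which matches the role of τ examined in Figures 15 and 16.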
Figure 12. Visual comparison of different models. The top row and the bottom row show the segmentation results of different models on a frame from KuRALS-PD test set and KuRALS-CW test set, respectively. Each row corresponds to the same radar frame. Top row: (a) RD map input, (b) pixel-wise annotation, (c) HRNet, (d) SegFormer-B1, (e) RSS-Net, (f) KuRALS-Net (ours). Bottom row: (g) RD map input, (h) pixel-wise annotation, (i) HRNet, (j) SegFormer-B1, (k) RSS-Net, (l) KuRALS-Net (ours). The vertical axis represents the range dimension of the RD map, while the horizontal axis represents the Doppler dimension. Please note that the ghost in this PD radar RD map corresponds to the remaining zero-frequency interference. To facilitate clearer observation of the input RD data and the segmentation results, we provide zoomed-in images of the target areas outlined in red. Different colors represent different classes. Black: background, red: UAV, yellow: pedestrian, cyan: car, green: boat.
Figure 13. Illustration of the ROC curves of 2D CA-CFAR on the KuRALS-PD and KuRALS-CW datasets. Note that CA-CFAR performs the binary discrimination between foreground and background classes.
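As a reference for the CA-CFAR baseline evaluated in Figure 13, a minimal 2D cell-averaging CFAR can be sketched as follows. The guard/training window sizes and the scaling factor are illustrative values, not the settings used in the paper.

```python
import numpy as np

def ca_cfar_2d(rd_map, guard=1, train=2, scale=3.0):
    """Flag cells whose amplitude exceeds scale times the mean of the
    surrounding training ring; a (2*guard+1)^2 block around the cell under
    test (CUT) is excluded from the noise estimate. Border cells within
    guard+train bins of the edge are left undetected."""
    h, w = rd_map.shape
    k = guard + train
    det = np.zeros((h, w), dtype=bool)
    for i in range(k, h - k):
        for j in range(k, w - k):
            win = rd_map[i - k : i + k + 1, j - k : j + k + 1].astype(float)
            win[train : train + 2 * guard + 1,
                train : train + 2 * guard + 1] = np.nan  # mask guard cells + CUT
            det[i, j] = rd_map[i, j] > scale * np.nanmean(win)
    return det
```

Sweeping `scale` trades detection probability against false-alarm rate, which is how the ROC curves in Figure 13 would be traced out.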
Figure 14. Comparison of class-wise confusion matrices between CE loss and WCE loss on the KuRALS-CW dataset. The top row corresponds to results on the training set, and the bottom row corresponds to results on the test set. Red indicates performance degradations of WCE loss relative to CE loss, while green indicates improvements.
Figure 15. Effect of the threshold parameter τ in NBS loss on the KuRALS-CW dataset. KuRALS-Net is trained with two versions of NBS loss using τ ∈ [0, 0.8] to evaluate the influence of this key parameter.
Figure 16. Effect of the threshold parameter τ in NBS loss on the KuRALS-PD dataset.
Table 1. Parameter configurations for the CW radar.
Parameter | Value
Frequency | 16 GHz
Chirp Interval | 370 μs
Frame Rate | 21 FPS
Maximum Range | 6371.9 m
Range Resolution | 3.1128 m
Maximum Radial Velocity | 12.67 m/s
FFT Radial Velocity Resolution | 0.198 m/s
Field of View (Azimuth) | 360°
Azimuth Resolution | 7.1°
Number of Chirps per Frame | 128
Number of Samples per Chirp | 2048
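The velocity figures in Table 1 are consistent with the standard Doppler relations v_max = λ/(4·T_chirp) and Δv = λ/(2·N·T_chirp), with λ = c/f_c. A quick check, assuming these textbook formulas apply to this system:

```python
# Carrier frequency, chirp interval and chirps per frame are taken from
# Table 1; the Doppler relations below are the standard formulas, assumed
# (not stated in the paper) to apply to this radar.
c = 3e8        # speed of light (m/s)
f_c = 16e9     # carrier frequency (Hz)
T = 370e-6     # chirp interval (s)
N = 128        # chirps per frame

wavelength = c / f_c                # ~0.0187 m
v_max = wavelength / (4 * T)        # maximum unambiguous radial velocity
v_res = wavelength / (2 * N * T)    # FFT radial velocity resolution

print(f"v_max = {v_max:.2f} m/s")   # matches Table 1: 12.67 m/s
print(f"v_res = {v_res:.3f} m/s")   # matches Table 1: 0.198 m/s
```

The same relations applied to the PD radar parameters in Table 2 (T = 83.3 μs, N = 64) land close to, though not exactly on, the tabulated 56.61 m/s and 1.769 m/s, suggesting a slightly different effective PRI.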
Table 2. Parameter configurations for the PD radar.
Parameter | Value
Frequency | 16 GHz
Chirp Interval | 83.3 μs
Frame Rate | 188 FPS
Maximum Range | 5992.5 m
Range Resolution | 7.5 m
Maximum Radial Velocity | 56.61 m/s
FFT Radial Velocity Resolution | 1.769 m/s
Field of View (Azimuth) | 360°
Azimuth Resolution | 1.9°
Field of View (Elevation) | 25°
Elevation Resolution | 5°
Number of Chirps per Frame | 64
Number of Samples per Chirp | 800
Table 3. Detection range of the narrow and wide pulse wave PD radars in different observation scenarios.
Scenario | Narrow Pulse Wave Detection Range (m) | Wide Pulse Wave Detection Range (m)
Aerial Scene | 132.5–1625 | 1305–5947.5
Land Surface | 0–1147.5 | 855–4447.5
Sea Surface | 37.5–1020 | 892.5–5085
Table 4. Scenario statistics for KuRALS dataset.
Scenarios | KuRALS-CW # of Seqs | # of Frames | Duration | KuRALS-PD # of Seqs | # of Frames | Duration
Aerial Scene | 8 | 1544 | 60.2 min | 3 | 5671 | 4 min
Land Surface | 1 | 1042 | 20.4 min | 4 | 9781 | 13.6 min
Sea Surface | - | - | - | 2 | 203 | 21.2 min
Overall | 9 | 2586 | 80.6 min | 9 | 15,655 | 38.8 min
Table 5. Range and velocity distributions for KuRALS dataset.
Category | KuRALS-CW Range (m) | KuRALS-CW Doppler Velocity (m/s) | KuRALS-PD Range (m) | KuRALS-PD Doppler Velocity (m/s)
UAV | 24.9–3900.33 | −12.47 to 12.67 | 555–3915 | −14.15 to 14.15
Pedestrian | 118.29–1627.99 | −4.55 to 2.38 | 82.5–517.5 | −5.31 to 5.31
Car | 336.18–1011.66 | −11.09 to 9.11 | 112.5–1102.5 | −12.38 to 14.15
Boat | - | - | 3277.5–4980 | −1.77 to 3.54
Table 6. Quantitative comparison of background interference level between KuRALS-CW and KuRALS-PD datasets. Upward arrows (↑) indicate that higher values are better, while downward arrows (↓) indicate that lower values are better.
Dataset | SNR ↑ | r_number ↓ | r_loss ↓
KuRALS-CW | 8.1 | 162.6 | 1.7
KuRALS-PD | 26.5 | 37.5 | 0.1
Table 7. RSS performance comparison of different models on the KuRALS-PD dataset. Upward arrows (↑) indicate that higher values are better, while downward arrows (↓) indicate that lower values are better. The best and second-best results are marked in bold and underlined, respectively. Bkg., Ped. and Boa. are abbreviations for background, pedestrian and boat, respectively.
Frameworks | # Params. (M) ↓ | IoU Bkg. | IoU UAV | IoU Ped. | IoU Car | IoU Boa. | mIoU | Dice Bkg. | Dice UAV | Dice Ped. | Dice Car | Dice Boa. | mDice
FCN8s [26] | 134.3 | 99.9 | 24.3 | 6.3 | 10.9 | 15.0 | 31.3 | 99.9 | 39.1 | 11.9 | 19.6 | 26.1 | 39.3
U-Net [27] | 17.3 | 99.9 | 43.0 | 12.2 | 44.0 | 34.0 | 46.6 | 99.9 | 60.1 | 21.8 | 61.1 | 50.7 | 58.7
DeepLabv3+ [28] | 58.8 | 99.9 | 38.1 | 37.0 | 61.2 | 30.2 | 53.3 | 99.9 | 55.2 | 54.0 | 75.9 | 46.3 | 66.3
HRNet [37] | 65.8 | 99.9 | 47.8 | 22.4 | 60.7 | 40.4 | 54.3 | 99.9 | 64.7 | 36.6 | 75.5 | 57.6 | 66.9
Swin-T [38] | 59.8 | 99.9 | 51.5 | 10.2 | 37.1 | 63.8 | 52.5 | 99.9 | 68.0 | 18.4 | 54.1 | 77.9 | 63.7
SegFormer-B0 [39] | 3.7 | 99.9 | 43.7 | 13.4 | 48.6 | 58.0 | 52.7 | 99.9 | 60.9 | 23.6 | 65.4 | 73.4 | 64.6
SegFormer-B1 [39] | 13.7 | 99.9 | 46.3 | 16.8 | 45.6 | 54.3 | 52.6 | 99.9 | 63.3 | 28.7 | 62.6 | 70.3 | 65.0
RSSNet [29] | 10.1 | 99.9 | 48.3 | 23.5 | 60.8 | 38.5 | 54.2 | 99.9 | 65.1 | 38.1 | 75.6 | 55.6 | 66.9
KuRALS-Net (ours) | 1.1 | 99.9 | 45.7 | 33.5 | 62.3 | 59.0 | 60.1 | 99.9 | 62.7 | 50.1 | 76.7 | 74.2 | 72.8
Table 8. RSS performance comparison of different models on the KuRALS-CW dataset. The notation of bold, underline and arrows follows Table 7. With far fewer parameters, KuRALS-Net achieves highly competitive performance compared to existing SoTA methods.
Frameworks | # Params. (M) ↓ | IoU Bkg. | IoU UAV | IoU Ped. | IoU Car | mIoU | Dice Bkg. | Dice UAV | Dice Ped. | Dice Car | mDice
FCN8s [26] | 134.3 | 99.9 | 21.6 | 7.0 | 4.5 | 33.3 | 99.9 | 35.5 | 13.0 | 8.6 | 39.3
U-Net [27] | 17.3 | 99.9 | 55.6 | 26.0 | 7.2 | 47.2 | 99.9 | 71.4 | 41.2 | 13.4 | 56.5
DeepLabv3+ [28] | 58.8 | 99.9 | 70.9 | 23.9 | 7.4 | 50.5 | 99.9 | 83.0 | 38.5 | 13.8 | 58.8
HRNet [37] | 65.8 | 99.9 | 76.1 | 20.7 | 8.2 | 51.3 | 99.9 | 86.5 | 34.3 | 15.1 | 59.0
Swin-T [38] | 59.8 | 99.9 | 79.0 | 22.1 | 6.4 | 51.9 | 99.9 | 88.2 | 36.2 | 12.0 | 59.1
SegFormer-B0 [39] | 3.7 | 99.9 | 51.0 | 10.5 | 14.5 | 44.0 | 99.9 | 67.5 | 19.0 | 25.4 | 53.0
SegFormer-B1 [39] | 13.7 | 99.9 | 55.8 | 14.9 | 24.4 | 48.8 | 99.9 | 71.6 | 25.9 | 39.3 | 59.2
RSSNet [29] | 10.1 | 99.9 | 70.5 | 21.7 | 35.0 | 56.8 | 99.9 | 82.7 | 35.6 | 51.9 | 67.5
KuRALS-Net (ours) | 1.2 | 99.9 | 71.5 | 30.1 | 21.6 | 55.8 | 99.9 | 83.3 | 46.3 | 35.5 | 66.3
Table 9. Statistical RSS performance of KuRALS-Net under different random seeds on the KuRALS dataset.
Dataset | IoU Bkg. | IoU UAV | IoU Ped. | IoU Car | IoU Boa. | mIoU | Dice Bkg. | Dice UAV | Dice Ped. | Dice Car | Dice Boa. | mDice
KuRALS-PD | 99.9 ± 0.0 | 46.0 ± 0.5 | 33.2 ± 0.8 | 62.3 ± 0.4 | 59.1 ± 0.5 | 60.1 ± 0.2 | 99.9 ± 0.0 | 62.7 ± 0.1 | 50.0 ± 0.5 | 76.9 ± 0.3 | 74.2 ± 0.1 | 72.7 ± 0.2
KuRALS-CW | 99.9 ± 0.0 | 71.6 ± 0.4 | 30.4 ± 0.3 | 22.1 ± 0.3 | - | 56.0 ± 0.2 | 99.9 ± 0.0 | 83.3 ± 0.4 | 46.5 ± 0.2 | 35.8 ± 0.2 | - | 66.4 ± 0.1
Table 10. Complexity and runtime comparison. Runtime is counted on a workstation with an NVIDIA RTX 3090 GPU and an Intel Xeon E5-2620 v4 CPU. Please note that runtime and FPS are directly convertible to each other. The notation of bold, underline and arrows follows Table 7.
Frameworks | KuRALS-PD # Params. (M) ↓ | MACs (G) ↓ | Memory (MB) ↓ | Runtime (ms) ↓ | FPS ↑ | KuRALS-CW # Params. (M) ↓ | MACs (G) ↓ | Memory (MB) ↓ | Runtime (ms) ↓ | FPS ↑
FCN8s | 134.3 | 21.6 | 908.3 | 4.6 | 218 | 134.3 | 109.0 | 921.1 | 10.9 | 92
U-Net | 17.3 | 31.4 | 183.9 | 4.4 | 227 | 17.3 | 153.8 | 638.3 | 15.9 | 63
DeepLabv3+ | 58.8 | 49.0 | 296.6 | 19.3 | 52 | 58.8 | 250.5 | 379.8 | 31.7 | 32
HRNet | 65.8 | 18.3 | 300.0 | 51.5 | 19 | 65.8 | 92.6 | 483.0 | 54.0 | 19
Swin-T | 59.8 | 47.2 | 333.9 | 19.8 | 51 | 59.8 | 231.5 | 601.1 | 29.5 | 34
SegFormer-B0 | 3.7 | 1.3 | 57.3 | 10.5 | 95 | 3.7 | 6.6 | 224.0 | 10.8 | 93
SegFormer-B1 | 13.7 | 2.6 | 96.0 | 11.0 | 91 | 13.7 | 12.9 | 266.1 | 11.3 | 89
RSSNet | 10.1 | 14.4 | 76.3 | 3.6 | 275 | 10.1 | 70.6 | 226.5 | 8.6 | 116
KuRALS-Net (ours) | 1.1 | 32.3 | 122.0 | 5.5 | 181 | 1.2 | 154.4 | 540.7 | 18.7 | 53
Table 11. Latency of the radar map generation and preprocessing processes. The data generation stage primarily consists of FFT operations, while the preprocessing stage includes zero-frequency elimination, normalization and fusion operations.
Operation | Runtime on KuRALS-PD | Runtime on KuRALS-CW
FFT | 0.2 ms | 1.1 ms
Preprocessing | 2.1 ms | 0.4 ms
Overall | 2.3 ms | 1.5 ms
Table 12. Surveillance performance evaluation of KuRALS-Net on the KuRALS dataset. Frg. is the abbreviation for foreground and denotes the overall performance across all foreground target classes. Upward arrows (↑) indicate that higher values are better, while downward arrows (↓) indicate that lower values are better.
Dataset | PD UAV (%) ↑ | PD Ped. (%) ↑ | PD Car (%) ↑ | PD Boa. (%) ↑ | PD Frg. (%) ↑ | FAR UAV ↓ | FAR Ped. ↓ | FAR Car ↓ | FAR Boa. ↓ | FAR Frg. ↓
KuRALS-PD | 47.4 | 38.5 | 63.6 | 68.8 | 54.8 | 3.2 × 10^-6 | 1.1 × 10^-5 | 2.3 × 10^-6 | 6.8 × 10^-7 | 8.0 × 10^-6
KuRALS-CW | 88.7 | 44.7 | 35.2 | - | 76.6 | 5.1 × 10^-6 | 6.1 × 10^-6 | 1.0 × 10^-6 | - | 1.0 × 10^-5
Table 13. Investigation of KuRALS-Net module optimization on KuRALS-PD dataset. The notation of bold, underline and arrows follows Table 7.
Method | # Params. (M) ↓ | MACs (G) ↓ | Memory (MB) ↓ | FPS ↑ | mIoU (%) ↑ | mDice (%) ↑
KuRALS-Net | 1.1 | 32.3 | 122.0 | 181 | 60.1 | 72.8
KuRALS-Net w/o ASPP | 0.7 | 27.3 | 112.3 | 251 | 39.0 | 46.1
KuRALS-Net (ASPP→ADA) | 1.3 | 30.8 | 192.8 | 73 | 57.6 | 70.2
KuRALS-Net w/ PKC | 1.1 | 33.4 | 539.5 | 64 | 61.0 | 73.4
KuRALS-Net w/ AdaPKC θ | 1.1 | 33.5 | 762.5 | 56 | 61.3 | 74.3
KuRALS-Net w/ AdaPKC ξ | 1.1 | 33.4 | 565.2 | 57 | 63.7 | 75.9
Table 14. Investigation of KuRALS-Net module optimization on KuRALS-CW dataset. The notation of bold, underline and arrows follows Table 7.
Method | # Params. (M) ↓ | MACs (G) ↓ | Memory (MB) ↓ | FPS ↑ | mIoU (%) ↑ | mDice (%) ↑
KuRALS-Net | 1.2 | 154.4 | 540.7 | 53 | 55.8 | 66.3
KuRALS-Net w/o ASPP | 0.8 | 135.8 | 516.0 | 67 | 47.4 | 54.0
KuRALS-Net (ASPP→ADA) | 1.3 | 144.5 | 1076.6 | 22 | 56.2 | 66.1
KuRALS-Net w/ PKC | 1.2 | 157.1 | 1365.8 | 20 | 57.6 | 67.6
KuRALS-Net w/ AdaPKC θ | 1.2 | 157.3 | 1916.6 | 21 | 59.5 | 68.8
KuRALS-Net w/ AdaPKC ξ | 1.2 | 157.1 | 1427.1 | 21 | 58.6 | 68.2
Table 15. Performance comparison of KuRALS-Net trained with different losses on KuRALS-CW dataset. The notation of bold, underline and arrows follows Table 7.
Loss | IoU Bkg. | IoU UAV | IoU Ped. | IoU Car | mIoU | Dice Bkg. | Dice UAV | Dice Ped. | Dice Car | mDice
Dice | 99.9 | 4.4 | 14.1 | 0.0 | 29.6 | 99.9 | 8.5 | 24.7 | 0.0 | 33.3
GDice | 99.9 | 80.2 | 34.3 | 0.0 | 53.6 | 99.9 | 89.0 | 51.1 | 0.0 | 60.0
CE | 99.9 | 71.5 | 30.1 | 21.6 | 55.8 | 99.9 | 83.3 | 46.3 | 35.5 | 66.3
wCE | 99.9 | 25.3 | 8.6 | 23.7 | 39.4 | 99.9 | 40.4 | 15.9 | 38.3 | 48.6
Focal | 99.9 | 75.6 | 29.1 | 35.3 | 60.0 | 99.9 | 86.1 | 45.1 | 52.2 | 70.9
NBS¹ (ours) | 99.9 | 81.0 | 28.6 | 50.4 | 65.0 | 99.9 | 89.5 | 44.5 | 67.0 | 75.3
NBS² (ours) | 99.9 | 81.0 | 32.5 | 53.9 | 66.8 | 99.9 | 89.5 | 49.0 | 70.1 | 77.1
Table 16. Performance comparison of KuRALS-Net trained with different losses on KuRALS-PD dataset. The notation of bold, underline and arrows follows Table 7.
Loss | IoU Bkg. | IoU UAV | IoU Ped. | IoU Car | IoU Boa. | mIoU | Dice Bkg. | Dice UAV | Dice Ped. | Dice Car | Dice Boa. | mDice
Dice | 99.6 | 0.0 | 0.3 | 42.5 | 0.0 | 28.5 | 99.8 | 0.0 | 0.6 | 59.7 | 0.0 | 32.0
GDice | 99.9 | 26.5 | 30.7 | 33.6 | 0.0 | 38.2 | 99.9 | 41.9 | 47.0 | 50.3 | 0.0 | 47.8
CE | 99.9 | 45.6 | 42.1 | 60.8 | 61.7 | 62.0 | 99.9 | 62.6 | 59.3 | 75.6 | 76.3 | 74.8
wCE | 99.9 | 44.2 | 32.0 | 61.1 | 50.5 | 57.6 | 99.9 | 61.3 | 48.5 | 75.8 | 67.1 | 70.5
Focal | 99.9 | 53.3 | 29.2 | 73.1 | 46.0 | 60.3 | 99.9 | 69.5 | 45.2 | 84.5 | 63.0 | 72.4
NBS¹ (ours) | 99.9 | 46.2 | 34.3 | 71.1 | 66.9 | 63.7 | 99.9 | 63.2 | 51.1 | 83.1 | 80.2 | 75.5
NBS² (ours) | 99.9 | 34.6 | 59.4 | 67.6 | 51.6 | 62.6 | 99.9 | 51.4 | 74.6 | 80.7 | 68.1 | 74.9
Table 17. Illustration of the class imbalance and average CE loss comparison of background and foreground classes in KuRALS. In both KuRALS-CW and KuRALS-PD datasets, the number of background pixels significantly exceeds that of foreground pixels, yet its average CE loss is much lower than that of foreground pixels after training convergence.
Metric | KuRALS-CW Background | KuRALS-CW Foreground | KuRALS-PD Background | KuRALS-PD Foreground
# of Pixels | 5.2 × 10^8 | 1.9 × 10^4 | 6.4 × 10^8 | 1.9 × 10^5
Average CE Loss | 2.1 × 10^-3 | 6.4 × 10^-3 | 3.3 × 10^-2 | 8.0 × 10^-2
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Li, T.; Liao, Q.; Zhang, Y.; Zhang, X.; Lu, Z.; Zhang, L. KuRALS: Ku-Band Radar Datasets for Multi-Scene Long-Range Surveillance with Baselines and Loss Design. Remote Sens. 2026, 18, 173. https://doi.org/10.3390/rs18010173
