Article

Radar Timing Range–Doppler Spectral Target Detection Based on Attention ConvLSTM in Traffic Scenes

1 School of Information Science and Technology, Donghua University, Shanghai 201620, China
2 School of Microelectronic and Communication Engineering, Chongqing University, Chongqing 400044, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(17), 4150; https://doi.org/10.3390/rs15174150
Submission received: 25 July 2023 / Revised: 19 August 2023 / Accepted: 22 August 2023 / Published: 24 August 2023

Abstract

With the development of autonomous driving and the emergence of various intelligent traffic scenarios, object detection technology based on deep learning is increasingly applied to real traffic scenes. Commonly used detection devices include LiDAR and cameras, but their performance degrades sharply at night and in bad weather because of their sensitivity to lighting conditions. Since traffic-scene target detection must be deployed at scale, the advantages of millimeter-wave radar, such as low cost and insensitivity to the external environment, have come to the fore: it overcomes these harsh conditions and provides strong support for safe driving on the road. In this work, we propose a deep-learning-based object detection method that operates on the radar range–Doppler spectrum in traffic scenarios. The algorithm uses YOLOv8 as the basic architecture, makes full use of the time-series characteristics of range–Doppler spectrum data in traffic scenarios, and introduces the ConvLSTM network to exploit its ability to process time-series data. In order to improve the model's ability to detect small objects, an efficient and lightweight Efficient Channel Attention (ECA) module is introduced. Through extensive experiments, our model shows better performance than other state-of-the-art methods on two publicly available radar datasets, CARRADA and RADDet. Whereas other mainstream methods only achieve 30–60% mAP at an IOU of 0.3, our model achieves 74.51% and 75.62% on the RADDet and CARRADA datasets, respectively, and exhibits better robustness and generalization ability.

Graphical Abstract

1. Introduction

With the continuous development of autonomous driving technology, the requirements for the safety and stability of vehicle driving are becoming increasingly stringent. Advanced driver assistance systems (ADAS) play a key role in this context. Current intelligent vehicles are equipped with various sensors, such as cameras, LiDARs, and millimeter-wave radars, through which the vehicle perceives itself and its surroundings and makes decisions. The two most commonly used sensors are cameras and LiDARs. Cameras capture a large amount of image information and provide rich semantic information, while LiDARs capture the distance and speed of surrounding objects. The information returned by these sensors helps vehicles perceive the surrounding environment and adjust promptly to changes in it, supporting safe driving. Unfortunately, although cameras and LiDAR provide high-resolution information, they still have limitations. Cameras are highly sensitive to lighting: in dark or low-light conditions, or under strong glare during the day, the semantic information they can provide drops sharply. LiDAR performs poorly in bad weather such as rain, snow, and haze, posing a serious safety hazard for vehicle driving. Millimeter-wave radar offers a remedy. Because of its poor angular resolution, large noise, and hard-to-interpret signals, it received little attention in past years, but by emitting electromagnetic waves it can measure the speed and distance of surrounding objects, is not affected by bad weather or lighting conditions, and thus compensates for the shortcomings of cameras and LiDAR with better stability and reliability. Therefore, millimeter-wave radar is now widely used in the military, aerospace, transportation, Internet of Vehicles, and other fields.
Object classification, target detection, and segmentation are the main tasks in the field of computer vision, and detection and classification in particular are essential for the development of intelligent driving. Facing complex traffic scenes, vehicles need to perceive the environment and respond quickly, which imposes both accuracy and real-time requirements. Over the past decade, deep learning has developed rapidly and achieved major breakthroughs, and related methods have been successfully applied to cameras and LiDARs [1,2,3]. However, due to the limitations of cameras and LiDARs, it is difficult to detect targets around the vehicle in harsh environments, so this paper turns to a different radar data format. Radar data can be represented as a list of targets (point cloud) or as a raw data tensor (range–Doppler or range–angle–Doppler map). The target list is the default radar data format and contains only low-level information about targets around the vehicle, such as position, velocity, and radar cross-section [4]. Point clouds are usually associated with LiDAR processing and lose information in harsh environments, whereas the raw data tensor is a relatively low-level and widely used data format that preserves most of the information. Therefore, in this paper, radar range–Doppler (RD) maps are used as the input for object detection. Since using range–Doppler data as the input for traffic-scene target detection is a new application scenario, few researchers work in this field and few public radar tensor datasets exist. Fortunately, several research groups have released related datasets, such as CARRADA, CRUW, and RADDet. This article uses the RADDet dataset [5] and the CARRADA dataset [6].
Target detection based on radar data falls into two categories: traditional methods based on constant false alarm rate (CFAR) detection and its variants, and the currently more popular deep-learning-based methods. The CFAR detector is rooted in statistical theory [7] and decides whether a target exists through hypothesis testing. In practice, a threshold is derived using probability statistics, a window is slid over the range–Doppler spectrum, and the signal peak within the window is compared with the threshold: if the signal exceeds the threshold, a target is declared at the current position; otherwise, no target is declared. CFAR variants mainly differ in how the threshold is derived. Recently, deep-learning-based methods have become popular because of their good performance on high-dimensional data and big data. Building on its achievements in image recognition and target detection [8,9], deep learning has begun to be explored for radar target recognition and detection [10]. In this paper, we propose a new deep learning model for object detection and classification in complex traffic scenes based on range–Doppler maps (RD maps). The model uses YOLOv8 as the basic skeleton, innovatively adds a ConvLSTM structure with temporal prediction capability, and introduces an attention mechanism to capture key information, so as to adapt to the complexity and temporal structure of radar data and the small scale of radar targets, and to improve detection accuracy and real-time performance as much as possible. Experiments on the RADDet and CARRADA datasets show that our model improves object detection and classification performance on radar data and outperforms other state-of-the-art detection methods. The main contributions of this paper are as follows:
  • Because consecutive range–Doppler spectral frames in the datasets are related in time and space, we innovatively design a backbone network with a time-series prediction capability (ConvLSTM) to enhance the network's ability to extract temporal features;
  • Because the targets in the RD spectrum are very small and easily lost during detection, we introduce an improved lightweight and efficient channel attention mechanism (ECA) in the backbone and feature-fusion parts, which improves the network's ability to focus on key features;
  • The feature extraction network that combines temporal prediction and attention enhancement shows good generalization ability in radar target detection tasks, and its detection performance on the public CARRADA and RADDet datasets is better than that of other algorithms.
The organization of this paper is as follows. Section 2 introduces the basics of radar processing and some background work related to object detection. Section 3 then introduces our model. Experiments and results are collected in Section 4, while Section 5 discusses and concludes the paper.

2. Background and Related Work

2.1. Radar Sensor-Related Knowledge

Millimeter-wave radar is a radar system that uses millimeter waves for detection and imaging. Millimeter waves have strong penetrating power and high resolution, and can detect in environments such as bad weather and low illumination, so they are widely used. Radar target detection compares the emitted radar wave with the signal reflected by the target, and extracts the position, speed, size, and other information of the target from it for the subsequent detection process after processing.
One signal waveform commonly used in millimeter-wave radar sensors is the frequency-modulated continuous wave (FMCW). The FMCW radar transmits M chirp signals through the transmitting antenna, each chirp is sampled N times, and K receiving antennas receive the returned signal simultaneously. The mixer mixes the received signal with the transmitted signal to obtain the intermediate frequency (IF) signal, which yields an N × M × K output tensor containing the received signal in the time domain. We refer to this tensor as the analog-to-digital conversion (ADC) signal. As shown in Figure 1, the range information is extracted by performing a fast Fourier transform (FFT) on the ADC samples along the range dimension within a chirp, and the range–Doppler spectrum (RD map) is obtained after a further FFT along the velocity dimension. Finally, an FFT along the angle dimension extracts the angle information and yields the range–angle–Doppler data tensor (RAD cube). Because the RAD cube is very large and would impose a heavy computational burden, this paper only uses the RD spectrum obtained from the ADC data for target detection.
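As a rough illustration of the FFT chain just described, the following sketch converts a hypothetical complex ADC cube into a 2D range–Doppler magnitude map; the axis ordering, the non-coherent sum over antennas, and the dB scaling are illustrative assumptions rather than the datasets' exact processing.

```python
# Minimal sketch of the range/Doppler FFT chain, assuming a hypothetical
# ADC cube of shape (N samples, M chirps, K antennas).
import numpy as np

def adc_to_rd_map(adc_cube: np.ndarray) -> np.ndarray:
    """Turn a complex ADC cube (N x M x K) into a 2D range-Doppler magnitude map."""
    # Range FFT along the fast-time (sample) axis of each chirp.
    range_fft = np.fft.fft(adc_cube, axis=0)
    # Doppler FFT along the slow-time (chirp) axis, shifted so zero velocity sits mid-spectrum.
    rd_cube = np.fft.fftshift(np.fft.fft(range_fft, axis=1), axes=1)
    # Collapse the antenna axis (non-coherent sum) and convert to dB for visualization.
    rd_map = 20.0 * np.log10(np.abs(rd_cube).sum(axis=2) + 1e-12)
    return rd_map  # shape (N range bins, M Doppler bins)

if __name__ == "__main__":
    dummy = np.random.randn(256, 64, 8) + 1j * np.random.randn(256, 64, 8)
    print(adc_to_rd_map(dummy).shape)  # (256, 64)
```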

2.2. Traditional Approach Object Detection

For radar data, traditional target detection methods are mainly constant false alarm rate (CFAR) detection and its improved variants. For a given clutter background, CFAR detection requires a suitable adaptive threshold, which is usually determined by estimating the clutter power of reference cells adjacent to the cell under test (CUT). However, the clutter statistics are unknown a priori, and their estimation may be affected by interfering target echoes or by sudden power transitions within the reference window. In such cases, the detection performance of the radar system may suffer from a large decrease in the probability of detection (Pd) or a large increase in the probability of false alarm (Pfa). In fact, strong interfering targets raise the detection threshold and thereby mask weak primary targets, while clutter edges lower the threshold, leading to an excessive false alarm rate when the CUT lies in a high-power region. To handle these situations and improve detection performance, the most popular early detector is the Cell-Averaging CFAR (CA-CFAR) detector [11,12]. It compares the energy of the cell under test with a threshold derived from the reference cells (RC) to decide whether a target is present. However, in multi-target detection, CA-CFAR tends to produce a masking effect [13], where weak targets may be missed because strong targets inflate the threshold. The greatest-of CFAR (GO-CFAR) [14] and smallest-of CFAR (SO-CFAR) [15] detectors are variants of CA-CFAR that split the reference window into spatial subsets before averaging: GO-CFAR estimates the clutter from the subset with the larger average power, while SO-CFAR uses the smaller. The Variability Index CFAR (VI-CFAR) [16], on the other hand, adaptively selects a specific set of reference cells to estimate the background statistics; its performance is expected to degrade when the clutter distribution is complex and cannot be modeled by simple spatial partitioning. Ordered Statistic CFAR (OS-CFAR) [17] is an alternative that obtains the threshold by rank-ordering the data samples and performs better at clutter edges and in multi-target scenarios; compared with CA-CFAR, it incurs a small loss in detection rate under homogeneous clutter and has a higher computational cost. As a generalization of the OS-CFAR detector, Trimmed Mean CFAR (TM-CFAR) [18] estimates the distribution parameters by averaging a trimmed set of ranked values. It should be noted that each of the above detectors targets one or more specific clutter distribution scenarios, whereas realistic clutter is often difficult to predict, which frequently leads to poor performance of conventional CFAR detectors. The radar RD spectrum is likewise susceptible to noise interference in the traditional detection process, which degrades performance, and hyperspectral images suffer similarly under traditional processing methods. Therefore, some scholars use a mixed spatial attraction model (MSAM) based on linear Euclidean distance to exploit spatial correlation and better process images [19]. Other scholars proposed a method called target-constrained interference-minimized band selection (TCIMBS), which selects a subset of frequency bands for specific target detection while eliminating uninteresting targets and suppressing interference and background [20].
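For concreteness, the following is a minimal sketch of the 1D CA-CFAR decision rule described above, assuming square-law (power) samples; the training/guard cell counts and the false-alarm probability are example values, not settings used in this paper.

```python
# Illustrative 1D CA-CFAR detector along a single range (or Doppler) profile.
import numpy as np

def ca_cfar_1d(power: np.ndarray, num_train: int = 8, num_guard: int = 2,
               pfa: float = 1e-3) -> np.ndarray:
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    num_cells = 2 * num_train
    # Threshold multiplier for CA-CFAR under exponential (square-law) clutter.
    alpha = num_cells * (pfa ** (-1.0 / num_cells) - 1.0)
    for cut in range(num_train + num_guard, n - num_train - num_guard):
        lead = power[cut - num_guard - num_train: cut - num_guard]
        lag = power[cut + num_guard + 1: cut + num_guard + 1 + num_train]
        noise_est = (lead.sum() + lag.sum()) / num_cells   # cell-averaged clutter power
        detections[cut] = power[cut] > alpha * noise_est   # compare CUT with adaptive threshold
    return detections
```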
Some scholars have also proposed combining super-sharpening and multispectral pan-sharpening techniques for image fusion [21], which likewise offers a clear direction for exploring new methods built on traditional ones.

2.3. Deep Learning Object Detection

With the continuous development of deep learning, target detection, one of the core tasks in computer vision, has been evolving rapidly. There are currently two mainstream families of detection algorithms: one-stage detectors, represented by the YOLO series, and two-stage detectors, represented by Faster R-CNN. The R-CNN network proposed by Ross Girshick et al. pioneered deep-learning-based target detection; considering that features extracted by traditional image processing methods lack generalization and robustness, it uses convolutional neural networks to extract features [22]. The improved Fast R-CNN and Faster R-CNN models based on R-CNN were introduced subsequently, but these are all two-stage detection networks that first generate region proposals and then classify and localize them. Although the detection accuracy of this type of method is very high, a separate network is required to extract region proposals, so the speed bottleneck cannot be broken and training and inference are time-consuming [23,24]. While the R-CNN series was developing, Joseph Redmon and others proposed the first version of You Only Look Once, opening the era of one-stage detectors. It treats detection as a regression problem, using a single neural network to predict bounding-box locations and categories simultaneously, so it is very fast, although its accuracy is not as good as the R-CNN series [25]. A series of optimization strategies were subsequently applied to YOLOv1, yielding YOLOv2 [26]. YOLOv3 adjusted the network structure of YOLOv2, adopting the Darknet-53 backbone, using multi-scale features for detection, and replacing softmax with logistic functions for classification, achieving a good balance between detection speed and accuracy [27]. YOLOv4 uses Mosaic data augmentation, an anchor offset mechanism, a positive and negative sample matching strategy, and an improved loss function, achieving better detection results [28]. Compared with the previous version, the improvement of YOLOv5 mainly lies in its anchor processing mechanism, which accelerates the convergence of the network model [29]. YOLOv6 is mainly aimed at industrial applications of target detection [30]. YOLOv7 attempts to make the YOLO algorithm faster and better while supporting mobile GPU devices from the edge to the cloud [31].
These algorithms are built on a general framework for detecting and recognizing optical images, and the pre-trained weights of the models are also suitable for most image datasets. Notably, as image detection algorithms developed, the corresponding datasets gradually expanded, such as the ImageNet dataset [32] and the COCO dataset [33]. These RGB-image-based datasets provide the data foundation for the development of target detection.
Many researchers have also made efforts and contributions to apply deep-learning-based object detection algorithms to radar data of traffic scenes. Hsu Hao Wei et al. proposed a deep-learning-based convolutional neural network to reconstruct the RD map, so that the reconstructed RD map can be close to the RD map under FR [34]. Su et al. proposed a maritime target detection method based on radar signal graph data and graph convolution to apply graph-structured data to define detection units and represent temporal and spatial information of detection units [35]. Wang Chenxing et al. proposed a DL-based pulse Doppler radar UAV detection method [36]. He Jing et al. proposed a radar target detection method based on multi-task learning in a heterogeneous environment [37]. Wen Liwu proposed a two-step detection framework, in which feature differences and inter-frame correlations between moving objects and sea clutter are exploited for intra- and inter-frame detection, respectively [38]. Zheng Qiangwen proposed an object detection scheme based on distributed 1D CA-CFAR and region of interest (ROI) preprocessing [39]. RODNet proposed by Wang Yizhou et al. takes a small segment of an RF image as input to predict objects [40]. Wang Guohua et al. used the U-Net network for training and prediction [41]. Rodrigo Pérez et al. applied the real-time target detection system YOLO to preprocess the radar range–Doppler–angle power spectrum [42]. Roberto Franceschi et al. proposed a convolutional neural network-based model that can detect and localize targets in a multidimensional space of distance, velocity, azimuth, and elevation [43].
We were pleased to find that Colin Decourt et al. proposed an end-to-end trainable architecture combining convolutions and ConvLSTM to learn the spatiotemporal dependencies between consecutive frames [44]. Lin Zhihui et al. proposed a novel self-attention memory (SAM) to memorize features with long-range dependencies in both the spatial and temporal domains [45]. Song Hongmei et al. further enhanced DB-ConvLSTM [46] with a PDC-like structure by employing several dilated DB-ConvLSTMs to extract multi-scale spatiotemporal information. Zahidul Islam et al. proposed an efficient two-stream deep learning architecture (SepConvLSTM) using separable convolutional LSTM and a pre-trained MobileNet network [47]. These works provide us with effective ideas.
In traffic scenes, radar data such as range–Doppler spectra (RD maps) are acquired frame by frame and therefore carry temporal structure. This paper accordingly introduces the ConvLSTM model, which provides temporal prediction capability, into the YOLOv8 framework, so that the temporal characteristics of the radar data can assist the network during training and learning. In addition, an attention mechanism is introduced to emphasize the features that the network deems important and thereby improve detection accuracy.

3. Object Detection Method Based on Radar Range–Doppler Spectrum

The model we propose is an improved end-to-end trainable neural network based on the YOLOv8 model; the improvements mainly concern the backbone network and the optimization of other structures. First, because it is collected in traffic scenes, the radar range–Doppler dataset has time-series characteristics: each frame is related in time and space to the preceding and following frames, and through temporal memory across adjacent frames our network can capture and identify targets that are barely visible to the naked eye. Therefore, for the backbone network, we innovatively introduce the ConvLSTM network with temporal prediction capability and simplify its original structure so that it can accept the output of the preceding CSP module and adapt its own output to the input of the subsequent network. Second, because targets in the range–Doppler spectrum are small and inconspicuous, it is difficult for a generic detection network to handle them: each target occupies only a few pixels in the RD spectrum, and small targets such as pedestrians are almost point targets that are hard even for human eyes to distinguish. Therefore, we introduce an improved, lightweight, and efficient ECA attention module to enhance the network's feature extraction ability and its focus on key features. We place it after each CSP module in the backbone network and after the CSP modules in the feature-fusion region to strengthen feature extraction for small targets. In this section, we first preprocess the radar data so that they match the input format required by our model, then introduce the model improvements in detail, and finally describe the experimental setup for network training and inference.

3.1. Data Preprocessing

Before starting a new project, the first step is to process the original data so that it meets the input format requirements of the project. In this paper, we first reduce the dimensionality of the 3D complex ADC cube in the original dataset by summing the tensor along the angle dimension, retaining only the range and Doppler dimensions. To save computing resources and speed up network training, we convert the complex data into real data and finally obtain visualized RD spectra in batches. Since the labels in the original data are three-dimensional, after the radar data are reduced in dimension, the labels of the corresponding objects are also converted into two-dimensional coordinate information. We organize the processed RD maps and corresponding labels into a new dataset format and feed them into the network for training.
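The sketch below illustrates this kind of preprocessing under stated assumptions: the raw sample is treated as a complex range × Doppler × angle tensor, and the label column layout is hypothetical; the actual field names and axis order in RADDet/CARRADA may differ.

```python
# Sketch of the RD-map preprocessing: 3D complex cube -> 2D real map, 3D labels -> 2D boxes.
import numpy as np

def preprocess_sample(adc_rd_cube: np.ndarray, boxes_3d: np.ndarray):
    # Reduce the complex 3D tensor to a real 2D RD map (log magnitude) by summing over angle.
    rd_map = 20.0 * np.log10(np.abs(adc_rd_cube).sum(axis=-1) + 1e-12)
    # Normalize to [0, 1] so the map can be saved and visualized like an image.
    rd_map = (rd_map - rd_map.min()) / (rd_map.max() - rd_map.min() + 1e-12)
    # Hypothetical label layout: keep (r_min, r_max, d_min, d_max, class), drop angle columns.
    boxes_2d = boxes_3d[:, [0, 1, 2, 3, -1]]
    return rd_map.astype(np.float32), boxes_2d
```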

3.2. Object Detection Network Model

In this section, we propose a network architecture for object detection in range–Doppler spectra. Given the processed RD map as an input, a YOLOv8 network is used as the basic framework to design a network with time-series feature enhancement learning in the backbone. After an improved feature fusion layer, three YOLO heads are used for output. Based on the original YOLO v8, our model improves the detection performance in small target and complex background environments. The network structure diagram we designed is shown in Figure 2.

3.2.1. YOLOv8 Algorithm Description

The model structure of YOLOv8 can be divided into three parts, namely the backbone, neck, and head; its network flowchart is shown in Figure 3. When an image is input, the backbone network mainly performs feature extraction. A 2D convolution is applied first, and its output is connected through a residual structure to a CSP module; this operation is repeated four times, after which the SPPF module produces an adaptively sized output. The activation functions used here are all SiLU. SiLU is unbounded above and bounded below, smooth, and non-monotonic; it maintains good performance in deep networks, which helps improve the fitting ability of the model as depth increases. To fuse feature information of different scales, the neck uses three feature maps of different sizes extracted by the backbone for feature fusion. The deep feature layers are successively up-sampled and concatenated to obtain new feature layers, and the shallow feature layers are then successively down-sampled for a second concatenation to obtain the final feature-fusion layers. This fully combines shallow and deep features, makes full use of semantic information, and speeds up information propagation. The head of the model uses the currently mainstream decoupled head structure, which separates the classification and detection heads and replaces the anchor-based design with an anchor-free one. For loss computation, the TaskAlignedAssigner positive and negative sample matching strategy is adopted, and positive samples are selected according to the weighted scores of classification and regression. The loss consists of a classification branch and a regression branch, with no objectness branch as in earlier versions. The classification branch still uses BCE Loss, while the regression branch is bound to the integral representation proposed in Distribution Focal Loss, so Distribution Focal Loss is used together with CIoU Loss. The three loss terms are combined with a certain weight ratio.

3.2.2. ConvLSTM

ConvLSTM is a neural network architecture combining a convolutional neural network (CNN) and a long short-term memory network (LSTM), designed to process data with spatiotemporal structure, such as video or time-series signals. Its basic idea is to replace the fully connected operations inside the LSTM cell with convolutions, exploiting the strength of CNNs in processing spatial data such as images and videos. Its structure consists of an input convolutional stage, an LSTM layer, and an output layer. At each time step, the input is a three-dimensional tensor (channels, width, and height), which is processed by the convolutional stage and then passed to the LSTM layer. Each unit in the LSTM layer contains a memory cell and three gates—an input gate, a forget gate, and an output gate—which control what information is memorized, forgotten, and output, so as to better process time-series data. Finally, the output layer converts the output of the LSTM layer into the desired form. The basic mathematical model of ConvLSTM can be expressed as follows:
$i_t = \sigma(W_{xi} * X_t + W_{hi} * H_{t-1} + W_{ci} \circ C_{t-1} + b_i)$
$f_t = \sigma(W_{xf} * X_t + W_{hf} * H_{t-1} + W_{cf} \circ C_{t-1} + b_f)$
$C_t = f_t \circ C_{t-1} + i_t \circ \tanh(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c)$
$o_t = \sigma(W_{xo} * X_t + W_{ho} * H_{t-1} + W_{co} \circ C_t + b_o)$
$H_t = o_t \circ \tanh(C_t)$
The structure diagram of ConvLSTM is shown in Figure 4:
Here, $i_t$, $f_t$, and $o_t$ denote the input gate, forget gate, and output gate, respectively; $X_t$, $H_{t-1}$, and $C_{t-1}$ denote the input of the current cell and the output and cell state of the previous time step, respectively; $W$ and $b$ denote the convolution kernels and biases; $*$ denotes convolution, $\circ$ denotes the Hadamard (element-wise) product, and $\sigma$ is the S-shaped sigmoid function. ConvLSTM replaces the matrix multiplications of the original LSTM with convolution operations.
ConvLSTM was originally applied to video data captured by cameras, and the datasets built from radar RD spectra in real traffic scenes exhibit similar continuity in time and space. ConvLSTM can therefore model the temporal dependence in consecutive RD frames and capture spatial information through its convolution operations. This paper improves the original backbone of YOLOv8 by introducing the ConvLSTM structure before the shallow feature-fusion output layer, strengthening the backbone's ability to extract features from RD time-series data and adding a memory function. Note that before ConvLSTM is applied, the network output must be converted into the data dimensions that ConvLSTM accepts, and after ConvLSTM, the output must be converted back into dimensions that the CNN can accept. Our experiments show that ConvLSTM is very effective for modeling time-series data.
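The following is a minimal ConvLSTM cell in PyTorch following the equations above, with the peephole terms (the $W_{c\cdot} \circ C$ products) omitted for brevity; the channel counts and kernel size are illustrative, not the exact configuration used in our backbone.

```python
# Minimal ConvLSTM cell: convolutional gates instead of the fully connected gates of LSTM.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        # One convolution produces the stacked pre-activations of the four gates.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # cell state update
        h = o * torch.tanh(c)           # hidden state / output
        return h, c

# Usage sketch for a 10-frame sample x of shape (B, T, C, H, W):
#   cell = ConvLSTMCell(in_ch=C, hid_ch=C)
#   h = c = torch.zeros(B, C, H, W)
#   for t in range(T):
#       h, c = cell(x[:, t], (h, c))
```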

3.2.3. Add Efficient Channel Attention (ECA) Module

The attention mechanism originated from the study of human vision, and its purpose is to make the model focus selectively on specific parts of the input. A recent study proposed ECANet [48], a lightweight channel attention mechanism that achieves local cross-channel interaction without dimensionality reduction while adding as little model complexity as possible, and it substantially improves model performance. The model structure of ECA is shown in Figure 5. It improves on the channel attention of SENet by removing the fully connected layers and replacing them with a 1D convolution, whose kernel size k determines the scope of local cross-channel interaction. This not only reduces the number of model parameters but also makes the model lighter.
We add the ECA attention mechanism to the three feature output layers before FPN feature fusion and after the up-sampling stages during fusion to strengthen the network's focus on informative channels. This helps the network emphasize important features and suppress irrelevant ones during feature fusion and concatenation, and it is beneficial for suppressing the noise interference caused by the complex background in RD images.
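A minimal sketch of the ECA block as described above (global average pooling followed by a 1D convolution across channels and a sigmoid gate); the kernel size k = 3 is an illustrative choice rather than the value used in our network.

```python
# Minimal ECA channel attention block.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                                   # x: (B, C, H, W)
        w = self.pool(x)                                    # (B, C, 1, 1) channel descriptor
        w = self.conv(w.squeeze(-1).transpose(1, 2))        # 1D conv over the channel axis
        w = torch.sigmoid(w.transpose(1, 2).unsqueeze(-1))  # back to (B, C, 1, 1)
        return x * w                                        # reweight channels, no dim. reduction
```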

4. Experiments

The proposed model has been validated on the real radar traffic-scene datasets RADDet and CARRADA. For completeness, and to evaluate our model, in this section we compare the proposed method with currently popular object detection methods, including the two-stage detector Faster R-CNN and the one-stage detectors YOLOv3, YOLOv5, YOLOv7, YOLOv7-tiny, and YOLOv8.

4.1. Experimental Data

Our experimental data come from the public RADDet dataset provided by researchers at the University of Ottawa in Canada and the CARRADA dataset provided by Arthur Ouaknine et al. from McGill University. The RADDet dataset provides six traffic-scene categories, namely person, bicycle, car, truck, motorcycle, and bus. The CARRADA dataset provides three categories, namely pedestrian, cyclist, and car. The corresponding number of instances per category can be seen in Table 1. The original RADDet and CARRADA datasets provide 3D radar complex data (ADC cubes) with dimensions of 256 × 256 × 64, which would impose a severe computational burden during network training; we therefore preprocess the ADC data into two-dimensional real data. Figure 6 shows an RD spectrum from the RADDet dataset and the targets in it. Since the network contains the temporal prediction module ConvLSTM, the data must be fed as samples, each containing 10 frames of RD images, so the dataset is organized into multiple such samples. In this way, the network's ability to learn temporal prediction features can be maximized. We randomly divided the data into a training set and a test set at a ratio of 9:1 and randomly selected 10% of the training set as the validation set. To increase the reliability of the experimental results, we also re-divided the dataset, randomly splitting the original data into a training set and a test set at a ratio of 7:3 and again selecting 10% of the training set as the validation set. The specific numbers are given in Table 1.

4.2. Evaluation Indicators

We evaluate our models using mean average precision (mAP), a well-known metric for evaluating object detectors, and report precision and recall at intersection-over-union (IOU) thresholds of 0.3 and 0.5. The metrics are calculated as follows:
  • Precision: the proportion of detected targets that are correct:
    $\mathrm{Precision} = \frac{TP}{TP + FP}$
  • Recall: the proportion of all positive samples that are correctly identified:
    $\mathrm{Recall} = \frac{TP}{TP + FN}$
  • AP: the average precision of the detector over all recall levels, corresponding to the area under the PR curve:
    $\mathrm{AP} = \int_{0}^{1} P(r)\,dr$
  • mAP: the AP averaged over categories, which evaluates the performance of a multi-class detector:
    $\mathrm{mAP} = \frac{1}{C}\sum_{j \in C} AP_j$
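For reference, the following sketch shows how AP and mAP can be computed from score-sorted detections according to the formulas above; the matching of detections to ground truth at a given IOU threshold is assumed to have been done elsewhere, and the all-point trapezoidal integration is one common convention.

```python
# AP for one class from confidence-sorted detections, and mAP over classes.
import numpy as np

def average_precision(scores: np.ndarray, is_tp: np.ndarray, num_gt: int) -> float:
    """is_tp[i] indicates whether detection i was matched to a ground-truth box."""
    is_tp = is_tp.astype(bool)
    order = np.argsort(-scores)                  # sort detections by descending confidence
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    recall = tp / max(num_gt, 1)
    precision = tp / np.maximum(tp + fp, 1)
    # Area under the precision-recall curve (all-point, trapezoidal integration).
    return float(np.trapz(precision, recall))

def mean_average_precision(per_class_ap: dict) -> float:
    """mAP: average the per-class AP values."""
    return float(np.mean(list(per_class_ap.values())))
```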

4.3. Comparison Method and Training Details

To evaluate our proposed model, we choose other current mainstream object detection methods for comparison, mainly Faster R-CNN, YOLOv5, YOLOv7, YOLOv7-tiny, YOLOv8, and the RADDet and DAROD methods proposed by other researchers. All models take as input the RD maps obtained by preprocessing the RADDet and CARRADA datasets and are trained using the pre-trained weights provided with each model.
As shown in Table 2 and Table 3, we provide the settings of the experimental environment as well as the model parameters. For all subsequent experiments, unless otherwise specified, the parameters shown in the tables apply. All models are trained for 300 epochs. To speed up training, we freeze the backbone network for the first 50 epochs and set the batch size to 4. We use the Adam optimizer with betas (0.9, 0.999) and a weight decay of 5 × 10−4; the maximum learning rate is set to 1 × 10−3, the minimum learning rate to 1 × 10−5, and the learning rate follows a cosine schedule. This setup applies to all comparison methods in this paper. All experiments use the PyTorch deep learning framework and an NVIDIA GeForce RTX 2080Ti GPU for network training and testing.
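A minimal sketch of this optimizer and learning-rate setup using standard PyTorch APIs is given below; `model` stands for the detection network defined elsewhere, and the backbone-freezing and data-loading logic is omitted.

```python
# Adam optimizer with cosine learning-rate decay, matching the settings listed above.
import torch

def build_optimizer(model, epochs: int = 300):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                                 betas=(0.9, 0.999), weight_decay=5e-4)
    # Cosine decay from the maximum learning rate (1e-3) to the minimum (1e-5).
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=epochs, eta_min=1e-5)
    return optimizer, scheduler
```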

4.4. Analysis of Results

Figure 7 shows the comparison results of mainstream target detection methods on the RADDet dataset. It is easy to see that, except for our model, the other methods have missed detections, and the confidence of the detected objects is not high. One notable point about YOLOv5 is that it detects cars that the other models (except ours) cannot; surprisingly, however, it misses the bus target and does not deliver a satisfactory result. Figure 8 shows the comparative results of the various object detection methods on the CARRADA dataset. Because the CARRADA dataset has few categories, each range–Doppler spectrum contains only one or two targets with little occlusion between objects, so there are fewer missed detections. However, Faster R-CNN still misses detections, which may be related to the detection capability of the algorithm. The YOLO-series algorithms can basically detect the objects, but their detection confidences differ. At a glance, our model has the best detection performance.
Table 4 and Table 5 show the performance of the different object detection methods on the RADDet and CARRADA datasets when the dataset is split at ratios of 9:1 and 7:3, respectively. In particular, since the time spent in each training epoch is similar during training, the training and testing times in Table 4 and Table 5 are the times spent by each model in one training epoch. The best results are highlighted in bold and the second-best results are underlined. For a more intuitive view, Figure 9 shows the mAP of each model at IOU thresholds of 0.3 and 0.5. The figures and tables show that, among the seven target detection methods, our proposed model performs significantly better than the others on both datasets in most cases; it remains in close competition with YOLOv7 on the RADDet dataset and with Faster R-CNN on the CARRADA dataset, although a clear gap remains. All of the above results are obtained using the best weights from training for detection.
Overall, our model achieves the best mAP on both datasets and far exceeds the mAP of the other methods. Although it may not be the best in precision or recall compared with some methods, its precision and recall are not far behind the best results of the other models. Moreover, mAP measures the overall detection performance of a model; a high precision or recall alone does not indicate that a model is good.
Specifically, on the RADDet dataset, our model achieved the best overall results at IOUs of 0.3 and 0.5, although its detection precision was not as good as that of the YOLOv8 model. The gap is not large, however, and the recall of our model is much higher than that of YOLOv8, so there are gains and losses. Our model's FPS (frames per second) is lower than YOLOv8's, and its GFLOPS (floating-point operations) is higher. Compared with YOLOv7, our model's metrics are higher overall. Our model has the best overall performance, and YOLOv7 ranks second overall but has better FPS, which may be because our model is more complex, slightly reducing its computation and inference speed. For the time complexity, we consider the training time of each model for one epoch. When the ratio of the training set to the test set is 9:1, on both datasets the YOLOv5 model requires the least training time and has low time complexity, followed by the lightweight YOLOv7-tiny model. Because our model is built on the YOLOv8 structure, it is more complex and its training time is slightly longer, although it is still less complex than the YOLOv8 model. In terms of inference, YOLOv7-tiny is the fastest on the RADDet dataset, followed by YOLOv5; on the CARRADA dataset, YOLOv5 is the fastest, followed by our model. When the training-to-test ratio is 7:3, YOLOv5 still trains the fastest, followed by YOLOv7-tiny, with our model third. For inference, YOLOv5 is the fastest on the RADDet dataset, followed by DAROD, with our model third; on the CARRADA dataset, YOLOv5 remains the fastest, followed by YOLOv7-tiny, with our model third. Overall, although the time complexity of our model is not the lowest, it is only slightly higher than that of YOLOv5 and YOLOv7, and in terms of overall performance, our model achieves the best detection results.
On the CARRADA dataset, the overall performance of our model is still the best. Faster R-CNN also performs well and the two models are competitive, but overall the mAP, precision, model parameters, and FPS of our model are better than those of Faster R-CNN, and it detects targets to the greatest extent. Faster R-CNN has a higher recall, which leads to more false positives but fewer missed objects.
To improve the reliability of the experimental analysis, we also computed the movement speed of each category in the two datasets and then analyzed how well our model adapts to targets with different movement speeds. Figure 10a,b show the velocity distribution statistics of all targets in the RADDet and CARRADA datasets, respectively. We used the Doppler-dimension label data for the statistics and calculated the radial velocity of each object from the Doppler information given by its label. Figure 10 shows that the speeds of moving targets in the traffic scenes lie roughly between −13.5 m/s and 13.5 m/s.
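As a rough illustration of how a Doppler-bin label can be mapped to a radial velocity, the sketch below assumes 64 Doppler bins and an example velocity resolution chosen to roughly match the ±13.5 m/s span reported above; neither value is taken from the datasets' documentation.

```python
# Illustrative Doppler-bin -> radial-velocity conversion; num_bins and v_res are example values.
def doppler_bin_to_velocity(bin_idx: int, num_bins: int = 64, v_res: float = 0.42) -> float:
    # Zero velocity sits at the centre bin after the fftshift used in RD-map generation.
    return (bin_idx - num_bins // 2) * v_res

# Example: bin 0 maps to about -13.4 m/s, bin 63 to about +13.0 m/s.
```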
Combining the speed information in Figure 10 with the detection results of each model reported above, it can be seen that, for moving or stationary objects with different speeds in traffic road scenes, our proposed model outperforms the other models in overall performance.
For the RADDet and DAROD models, their mAP, precision, and recall are not outstanding, but their advantage is that the model parameters and GFLOPs are very small, which is conducive to model deployment. Our model and the other models, however, still need improvement in terms of deployment.
In conclusion, compared with other models, our model achieves better mAP and detection accuracy on two datasets, medium recall rate, and medium model parameters, GFLOPS and FPS. This shows that our model is more accurate in the target detection task. However, due to the attributes of RD data, it is difficult for us to detect all targets, and there are still missed and false detections.

4.5. Ablation Experiments

In this subsection, in order to know exactly the contribution of each module in our model, we will briefly introduce the performance exhibited by different network modules. Table 6 and Table 7 list the results obtained by different modules on the RADDet and CARRADA datasets. Figure 11 and Figure 12 show the prediction results of different modules on the RADDet and CARRADA datasets. In the figures, “YOLOv8” represents the original model architecture, “ConvLSTM” represents the network that only introduces the ConvLSTM module in the YOLOv8 backbone network, “ECA” represents the network that introduces the ECA attention mechanism in the three feature fusion layers output by the backbone network, and “Ours” represents the model structure we proposed. Figure 13 shows the line chart of the mAP performance of different models, which illustrates the effectiveness of our proposed model.
Our model is mainly an improvement of YOLOv8, with ConvLSTM and ECA as the principal added modules. Specifically, Table 6 and Table 7 show that ConvLSTM and ECA bring roughly similar improvements to YOLOv8, but ECA achieves its improvement with only a small change in the parameter count and FPS of YOLOv8, enhancing the model's attention to key features. ConvLSTM slightly increases the number of parameters of the original model and reduces the FPS, because it introduces additional parameters while providing memory for time-series data. Figure 12 shows that YOLOv8 has missed detections. When ECA works alone, it still does not recover the missed targets and only improves the confidence of the detected ones. When ConvLSTM works alone, the missed targets can be detected and the detection confidence generally improves, which shows that ConvLSTM exploits its time-series feature extraction ability. Combining Figure 11, Figure 12 and Figure 13, it can be seen that when ECA or ConvLSTM works alone, the detection accuracy on the two datasets is not high and is even lower than that of the YOLOv8 model. Our model combines the advantages of ECA and ConvLSTM: it uses temporal information to reduce the missed detection rate and also improves detection accuracy, allowing the model to reach its full potential. When the split ratio of the training and test sets is changed from the original 9:1 to 7:3, that is, when the proportion of training data decreases, each model changes in the same way, which fully shows that our model is reliable.
To examine the generality of the model, Figure 14 and Figure 15 show, in contrast to Figure 12 and Figure 13, the target detection results for the radar placed on a different traffic road in the RADDet dataset and the CARRADA dataset, respectively. The detection comparisons in the four figures show that, for target detection in different scenarios, the first three models suffer from missed detections or low confidence, whereas our model consistently combines the temporal modeling of ConvLSTM with the attention of ECA to achieve the best detection effect and the highest detection confidence, greatly improving detection accuracy over the YOLOv8 baseline.
In short, ConvLSTM learns timing information during feature extraction, and three ECA attention modules further improve the network feature mining ability. Although the two parts of the modules can improve the network performance when they play their respective roles, the improvement to the network is not large. In most cases, only when these two modules work together can the network perform stably and efficiently. Not only that, the training speed and inference speed of the network are also greatly improved in the case of synergy, which effectively proves the superiority of the proposed method. In summary, ablation experiments show that our proposed network model is beneficial for object detection in radar RD spectra of traffic scenes.

5. Conclusions

In this paper, a deep-learning-based object detection method built on the radar range–Doppler spectrum is proposed. According to the characteristics of the radar range–Doppler spectrum, we made improvements on the basis of the YOLOv8 model; the detection performance is mainly improved by modifying the backbone network and optimizing other parts. Compared with other known RD-map object detection models, our model fully utilizes the temporal information of the traffic-scene datasets and improves the model's focus on key objects through the attention mechanism. Since radar range–Doppler spectrum datasets are scarce, we used two public traffic-scene datasets, RADDet and CARRADA. We processed the raw radar data provided in the datasets into visualized range–Doppler spectra, on which our experiments were based. We compared the current mainstream target detection methods, namely Faster R-CNN, YOLOv5, YOLOv7, YOLOv7-tiny, YOLOv8, RADDet, and DAROD, with our proposed model. To observe the effectiveness of our model, we split the datasets into training and test sets at ratios of 9:1 and 7:3, respectively. The experimental results show that the improved model not only improves detection accuracy but also keeps the model parameters and complexity at a reasonable level. Inference on the range–Doppler spectra in the test set shows that our proposed model has better detection performance and robustness than other methods, which suffer from lower detection rates and missed detections. The experiments prove the effectiveness and feasibility of our proposed method, which is conducive to applying object detection in actual traffic scenes. However, our method increases the number of parameters and model complexity, making deployment difficult, and we hope to improve this in future work. At the same time, we will continue to study the characteristics of targets in the radar range–Doppler spectrum, propose more targeted optimization strategies, collect data with improved radar equipment, further expand the dataset, and cover more target types.

Author Contributions

Conceptualization, F.J. and J.T.; investigation, J.T. and X.L.; methodology, F.J. and J.T.; writing—original draft and formal analysis, F.J. and J.T.; writing—review and editing, J.Q. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (grant 62001064).

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers and editor for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gharineiat, Z.; Tarsha Kurdi, F.; Campbell, G. Review of Automatic Processing of Topography and Surface Feature Identification LiDAR Data Using Machine Learning Techniques. Remote Sens. 2022, 14, 4685. [Google Scholar] [CrossRef]
  2. Alaba, S.Y.; Ball, J.E. A Survey on Deep-Learning-Based LiDAR 3D Object Detection for Autonomous Driving. Sensors 2022, 22, 9577. [Google Scholar] [CrossRef]
  3. Zhou, Y.; Tuzel, O. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4490–4499. [Google Scholar]
  4. Decourt, C.; VanRullen, R.; Salle, D. DAROD: A Deep Automotive Radar Object Detector on Range-Doppler maps. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium, Aachen, Germany, 5–9 June 2022; pp. 112–118. [Google Scholar]
  5. Zhang, A.; Nowruzi, F.E.; Laganiere, R. RADDet: Range-Azimuth-Doppler based radar object detection for dynamic road users. In Proceedings of the 2021 18th Conference on Robots and Vision, Beijing, China, 18–22 August 2021; pp. 95–102. [Google Scholar]
  6. Ouaknine, A.; Newson, A.; Rebut, J.; Tupin, F.; Pérez, P. CARRADA Dataset: Camera and Automotive Radar with Range-Angle-Doppler Annotations. In Proceedings of the 2020 25th International Conference on Pattern Recognition, Milan, Italy, 13–18 September 2020; pp. 5068–5075. [Google Scholar]
  7. Liu, Y.; Zhang, S.; Suo, J.; Zhang, J.; Yao, T. Research on a new comprehensive CFAR (comp-CFAR) processing method. IEEE Access 2019, 7, 19401–19413. [Google Scholar] [CrossRef]
  8. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  9. Zhao, Z.Q.; Zheng, P.; Xu, S.T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
  10. Samaras, S.; Diamantidou, E.; Ataloglou, D.; Sakellariou, N.; Vafeiadis, A.; Magoulianitis, V.; Lalas, A.; Dimou, A.; Zarpalas, D.; Votis, K.; et al. Deep Learning on Multi Sensor Data for Counter UAV Applications—A Systematic Review. Sensors 2019, 19, 4837. [Google Scholar] [CrossRef]
  11. Kronauge, M.; Rohling, H. Fast two-dimensional CFAR procedure. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 1817–1823. [Google Scholar] [CrossRef]
  12. El-Darymli, K.; McGuire, P.; Power, D.; Moloney, C. Target detection in synthetic aperture radar imagery: A state-of-the-art survey. Remote Sens. 2013, 7, 071598. [Google Scholar]
  13. Kulpa, K.S.; Czekała, Z. Masking effect and its removal in PCL radar. IEE Proc.-Radar Sonar Navig. 2005, 152, 174–178. [Google Scholar] [CrossRef]
  14. Hansen, V.; Sawyers, J. Detectability loss due to “greatest of” selection in a cell-averaging CFAR. IEEE Trans. Aerosp. Electron. Syst. 1980, 16, 115–118. [Google Scholar] [CrossRef]
  15. Trunk, G. Range resolution of targets using automatic detectors. IEEE Trans. Aerosp. Electron. Syst. 1978, 14, 750–755. [Google Scholar] [CrossRef]
  16. Smith, M.; Varshney, P. Intelligent CFAR processor based on data variability. IEEE Trans. Aerosp. Electron. Syst. 2000, 36, 837–847. [Google Scholar] [CrossRef]
  17. Blake, S. OS-CFAR theory for multiple targets and nonuniform clutter. IEEE Trans. Aerosp. Electron. Syst. 1988, 24, 785–790. [Google Scholar] [CrossRef]
  18. Gandhi, P.; Kassam, S. Analysis of CFAR processors in homogeneous background. IEEE Trans. Aerosp. Electron. Syst. 1988, 24, 427–445. [Google Scholar] [CrossRef]
  19. Wang, P.; Wang, L.; Leung, H.; Zhang, G. Super-resolution mapping based on spatial–spectral correlation for spectral imagery. IEEE Trans. Geosci. Remote Sens. 2020, 59, 2256–2268. [Google Scholar] [CrossRef]
  20. Shang, X.; Song, M.; Wang, Y.; Yu, C.; Yu, H.; Li, F.; Chang, C.I. Target-constrained interference-minimized band selection for hyperspectral target detection. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6044–6064. [Google Scholar] [CrossRef]
  21. Lu, X.; Zhang, J.; Yang, D.; Xu, L.; Jia, F. Cascaded Convolutional Neural Network-Based Hyperspectral Image Resolution Enhancement via an Auxiliary Panchromatic Image. IEEE Trans. Image Process. 2021, 30, 6815–6828. [Google Scholar] [CrossRef]
  22. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  23. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 11–18 December 2015; pp. 1440–1448. [Google Scholar]
  24. Ren, S.Q.; He, K.M.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS), Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
  25. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  26. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar]
  27. Redmon, J.; Farhadi, A.J. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  28. Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M.J. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  29. Zhu, X.; Lyu, S.; Wang, X. TPH-YOLOv5: Improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 2778–2788. [Google Scholar]
  30. Li, C.; Li, L.; Jiang, H. YOLOv6: A single-stage object detection framework for industrial applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
  31. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
  32. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.F. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  33. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; pp. 740–755. [Google Scholar]
  34. Hsu, H.W.; Lin, Y.C.; Lee, M.C.; Lin, C.H.; Lee, T.S. Deep learning-based range-doppler map reconstruction in automotive radar systems. In Proceedings of the IEEE 93rd Vehicular Technology Conference (VTC2021-Spring), Helsinki, Finland, 25–28 April 2021; pp. 1–7. [Google Scholar]
  35. Su, N.; Chen, X.; Guan, J.; Huang, Y. Maritime target detection based on radar graph data and graph convolutional network. IEEE Geosci. Remote Sens. Lett. 2021, 19, 4019705. [Google Scholar] [CrossRef]
  36. Wang, C.; Tian, J.; Cao, J.; Wang, X. Deep learning-based UAV detection in pulse-Doppler radar. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5105612. [Google Scholar] [CrossRef]
  37. Jing, H.; Cheng, Y.; Wu, H.; Wang, H. Radar target detection with multi-task learning in heterogeneous environment. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4021405. [Google Scholar] [CrossRef]
  38. Wen, L.; Ding, J.; Xu, Z. Multiframe detection of sea-surface small target using deep convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5107116. [Google Scholar] [CrossRef]
  39. Zheng, Q.; Yang, L.; Xie, Y.; Li, J.; Hu, T.; Zhu, J.; Song, C.; Xu, Z. A target detection scheme with decreased complexity and enhanced performance for range-Doppler FMCW radar. IEEE Trans. Instrum. Meas. 2020, 70, 8001113. [Google Scholar] [CrossRef]
  40. Wang, Y.; Jiang, Z.; Li, Y.; Hwang, J.N.; Xing, G.; Liu, H. RODNet: A real-time radar object detection network cross-supervised by camera-radar fused object 3D localization. IEEE J. Sel. Top. Signal Process. 2021, 15, 954–967. [Google Scholar] [CrossRef]
  41. Ng, W.; Wang, G.; Lin, Z.; Dutta, B.J. Range-Doppler detection in automotive radar with deep learning. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  42. Pérez, R.; Schubert, F.; Rasshofer, R.; Biebl, E. Deep learning radar object detection and classification for urban automotive scenarios. In Proceedings of the 2019 Kleinheubach Conference, Kleinheubach, Germany, 4 November 2019; pp. 1–4. [Google Scholar]
  43. Franceschi, R.; Rachkov, D. Deep learning-based radar detector for complex automotive scenarios. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 5–9 June 2022; pp. 303–308. [Google Scholar]
  44. Decourt, C.; VanRullen, R.; Salle, D.; Oberlin, T. A recurrent CNN for online object detection on raw radar frames. arXiv 2022, arXiv:2212.11172. [Google Scholar]
  45. Lin, Z.; Li, M.; Zheng, Z.; Chen, Y.; Yuan, C. Self-attention convlstm for spatiotemporal prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 11531–11538. [Google Scholar]
  46. Song, H.; Wang, W.; Zhao, S.; Shen, J.; Lam, K.M. Pyramid dilated deeper convlstm for video salient object detection. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 715–731. [Google Scholar]
  47. Islam, Z.; Rukonuzzaman, M.; Ahmed, R.; Kabir, M.H.; Farazi, M. Efficient two-stream network for violence detection using separable convolutional lstm. In Proceedings of the 2021 International Joint Conference on Neural Networks, Shenzhen, China, 18–22 July 2021; pp. 1–8. [Google Scholar]
  48. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE/CVF CVPR 2020, Seattle, WA, USA, 14–18 June 2020; pp. 11534–11542. [Google Scholar]
Figure 1. Schematic diagram of radar signal processing and RD spectrum generation (taking the CARRADA dataset as an example).
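Figure 1 summarizes the classical FMCW processing chain that turns raw ADC samples into a range–Doppler (RD) spectrum: a range FFT along fast time followed by a Doppler FFT along slow time. As a reference, the sketch below reproduces this standard two-FFT pipeline in NumPy; the single-antenna layout, window choice, and 256 × 64 frame size are illustrative assumptions rather than the exact CARRADA processing parameters.

```python
import numpy as np

def range_doppler_map(adc_cube: np.ndarray) -> np.ndarray:
    """Turn one frame of raw FMCW ADC data into a range-Doppler (RD) spectrum.

    adc_cube: complex array of shape (num_samples, num_chirps), i.e. fast time x slow time
    for a single receive antenna (an assumed layout, not the exact CARRADA format).
    """
    # Range FFT along fast time (one FFT per chirp), with a window to reduce leakage.
    win_r = np.hanning(adc_cube.shape[0])[:, None]
    range_fft = np.fft.fft(adc_cube * win_r, axis=0)

    # Doppler FFT along slow time (across chirps), shifted so zero velocity sits in the middle.
    win_d = np.hanning(adc_cube.shape[1])[None, :]
    rd = np.fft.fftshift(np.fft.fft(range_fft * win_d, axis=1), axes=1)

    # Log-magnitude spectrum, the kind of map the detector consumes.
    return 20.0 * np.log10(np.abs(rd) + 1e-12)

# Example: a synthetic 256-sample x 64-chirp frame yields a 256 x 64 RD map.
frame = np.random.randn(256, 64) + 1j * np.random.randn(256, 64)
print(range_doppler_map(frame).shape)  # (256, 64)
```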
Figure 2. Object detection model architecture for range–Doppler maps.
Figure 3. The original YOLOv8 model.
Figure 4. Schematic diagram of ConvLSTM structure.
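Figure 4 shows the ConvLSTM cell that lets the detector exploit the temporal correlation between consecutive RD frames: the LSTM gates are computed with convolutions, so the recurrent state keeps its spatial layout. The PyTorch cell below is a minimal sketch of this idea; the channel counts, kernel size, and the exact place where the cell is inserted into the network are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: LSTM gating with convolutions over spatial feature maps."""

    def __init__(self, in_channels: int, hidden_channels: int, kernel_size: int = 3):
        super().__init__()
        # A single convolution produces all four gates (input, forget, cell, output) at once.
        self.gates = nn.Conv2d(in_channels + hidden_channels, 4 * hidden_channels,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c_next = f * c + i * torch.tanh(g)   # update the cell memory
        h_next = o * torch.tanh(c_next)      # emit the new hidden feature map
        return h_next, c_next

# Example: run a short sequence of feature maps from consecutive RD frames through the cell.
cell = ConvLSTMCell(in_channels=64, hidden_channels=64)
h = torch.zeros(1, 64, 40, 40)
c = torch.zeros(1, 64, 40, 40)
for x in torch.randn(5, 1, 64, 40, 40):  # 5 time steps
    h, c = cell(x, (h, c))
print(h.shape)  # torch.Size([1, 64, 40, 40])
```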
Figure 5. ECA attention mechanism structure diagram.
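Figure 5 illustrates the ECA module: channel weights are derived from globally pooled features with a lightweight 1D convolution whose kernel size adapts to the channel count, avoiding the dimensionality reduction of heavier attention blocks. A minimal PyTorch version following the published ECA-Net formulation is sketched below; the γ and b defaults and the insertion point in the detector are assumptions.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: 1D convolution over globally pooled channel descriptors."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size adapted to the channel count and forced to be odd, as in ECA-Net.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        # x: (N, C, H, W) feature map from the backbone or neck.
        y = self.pool(x)                                 # (N, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))   # 1D conv across the channel axis
        y = torch.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y                                     # re-weight the channels

feat = torch.randn(1, 256, 20, 20)
print(ECA(256)(feat).shape)  # torch.Size([1, 256, 20, 20])
```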
Figure 6. RD spectrum representation of the RADDet dataset, with bounding boxes drawn around the objects and enlarged views of the boxed regions.
Figure 7. Comparison of the detection results of different mainstream methods on the RADDet dataset. (a) Camera image corresponding to the RD spectrum of the traffic scene; (b) detection results of Faster RCNN; (c) detection results of YOLOv5; (d) detection results of YOLOv7; (e) detection results of YOLOv7-tiny; (f) detection results of YOLOv8; (g) detection results of our proposed model.
Figure 8. Comparison of the detection results of different mainstream methods on the CARRADA dataset. (a) Camera image corresponding to the RD spectrum of the traffic scene; (b) detection results of Faster RCNN; (c) detection results of YOLOv5; (d) detection results of YOLOv7; (e) detection results of YOLOv7-tiny; (f) detection results of YOLOv8; (g) detection results of our proposed model.
Figure 9. Performance comparison of the models. (a) Performance of each model when the IOU is 0.3, with the RADDet and CARRADA datasets divided in ratios of 9:1 and 7:3, respectively. (b) Performance of each model when the IOU is 0.5 under the same dataset divisions.
Figure 10. Statistical distribution of target speed in the datasets. (a) Statistical distribution of target speed in the RADDet dataset; (b) statistical distribution of target speed in the CARRADA dataset.
Figure 11. Performance comparison of the models. (a) Performance of the compared models when the IOU is 0.3, with the RADDet and CARRADA datasets divided in ratios of 9:1 and 7:3, respectively. (b) Performance of the compared models when the IOU is 0.5 under the same dataset divisions.
Figure 12. Prediction results of different models on the RADDet dataset. (a) Camera image corresponding to the RD spectrum of the traffic scene; (b) predictions of the YOLOv8 model; (c) predictions of the ConvLSTM variant; (d) predictions of the ECA variant; (e) predictions of our proposed model.
Figure 13. Prediction results of different models on the CARRADA dataset. (a) Camera image corresponding to the RD spectrum of the traffic scene; (b) predictions of the YOLOv8 model; (c) predictions of the ConvLSTM variant; (d) predictions of the ECA variant; (e) predictions of our proposed model.
Figure 14. Prediction results of different models on the RADDet dataset. (a) Camera image corresponding to the RD spectrum of the traffic scene; (b) predictions of the YOLOv8 model; (c) predictions of the ConvLSTM variant; (d) predictions of the ECA variant; (e) predictions of our proposed model.
Figure 15. Prediction results of different models on the CARRADA dataset. (a) Camera image corresponding to the RD spectrum of the traffic scene; (b) predictions of the YOLOv8 model; (c) predictions of the ConvLSTM variant; (d) predictions of the ECA variant; (e) predictions of our proposed model.
Table 1. Quantity of each category in the two datasets and the division of the training, validation, and test sets.
Dataset | Categories (quantity) | Train/Validation/Test (9:1) | Train/Validation/Test (7:3)
RADDet | person (4707), bicycle (654), car (12,179), truck (2764), motorcycle (56), bus (154) | 8227/915/1016 | 6399/711/3048
CARRADA | pedestrian (2908), cyclist (1595), car (3375) | 5826/647/720 | 4531/504/2158
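The split counts in Table 1 are consistent with holding out the test set at the stated ratio (9:1 or 7:3 of the frames) and then dividing the remainder 9:1 into training and validation, e.g., 8227 + 915 + 1016 = 10,158 RADDet frames. The sketch below reproduces such a split under that assumption; it is not the authors' split script, and the seed and rounding are illustrative.

```python
import random

def split_indices(num_frames: int, test_ratio: float, val_ratio: float = 0.1, seed: int = 0):
    """Split frame indices into train/validation/test.

    Assumption inferred from Table 1: the 9:1 / 7:3 ratio is (train + validation) : test,
    and the remaining frames are then split 9:1 into train and validation.
    """
    idx = list(range(num_frames))
    random.Random(seed).shuffle(idx)
    n_test = round(num_frames * test_ratio)
    test, rest = idx[:n_test], idx[n_test:]
    n_val = round(len(rest) * val_ratio)
    val, train = rest[:n_val], rest[n_val:]
    return train, val, test

# Example with the RADDet frame count implied by Table 1 (8227 + 915 + 1016 = 10,158):
train, val, test = split_indices(10158, test_ratio=0.1)
print(len(train), len(val), len(test))  # roughly 8228 / 914 / 1016
```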
Table 2. Experimental environment.
Environment | Version or Model Number
CPU | i7-1165G7
GPU | RTX 2080Ti
OS | Windows 10
Python | 3.6.13
PyTorch | 1.10.2
Torchvision | 0.11.3
OpenCV-Python | 4.1.2.30
Table 3. Experimental parameters.
Input Size | Optimizer | Momentum | Batch Size | Epochs | Learning Rate | Training and Test Set Ratio
640 × 640 | SGD | 0.937 | 4 | 300 | 1 × 10⁻³ | 9:1
640 × 640 | SGD | 0.937 | 4 | 300 | 1 × 10⁻³ | 7:3
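The settings in Table 3 map directly onto a standard PyTorch training setup, as in the sketch below; the stand-in module and constant names are placeholders for the actual detector and training loop rather than the authors' code.

```python
import torch
import torch.nn as nn

# Placeholder module standing in for the modified YOLOv8 detector described in the paper.
model = nn.Conv2d(3, 16, kernel_size=3)

# Optimizer configured with the Table 3 hyperparameters: SGD, momentum 0.937, initial lr 1e-3.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.937)

# Remaining Table 3 settings, applied in the surrounding (omitted) training loop.
INPUT_SIZE = (640, 640)
BATCH_SIZE = 4
EPOCHS = 300
```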
Table 4. Results of different models on the RADDet and CARRADA datasets (the dataset division ratio is 9:1).
Dataset | Model | mAP (IOU 0.3) | P (IOU 0.3) | R (IOU 0.3) | mAP (IOU 0.5) | P (IOU 0.5) | R (IOU 0.5) | Params (M) | GFLOPs (G) | FPS | Training Time (s) | Testing Time (s)
RADDet | Faster RCNN | 56.47 | 52.17 | 56.92 | 49.55 | 47.78 | 51.77 | 41.3 | 60.1 | 26.9 | 607 | 68
RADDet | YOLOv5 | 51.98 | 75.57 | 33.5 | 41.71 | 66.21 | 30.2 | 7.1 | 16.5 | 41.6 | 223 | 15
RADDet | YOLOv7 | 69.76 | 85.69 | 45.68 | 58.1 | 75.67 | 46.62 | 37.2 | 105.2 | 34.4 | 457 | 32
RADDet | YOLOv7-tiny | 64.07 | 83.8 | 42.46 | 54.62 | 80.57 | 39.36 | 6.0 | 13.2 | 66.2 | 236 | 12
RADDet | YOLOv8 | 67.94 | 94.26 | 29.76 | 57.13 | 91.56 | 29.13 | 25.9 | 79.1 | 41.1 | 294 | 31
RADDet | RADDet | 38.42 | 78.2 | 29.77 | 22.87 | 60.41 | 20.55 | 7.8 | 5.0 | 13.5 | 621 | 54
RADDet | DAROD | 65.56 | 82.31 | 47.78 | 46.57 | 68.23 | 38.74 | 3.4 | 6.8 | 39.5 | 286 | 20
RADDet | Ours | 74.51 | 89.94 | 45.95 | 64.26 | 86.63 | 44.46 | 25.9 | 82 | 29.8 | 291 | 20
CARRADA | Faster RCNN | 65.08 | 51.7 | 72.97 | 61.56 | 47.86 | 67.21 | 41.3 | 60.1 | 26.9 | 596 | 57
CARRADA | YOLOv5 | 49.08 | 76.69 | 31.11 | 40.16 | 65.68 | 29.76 | 7.1 | 16.5 | 41.6 | 199 | 14
CARRADA | YOLOv7 | 70.0 | 82.98 | 28.21 | 59.38 | 78.68 | 21.86 | 37.2 | 105.2 | 34.3 | 442 | 29
CARRADA | YOLOv7-tiny | 64.37 | 86.32 | 26.66 | 55.46 | 77.88 | 34.36 | 6.0 | 13.2 | 66.2 | 230 | 20
CARRADA | YOLOv8 | 71.04 | 88.05 | 27.34 | 59.03 | 91.25 | 26.68 | 25.9 | 79.1 | 46.2 | 280 | 23
CARRADA | RADDet | 48.59 | 61.31 | 42.56 | 18.57 | 36.73 | 25.5 | 7.8 | 5.0 | 13.5 | 615 | 47
CARRADA | DAROD | 70.68 | 76.73 | 52.52 | 55.83 | 68.34 | 46.03 | 3.4 | 6.8 | 39.5 | 272 | 19
CARRADA | Ours | 75.62 | 94.54 | 38.29 | 62.54 | 90.55 | 33.47 | 25.9 | 82 | 30.6 | 253 | 18
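The mAP, P, and R columns in Tables 4 and 5 are evaluated at IOU thresholds of 0.3 and 0.5, i.e., a detection only counts as correct if its overlap with a ground-truth box reaches the threshold. The snippet below is a minimal illustration of that check with made-up boxes; it is not the evaluation code behind the tables.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-12)

# A detection matches a ground-truth box only if its IOU reaches the table's threshold.
pred, gt = (10, 10, 30, 40), (20, 10, 40, 40)
print(iou(pred, gt) >= 0.3, iou(pred, gt) >= 0.5)  # True False (IOU ~= 0.33)
```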
Table 5. Results of different models on the RADDet and CARRADA datasets (the dataset division ratio is 7:3).
Dataset | Model | mAP (IOU 0.3) | P (IOU 0.3) | R (IOU 0.3) | mAP (IOU 0.5) | P (IOU 0.5) | R (IOU 0.5) | Params (M) | GFLOPs (G) | FPS | Training Time (s) | Testing Time (s)
RADDet | Faster RCNN | 53.99 | 50.53 | 55.67 | 49.01 | 47.92 | 50.65 | 41.3 | 60.1 | 26.9 | 636 | 73
RADDet | YOLOv5 | 52.02 | 74.86 | 33.35 | 42.27 | 63.67 | 31.51 | 7.1 | 16.5 | 41.6 | 194 | 16
RADDet | YOLOv7 | 67.19 | 82.96 | 46.83 | 57.01 | 73.7 | 48.52 | 37.2 | 105.2 | 34.4 | 426 | 35
RADDet | YOLOv7-tiny | 62.18 | 83.6 | 40.13 | 54.36 | 79.1 | 38.96 | 6.0 | 13.2 | 66.2 | 264 | 29
RADDet | YOLOv8 | 69.19 | 86.81 | 37.6 | 57.1 | 83.44 | 39.86 | 25.9 | 79.1 | 41.1 | 294 | 31
RADDet | RADDet | 37.5 | 77.42 | 28.79 | 22.1 | 58.87 | 21.63 | 7.8 | 5.0 | 13.5 | 618 | 55
RADDet | DAROD | 63.65 | 79.1 | 45.19 | 45.38 | 66.8 | 37.04 | 3.4 | 6.8 | 39.5 | 281 | 23
RADDet | Ours | 73.6 | 89.17 | 44.21 | 63.68 | 83.09 | 46.52 | 25.9 | 82 | 29.8 | 283 | 24
CARRADA | Faster RCNN | 66.21 | 46.98 | 69.15 | 61.44 | 45.38 | 63.86 | 41.3 | 60.1 | 26.9 | 587 | 62
CARRADA | YOLOv5 | 50.24 | 75.35 | 34.07 | 41.36 | 65.99 | 28.81 | 7.1 | 16.5 | 41.6 | 179 | 15
CARRADA | YOLOv7 | 67.8 | 81.61 | 29.04 | 56.75 | 77.92 | 20.11 | 37.2 | 105.2 | 34.3 | 399 | 32
CARRADA | YOLOv7-tiny | 62.89 | 81.57 | 24.16 | 54.71 | 74.61 | 30.1 | 6.0 | 13.2 | 66.2 | 190 | 18
CARRADA | YOLOv8 | 66.7 | 75.8 | 31.37 | 55.71 | 86.08 | 27.53 | 25.9 | 79.1 | 46.2 | 282 | 25
CARRADA | RADDet | 46.72 | 60.27 | 40.3 | 18.46 | 35.19 | 24.44 | 7.8 | 5.0 | 13.5 | 609 | 45
CARRADA | DAROD | 66.56 | 74.31 | 50.68 | 51.8 | 65.34 | 46.62 | 3.4 | 6.8 | 39.5 | 257 | 19
CARRADA | Ours | 71.81 | 92.9 | 37.54 | 60.37 | 88.19 | 32.17 | 25.9 | 82 | 30.6 | 260 | 22
Table 6. Improved experiments based on YOLOv8 (the dataset division ratio is 9:1).
Dataset | Model | mAP (IOU 0.3) | P (IOU 0.3) | R (IOU 0.3) | mAP (IOU 0.5) | P (IOU 0.5) | R (IOU 0.5) | Params (M) | GFLOPs (G) | FPS | Training Time (s) | Testing Time (s)
RADDet | YOLOv8 | 67.94 | 94.26 | 29.76 | 57.13 | 91.56 | 29.13 | 25.86 | 79.08 | 41.12 | 294 | 31
RADDet | ConvLSTM | 70.32 | 92.32 | 44.5 | 60.59 | 93.86 | 36.36 | 25.92 | 82.01 | 29.99 | 287 | 18
RADDet | ECA | 70.33 | 95.67 | 41.12 | 60.6 | 92.39 | 39.99 | 25.86 | 79.09 | 39.56 | 300 | 19
RADDet | Ours | 74.51 | 89.94 | 45.95 | 64.26 | 86.63 | 44.46 | 25.92 | 82.01 | 29.81 | 291 | 20
CARRADA | YOLOv8 | 71.04 | 88.05 | 27.34 | 59.03 | 91.25 | 26.68 | 25.86 | 79.08 | 46.17 | 280 | 23
CARRADA | ConvLSTM | 73.93 | 93.35 | 32.57 | 60.06 | 88 | 24.53 | 25.92 | 82.01 | 29.96 | 254 | 18
CARRADA | ECA | 74.05 | 93.08 | 32.29 | 60.71 | 89.75 | 31.04 | 25.86 | 79.09 | 41.68 | 260 | 18
CARRADA | Ours | 75.62 | 94.54 | 38.29 | 62.54 | 90.55 | 33.47 | 25.92 | 82.01 | 30.62 | 253 | 18
Table 7. Improved experiments based on YOLOv8 (the dataset division ratio is 7:3).
Dataset | Model | mAP (IOU 0.3) | P (IOU 0.3) | R (IOU 0.3) | mAP (IOU 0.5) | P (IOU 0.5) | R (IOU 0.5) | Params (M) | GFLOPs (G) | FPS | Training Time (s) | Testing Time (s)
RADDet | YOLOv8 | 69.19 | 86.81 | 37.6 | 57.1 | 83.44 | 39.86 | 25.86 | 79.08 | 41.12 | 294 | 31
RADDet | ConvLSTM | 69.57 | 90.8 | 43.03 | 56.29 | 83.65 | 40.24 | 25.92 | 82.01 | 29.99 | 285 | 26
RADDet | ECA | 68.34 | 92.08 | 46.59 | 55.06 | 86.06 | 43.58 | 25.86 | 79.09 | 39.56 | 287 | 29
RADDet | Ours | 73.6 | 89.17 | 44.21 | 63.68 | 83.09 | 46.52 | 25.92 | 82.01 | 29.81 | 283 | 24
CARRADA | YOLOv8 | 66.7 | 75.8 | 31.37 | 55.71 | 86.08 | 27.53 | 25.86 | 79.08 | 46.17 | 282 | 25
CARRADA | ConvLSTM | 69.79 | 89.82 | 30.91 | 56.13 | 86.65 | 23.19 | 25.92 | 82.01 | 29.96 | 247 | 24
CARRADA | ECA | 71.16 | 90.66 | 33.29 | 58.62 | 91.91 | 30.8 | 25.86 | 79.09 | 41.68 | 256 | 20
CARRADA | Ours | 71.81 | 92.9 | 37.54 | 60.37 | 88.19 | 32.17 | 25.92 | 82.01 | 30.62 | 260 | 22