Review

A Review: Radar Remote-Based Gait Identification Methods and Techniques

1 Instituto de Telecomunicações, University of Aveiro, 3810-193 Aveiro, Portugal
2 Department of Electronics, Telecommunications and Informatics, University of Aveiro, 3810-193 Aveiro, Portugal
3 Higher School of Technology and Management of Águeda, University of Aveiro, 3750-127 Aveiro, Portugal
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(7), 1282; https://doi.org/10.3390/rs17071282
Submission received: 28 February 2025 / Revised: 26 March 2025 / Accepted: 1 April 2025 / Published: 3 April 2025
(This article belongs to the Section Engineering Remote Sensing)

Abstract:
Human identification using gait as a biometric has gained significant attention in recent years, with notable advances in the medical and security fields. This article reviews recent developments in remote radar-based gait identification, focusing on the methods used, the classifiers employed, and trends and gaps in the literature. Recent trends highlight the increasing use of Artificial Intelligence (AI) to enhance feature extraction and classification, while multi-subject detection remains a key gap. We provide a comprehensive review of the techniques used to implement such systems over the past seven years, including a summary of the scientific publications reviewed. Several key factors are compared to determine the most suitable radar for remote gait-based identification, including accuracy, operating frequency, bandwidth, dataset, range, detection, feature extraction, size and number of features extracted, multiple-subject detection, radar modules used, the AI techniques applied and their properties, and the testing environment. Based on this study, Frequency-Modulated Continuous-Wave (FMCW) radars were found to be more accurate than Continuous-Wave (CW) and Ultra-Wideband (UWB) radars in this field. Although FMCW is the radar type best matched to real-world scenarios, it still has limitations in multi-subject identification and open-set scenarios. In addition, the study indicates that simpler AI techniques, such as Convolutional Neural Networks (CNNs), are effective at improving results.

1. Introduction

Throughout history, biometrics has been an essential tool for identifying individuals, beginning with simple methods such as signatures and distinctive stamps and evolving into more complex ones [1]. All of these techniques, whether developed years ago or today, share one objective: to authenticate each individual and prevent forgery, thus ensuring security [2]. Modern techniques are divided into two categories: behavioral, which evaluate behaviors and mannerisms, and physical, which evaluate a person's physical characteristics [3]. Because they require direct measurement of part of the human body, such as fingerprints [4], palm prints [5], and iris patterns [6], physical biometrics are more intrusive than behavioral biometrics [7]. Behavioral biometrics can be less intrusive and independent of human interaction, as evidenced by voice recognition [8] and gait analysis [9].
Gait-based biometrics, which characterize the way an individual walks, are important in the security and medical fields because they are non-intrusive and can be monitored remotely [10]. However, this biometric faces challenges: gait changes caused by injury, fatigue, or even aging can affect identification accuracy [11]. To overcome these challenges, various approaches have been developed, including sensors such as Laser Imaging Detection and Ranging (LiDAR) sensors [12], standard cameras [13], thermal cameras [14], and acoustic sensors [15], as well as radars, such as Wireless Fidelity (Wi-Fi) radars [16], Long Range (LoRa) radars [17], Continuous-Wave (CW) radars [18], Ultra-Wideband (UWB) radars [19], and Frequency-Modulated Continuous-Wave (FMCW) radars [20].
Among these methods, three radar techniques stand out: CW, UWB, and FMCW radar. They can operate under adverse conditions such as low visibility [21] and respect individual privacy by not capturing photographs [22]. Because they can capture gait information, down to the smallest movements, through the Doppler and micro-Doppler effects [23], these technologies are well suited for remote identification [24].
As with any technology, each has strengths and limitations. Owing to their simplicity, CW radars can measure only the target's velocity, not its range [25,26]. UWB radar can measure both the velocity and the range of a target [27], and one of its best characteristics is high resolution, a consequence of its large bandwidth [28]. However, its signal processing is more complex to implement due to the large amount of data to be processed [29]. FMCW radar, a variation of CW radar [30], modulates the signal frequency continuously to enable both range and velocity measurements [31]. Despite these advantages, the ideal technology for gait-based identification is still debated and depends on several factors, including accuracy, achievable range, and the ability to identify multiple subjects.
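The range-velocity capability of FMCW radar follows from the beat signal produced by mixing the transmitted and received chirps; the standard textbook derivation (not tied to any specific study reviewed here) is:

```latex
% FMCW chirp with bandwidth B and duration T_c has slope S = B / T_c.
% A target at range R delays the echo by \tau = 2R/c, producing a beat frequency
%   f_b = S\,\tau = \frac{2RB}{c\,T_c} \quad\Rightarrow\quad R = \frac{c\,T_c\,f_b}{2B}.
% A target moving with radial velocity v shifts the echo phase chirp-to-chirp by
%   \Delta\phi = \frac{4\pi v T_c}{\lambda} \quad\Rightarrow\quad v = \frac{\lambda\,\Delta\phi}{4\pi T_c}.
% An FFT across samples within one chirp therefore yields range, and a second
% FFT across consecutive chirps yields velocity (the range-Doppler map used in
% several of the reviewed studies).
```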
This work presents an analysis of the literature comparing CW, UWB, and FMCW radars for gait-based identification. The comparison covers accuracy, operating frequency, bandwidth, dataset, range, detection, feature extraction, size and number of features extracted, multiple-subject detection, radar modules used, and the Artificial Intelligence (AI) techniques applied and their properties, for each radar type. The findings suggest that FMCW radar is the most suitable choice for gait-based identification. Beyond the increasing use of gait as a biometric indicator in various applications, this review is needed because there is no consensus on which radar technology offers the best balance between accuracy, implementation complexity, and other factors, and because no recent review comprehensively evaluates the use of radar for gait-based identification. Furthermore, this study examines the State of the Art (SOTA) of FMCW technology, including the AI techniques that achieve the highest accuracy, in addition to the factors discussed above.
This paper is organized into four sections. Section 1 introduces gait-based biometric identification, highlighting radar technologies and describing the scope of the study. Section 2 presents a literature review for each of the CW, UWB, and FMCW radars. Section 3 discusses the findings and identifies the most appropriate radar type for gait-based identification. Finally, Section 4 summarizes the findings and suggests future research directions.

2. Gait-Based Identification

This section reviews the studies on CW, UWB, and FMCW radar that have been conducted to acquire and process gait characteristics for biometric identification. Each subsection presents key information from the reviewed research, including the methodology and results. Moreover, a table at the end of each subsection compares the studies for each radar type: Table 1 for CW radar, Table 2 for UWB radar, and Table 3 for FMCW radar.

2.1. Identification Using Continuous-Wave Radar

The CW radar was a milestone in the evolution of radar systems. With this type of radar, micro-Doppler signatures are captured, and Deep Learning (DL) techniques are employed to identify the person. Among the studies conducted with this type of radar, Klarenbeek et al. [32] combined a Recurrent Neural Network (RNN) model with Long Short-Term Memory (LSTM) for the classification of human gait radar signatures. Using a CW radar in the X-band, this method achieved an accuracy of 89.1%, outperforming traditional classifiers such as Convolutional Neural Network (CNN), Principal Component Analysis (PCA), and Support Vector Machine (SVM) methods. A key advantage of the RNN-LSTM model is its ability to handle different observation times, with performance improving as the observation time increases.
Cao et al. [33] introduced a system based on a Deep Convolutional Neural Network (DCNN). Using the Short-Time Fourier Transform (STFT) to extract features, the method achieved an average accuracy of 97.1% in identifying four individuals and 68.9% in identifying a group of twenty individuals, demonstrating robust anti-noise capability. Limitations of the system include variability in individual motion patterns and reduced effectiveness in Non-Line-of-Sight (NLOS) conditions.
Meanwhile, Abdulatif et al. [34] investigated the effect of human body characteristics on the measured data. By combining Convolutional Autoencoders (CAEs) with a 50-layer Residual Network (ResNet), their method achieved 98% accuracy at a low Signal-to-Noise Ratio (SNR), demonstrating its resistance to noise. According to [18], further refinement of gait recognition was achieved by integrating LSTM, CNN, and RNN architectures, which provided long-term stability and recognition accuracies of 99% on the validation set and 90% on the test set.
Another significant advancement was the use by Papanastasiou et al. [35] of the Visual Geometry Group (VGG) network, a 16-layer DCNN, which, combined with Uniform Manifold Approximation and Projection (UMAP), a dimensionality reduction method used to validate the results, achieved an accuracy of 93.5%. Qiao et al. [36] increased identification effectiveness with a Three-layer Convolutional Principal Component Analysis Network (CPCAN-3) by separating the micro-Doppler signals of limb movements from those generated by the torso. Lastly, in [37], it was reported that STFT methods are comparable to higher-resolution methods, achieving an accuracy of 99.1% and suggesting that simplicity and robustness often outperform complexity. Based on the parameters used, the data acquired, and the results obtained, Table 1 compares the research conducted with CW radar in previous years.
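To illustrate the STFT-based feature extraction used throughout these CW studies, the following minimal sketch computes a micro-Doppler spectrogram with SciPy. The signal is simulated, not measured: a constant torso Doppler component plus a limb component whose Doppler oscillates at the gait cadence; all parameter values are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import stft

fs = 2000.0                        # sample rate [Hz] (assumed)
t = np.arange(0, 2.0, 1 / fs)      # 2 s observation window

# Simulated CW radar return: torso at a constant Doppler shift plus a limb
# component whose instantaneous Doppler swings around it at the gait cadence.
f_torso = 200.0                    # torso Doppler [Hz]
f_limb_dev = 150.0                 # limb Doppler deviation [Hz]
cadence = 1.0                      # gait cycle rate [Hz]
rng = np.random.default_rng(0)
torso = np.exp(2j * np.pi * f_torso * t)
limb = 0.5 * np.exp(2j * np.pi * (f_torso * t
        + (f_limb_dev / (2 * np.pi * cadence)) * np.sin(2 * np.pi * cadence * t)))
x = torso + limb + 0.1 * (rng.standard_normal(t.size)
                          + 1j * rng.standard_normal(t.size))

# STFT -> time-frequency micro-Doppler signature (two-sided, complex input)
f, tau, Z = stft(x, fs=fs, nperseg=256, noverlap=192, return_onesided=False)
spectrogram_db = 20 * np.log10(np.abs(Z) + 1e-12)
print(spectrogram_db.shape)        # (freq bins, time frames)
```

The resulting magnitude image is the kind of input fed to CNN/PCA/SVM classifiers in the studies above; window length trades Doppler resolution against time resolution.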

2.2. Identification Using Ultra-Wideband Radar

UWB radar technology has enabled significant advances in biometric identification, with varied approaches to signal processing, feature extraction, and classification aimed at achieving better accuracy [38].
Table 2. Comparison of previous studies for biometric identification using UWB radar.
[39] (2017): frequency 3.1–5.6 GHz; dataset 6 M/2 F, 164–178 cm; range: -; detection BI, HM, BF; features: raw data; size: 384 scenarios tested, signature size 61 frames; multiple-subject detection: NO; radar module NVA-R640; AI: SVM (also tested: RF, LR, KNN, NN); accuracy 88.15%; environment: over the entrance of a room.
[40] (2019): frequency 3.2–5.4 GHz; dataset 8 M/7 F, 154–179 cm; range 0–10 m; detection BI, HM; features: micro-Doppler signatures; size: -; multiple-subject detection: NO; radar modules P410 RCM and P410 MRM; AI: none; accuracy: -; environment: anechoic chamber and laboratory.
[41] (2019): frequency 3.993 GHz; dataset 8 M/4 F, 19–44 YO, 155–179 cm, 56–95 kg; range: wearable; detection BI, HM; features: inter-distances between sensors; size: walking time 60 s, vector size 72 elements, 12 feature functions; multiple-subject detection: NO; radar module DecaWave EVB1000; AI: subspace kNN, weighted kNN, bagged tree, ESD, SVM; accuracy 96.9% (ESD), 96% (bagged tree), 95% (subspace kNN), 93% (SVM), 88% (weighted kNN); environment: controlled.
[42] (2019): frequency 4.3 GHz; dataset 4 S, 23–25 YO, 172–192 cm, 65–90 kg; range 0–5 m; detection BI, HM; features: micro-Doppler spectrograms; size: 50 samples per motion per person, 50 training and 100 testing spectrograms per motion per person, spectrogram size 100 × 100, test time under 1 ms; multiple-subject detection: NO; radar module: custom-built; AI: MS-CNN (ADAM optimizer, learning rate 1 × 10⁻³, batch size 256, max. iterations 1000); accuracy 96.8% (walking), 84.8% (other activities); environment: room (walking area clean).
[19] (2019): frequency 4.3 GHz; dataset 10 M/5 F, 160–187 cm, 50–100 kg; range 5 m; detection BI, HM; features: micro-Doppler spectrograms; size: 3000 spectrograms per motion per person, 22,500 training and 7500 testing spectrograms, spectrogram size 100 × 100; multiple-subject detection: NO; radar module: -; AI: CNN (ADAM optimizer, learning rate 1 × 10⁻⁴, batch size 256); accuracy 95.21% (running), 94.41% (walking), 82.93% (other activities); environment: room (walking area clean).
[43] (2020): frequency 3.1–5.3 GHz; dataset 6 S; range 3.5–7 m; detection BI, HM; features: limb Doppler signals; size: measurement time 8 s; multiple-subject detection: NO; radar module PulsON® 400; AI: CNN (stochastic gradient descent, learning rate 1 × 10⁻³); accuracy 93.3%; environment: room (walking area clean).
[44] (2021): frequency 3.2–5.4 GHz; dataset 13 M/11 F, 153–179 cm; range 3 m; detection BI, HM, BF; features: backscattered energies and knee angles; size: -; multiple-subject detection: NO; radar module P410 MRM; AI: none; accuracy: -; environment: anechoic chamber and laboratory.
[45] (2022): frequency 3.45–5.15 GHz; dataset 9 S; range 0–10 m; detection BI, HM; features: time-Doppler spectrograms; size: 20,494 spectrograms, spectrogram size 28 × 28; multiple-subject detection: NO; radar module: -; AI: OR-CED (stochastic gradient descent, learning rate 1 × 10⁻³, 60 epochs, mini-batch size 32); accuracy 96.17% (closed-set), 90.36% (open-set); environment: room (walking area clean).
[46] (2022): bandwidth 1.7 GHz; dataset 4 M/5 F; range 0–10 m; detection BI, HM; features: micro-Doppler signatures; size: over 90,000 spectrograms, 720 echo segments, spectrogram size 224 × 224; multiple-subject detection: NO; radar module: custom-built; AI: G-SAC (stochastic gradient descent, learning rate 1 × 10⁻³, 60 epochs, batch size 16); accuracy 91.62%; environment: room (walking area clean).
[47] (2023): frequency 3.45–5.15 GHz; dataset 10 M/5 F, 160–187 cm, 50–100 kg; range: -; detection BI, HM; features: micro-Doppler spectrograms; size: 1 s spectrograms, 108,000 training and 36,000 validation/testing spectrograms, spectrogram size 120 × 120; multiple-subject detection: NO; radar module: -; AI: MS-CNN (ADAM optimizer, 600 epochs, learning rate 1 × 10⁻⁴, batch size 512); accuracy 97% (walking), 85% (all tested actions); environment: -.
[38] (2023): frequency 6–8.5 GHz; dataset 5 M/6 F, 23–28 YO, 160–187 cm, 51–85 kg; range 2–5 m; detection BI, HM; features: raw data; size 200 × 543; multiple-subject detection: NO; radar module Xethru X4M03; AI: MLRT (CNN-based; time-distributed CNN, 13 output nodes for identification and 2 for fall detection, cross-entropy loss); accuracy 98.7% (identification), 96.5% (fall detection); environment: room (walking area clean).
[48] (2024): frequency 3.1–4.8 GHz; dataset 4 M/5 F, 163–187 cm, 52–79 kg; range 0–10 m; detection BI, HM; features: micro-Doppler signatures; size: over 90,000 spectrograms, 720 echo segments, spectrogram size 224 × 224; multiple-subject detection: NO; radar module: custom-built; AI: DCNN (ResNet-50; batch stochastic gradient descent, 30 epochs, learning rate 1 × 10⁻², batch size 32); accuracy 61.46% (open-set); environment: room (walking area clean).
M—Male, F—Female, S—Subjects, YO—Years Old; BI—Biometric Identification, HM—Human Motion, BF—Body Figure; -: information not available.
In 2017, Ghassem Mokhtari et al. [39] developed a system using UWB radar and Novelda sinuous antennas to identify subjects based on raw data captured in the 3.1 GHz to 5.6 GHz band. In that study, several AI techniques were combined with PCA, resulting in a classification accuracy of 88.15%. Algorithms such as SVM, Random Forest (RF), Logistic Regression (LR), K-Nearest Neighbour (KNN), and Neural Network (NN) were used, demonstrating the ability to easily extract signatures from basic movements. These methods outperformed technologies based on Passive Infrared (PIR) sensors, wireless networks, and ground vibration sensors. Two years later, Soumya Prakash Rana et al. [40] developed an Impulse Radio Ultra-Wideband (IR-UWB) radar with monostatic antennas to capture micro-Doppler signatures. They applied the first three-dimensional approach based on spherical trigonometry and the STFT to identify persons based on their gait. The study analyzed various elevation and azimuth angles, concluding that as the range increases, the SNR decreases logarithmically.
Alessio Vecchio et al. [41] took a different approach by using wearable UWB sensors; this method measures the distances between sensors and verified that accuracy increases when multiple sensors are placed at specific positions on the body. In the same year, in [42], Yue Lang et al. studied individual identification through the analysis of different movement patterns, using horn antennas to capture data from four subjects. Movements such as boxing, crawling, creeping, jumping, running, and walking were tested. To avoid overfitting on small datasets, a Multi-Scale Convolutional Neural Network (MS-CNN) was selected as the classifier. The resulting accuracy was 96.8% for walking and 84.8% on average across all activities. The identification of individuals based on different movement patterns was improved by Yang Yang et al. [19], who examined micro-Doppler signatures from fifteen subjects, achieving 95.21% for running, 94.41% for walking, and 82.93% for other activities.
In 2020, Takuya Sakamoto et al. [43] achieved 93.3% accuracy in analyzing the limb Doppler signals of six individuals using a UWB radar operating at 4.2 GHz with a double-ridge horn antenna. This study demonstrated the importance of optimizing the input data size to reduce training time and avoid overfitting. Furthermore, Soumya Prakash Rana et al. [44] evaluated knee angles in normal and abnormal gait for twenty-four subjects, comparing the method's efficiency with the Kinect system.
In 2022, Yang Yang et al. [45] proposed the Open-set Classifier with Contrastive Embedding and Detection (OR-CED) for scenarios in which the training and test categories differ, known as open-set tests; when the training and test categories are the same, the test is closed-set. Using a classifier trained to distinguish between hard positive and hard negative samples, they achieved 96.17% accuracy in closed-set scenarios and 90.36% in open-set scenarios. In another study by the same researchers [46], the Guided Subspace Alignment under the Class-aware condition (G-SAC) model was implemented to measure the micro-Doppler signatures of nine subjects. After applying Neighborhood Component Analysis (NCA), the researchers created an intrinsic feature subspace from which they could extract the similarity between normal and disguised gaits. It was demonstrated that both supervised and unsupervised constraints could be used to ensure consistency in class-aware data distributions, showing the method's effectiveness in identifying and validating multiple targets.
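The open-set/closed-set distinction can be made concrete with a minimal thresholding scheme (a generic sketch, not the OR-CED method itself): the classifier's most confident known-class score is compared against a rejection threshold, and samples below it are declared unknown.

```python
import numpy as np

def open_set_predict(scores, threshold=0.7):
    """Closed-set argmax with open-set rejection.

    scores: (n_samples, n_known_classes) softmax-like scores.
    Returns the class index, or -1 ("unknown") when the best
    score falls below the rejection threshold.
    """
    scores = np.asarray(scores, dtype=float)
    best = scores.argmax(axis=1)          # closed-set decision
    conf = scores.max(axis=1)             # confidence of that decision
    best[conf < threshold] = -1           # open-set rejection
    return best

# A confident in-set sample is labelled; an ambiguous one is rejected.
preds = open_set_predict([[0.05, 0.9, 0.05],    # clearly known class 1
                          [0.4, 0.35, 0.25]])   # no confident class -> unknown
print(preds)  # [ 1 -1]
```

Methods such as OR-CED replace the raw softmax confidence with learned embedding distances, but the accept/reject structure is the same.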
As another advancement, Yuan He et al. [47] implemented an MS-CNN model based on a contrastive learning framework. Using multiscale feature extraction and an image information interaction strategy, they surpassed previous studies even at low SNR. The study reported a walking accuracy of 97% and an average of 85% across all tested activities, improving on the studies performed in 2019.
Finally, advances in the use of temporal and spatial characteristics were achieved with the Multi-task Learning Radar Transformer (MLRT) method of Xikang Jiang et al. [38]. This method integrated Kalman filters with transformer-based mechanisms, resulting in a 98.7% accuracy rate for identifying individuals. The system was also designed to detect falls, achieving an accuracy of 96.5%. In addition, Yang Yang et al. [48] discussed how their Pseudoinvariant-features Separation for Domain Generalization (PSDG) model can be generalized to address the challenge of identifying disguised gaits in open scenarios. This approach yielded a 5.51% accuracy improvement over other SOTA domain generalization techniques. Table 2 summarizes the experimental results of the previously mentioned works.

2.3. Identification Using Frequency Modulated Continuous Wave Radar

While sharing similarities with CW radar, FMCW radar overcomes some of its limitations, such as the inability to measure range, by integrating frequency modulation to enable distance estimation [31]. Xin Yang et al. [49] achieved 97.7% accuracy in identifying a single user by using a CNN-based model that relies on lower-limb motion and silhouette-based gait segmentation. However, performance decreased to 91% when identifying two users and to 74% when identifying four users. In the same year, Addabbo et al. [50] proposed a Deep Temporal Convolutional Neural Network (DTCNN) to classify micro-Doppler signatures with optimized temporal windows, achieving a best accuracy of 94.9% across all subjects. Ni et al. [51] also demonstrated 96.73% accuracy with ResNet-50 in a controlled environment.
Baori Zhou et al. [52] investigated the performance of CNNs on range-Doppler heat maps by testing architectures such as AlexNet, VGGNet, GoogLeNet, and ResNet. ResNet achieved the best accuracy, demonstrating a correlation between network depth and accuracy. Muhammad et al. [53] presented "gait cubes", combining micro-Doppler and micro-Range signatures to create a CNN-based identification system achieving 96.1% accuracy while requiring minimal training data. In the following year, 2021, Pegoraro et al. [54] used DCNNs with Inception Blocks (IBs) to investigate identification in complex environments such as corridors and real-life laboratory settings. They tested multi-user scenarios, obtaining 98.27% for four simultaneous users. Validating a Temporal Convolutional Network (TCN) on a public dataset, Ref. [55] achieved an accuracy of 98.4% on a small dataset; however, the accuracy decreased as the dataset size increased.
The importance of novel loss functions and feature embeddings was highlighted in [56,57]: the Deep Discriminative Representation Network (DDRN) with Cosine Margin (CM) loss and the MS-CNN achieved accuracy rates of 95.94% and 88.57%, respectively. Xiang et al. [58] applied a Task Division and Multi-Task Learning (TD-MTL) module to pedestrian identification. Using a 25 min test set, they evaluated the model separately for pedestrian identification and validation, achieving 87.63% accuracy for identifying five pedestrians and 80.78% on the validation set. MGait, developed in [59], overcame trajectory constraints using a DCNN-based open-set classification approach for multi-user identification and intruder detection; it achieved 98.27% and 89.73% accuracy for two and five users, respectively.
Biometric fusion plays an important role in improving accuracy. Alkasimi et al. [60] combined vital signs and gait data, increasing accuracy over the use of a single biometric. Shi et al. [20] combined radar time-Doppler spectrograms with camera-derived Gait Energy Images (GEIs) to demonstrate the advantages of multimodal approaches for robust identification. By integrating radar point clouds with micro-Doppler signatures, Ma and Liu [61] were able to improve system performance.
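The simplest form of the fusion idea above is score-level fusion, illustrated by the following sketch; the weight and the per-class scores are invented for the example and do not come from any of the cited studies, which use learned fusion networks.

```python
import numpy as np

def fuse_scores(gait_probs, vital_probs, w_gait=0.7):
    """Weighted-sum score-level fusion of two per-class probability vectors."""
    gait_probs = np.asarray(gait_probs, dtype=float)
    vital_probs = np.asarray(vital_probs, dtype=float)
    fused = w_gait * gait_probs + (1.0 - w_gait) * vital_probs
    return fused / fused.sum()    # renormalise to a probability vector

# Gait alone is ambiguous between subjects 0 and 1; the second
# biometric channel breaks the tie in favour of subject 1.
fused = fuse_scores([0.45, 0.45, 0.10], [0.10, 0.80, 0.10])
print(fused.argmax())  # 1
```

The fusion weight is normally tuned on a validation set so that the more reliable modality dominates.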
Table 3. Comparison of previous studies for biometric identification using FMCW radar.
[49] (2020): frequency 77–81 GHz; dataset 10 S, 21–34 YO, 164–182 cm, 45–74 kg; range 0–8 m; detection BI, HM, BF; features: lower-limb motion map; size: segments 100 × 100; multiple-subject detection: YES; radar module Texas Instruments AWR1642BOOST; AI: CNN (ReLU activation); accuracy 97.7% (single: 10), 91% (multiple: 2), 84% (multiple: 3), 74% (multiple: 4); with 5 subjects, training ≠ test: 78% (lobby/corridor), 74% (corridor/lobby); training = test: 97.7% (lobby), 95% (corridor); environment: lobby and corridor.
[50] (2020): frequency 77 GHz; dataset 5 M, 23–32 YO, 178–185 cm, 60–99 kg; range: -; detection BI, HM; features: micro-Doppler signatures; size: total time 150 min, 95,650 training, 22,535 testing, and 22,535 validation frames; multiple-subject detection: NO; radar module: -; AI: DTCNN (stochastic gradient descent, Mish activation, learning rate 1 × 10⁻¹, batch size 64, weight decay 1 × 10⁻⁶, cross-entropy loss); accuracy: top-1 94.9%; environment: 2 rooms.
[51] (2020): frequency 76–78 GHz; dataset 13 M/7 F, 23–45 YO, 155–182 cm, 48–83 kg; range 3–10.5 m; detection BI, HM; features: micro-Doppler signatures; size: total time 15 min, 36,000 training and 12,000 testing frames; multiple-subject detection: NO; radar module Texas Instruments IWR1443 EVM; AI: DCNN (ResNet-50; ADAM optimizer, learning rate 1 × 10⁻⁴ to 2 × 10⁻³, batch size 200); accuracy 96.73%; environment: classroom (walking area clean).
[52] (2021): frequency 77–81 GHz; dataset 5 S; range: -; detection BI, HM; features: range-Doppler heat map; size: 3000 frames (1500 per scenario), 12,000 training and 3000 testing heat maps; multiple-subject detection: NO; radar module Texas Instruments AWR1642BOOST; AI: CNN (ADAM optimizer; learning rates 8 × 10⁻⁵ (AlexNet), 1 × 10⁻⁴ (VGGNet), 3 × 10⁻⁴ (GoogLeNet), 1 × 10⁻⁴ (ResNet); 300 epochs; batch size 62; cross-entropy loss); accuracy 96.9% (AlexNet), 97.6% (VGGNet), 97.7% (GoogLeNet), 97.9% (ResNet); environment: indoor and outdoor (walking area clean).
[53] (2021): frequency 75.24–78.76 GHz; dataset 6 M/4 F, 23–59 YO, 160–174 cm, 50–77 kg; range 2–10.91 m; detection BI, HM; features: micro-Doppler and micro-Range signatures; size: 52,000 gait cubes; multiple-subject detection: NO; radar module Texas Instruments IWR1443BOOST; AI: CNN (stochastic gradient descent, learning rate 1 × 10⁻², batch size 80, cross-entropy loss); accuracy 96.1%; environment: 6 areas (3 open spaces and 3 corridors).
[54] (2021): frequency 75–77 GHz; dataset 5 S; range 0–18 m; detection BI, HM; features: micro-Doppler signatures; size: 200 time-bin spectrograms; multiple-subject detection: YES; radar module INRAS; AI: DCNN with IBs (stochastic gradient descent, learning rate 5 × 10⁻³); accuracy (4 S): 97.96% (multiple: 2), 95.26% (multiple: 3), 98.27% (multiple: 4); (2 S) training ≠ test: 96% (Room A/Room B); environment: Room A (corridor) and Room B (laboratory).
[55] (2021): frequency 5.8 GHz; dataset 100 S, 21–98 YO, 149–198 cm; range: -; detection BI, HM; features: micro-Doppler spectrograms; size: -; multiple-subject detection: NO; radar module: -; AI: TCN (cross-entropy loss, weight decay 1 × 10⁻⁶; for 10 S: Nadam optimizer, Swish activation, learning rate 12 × 10⁻², batch size 64); accuracy 98.4% (10 S), 85.2% (50 S), 84.9% (100 S) on a public dataset; environment: different environments.
[56] (2021): frequency 76–78 GHz; dataset 20 S; range 3–10.5 m; detection BI, HM; features: micro-Doppler signatures; size: 18,000 training and 6000 testing samples, image size 224 × 224; multiple-subject detection: NO; radar module Texas Instruments IWR1443 EVM; AI: DDRN (ResNet-18; ADAM optimizer, learning rate 1 × 10⁻⁴, CM loss); accuracy 95.94%; environment: controlled.
[57] (2022): frequency: -; dataset 5 M, 23–32 YO, 178–185 cm, 69–99 kg; range: -; detection BI, HM; features: range-Doppler map; size: total time 150 min, input sample size 45 × 205; multiple-subject detection: NO; radar module INRAS; AI: MS-CNN (ADAM optimizer, learning rate 1 × 10⁻⁴, 300 epochs, weight decay 5 × 10⁻⁴, mini-batch size 64); accuracy 88.57%; environment: -.
[58] (2022): frequency: -; dataset 5 S; range: -; detection BI, HM; features: time-Doppler spectrograms; size: spectrogram size 45 × 205; multiple-subject detection: YES; radar module: -; AI: MCL (learning rate 1 × 10⁻³, 500 epochs, cross-entropy loss); accuracy 87.63% (test set), 80.78% (validation set); environment: -.
[59] (2022): frequency 75.85–79.15 GHz; dataset 5 M/5 F, 23–33 YO, 43–76 kg; range 3–23 m; detection BI, HM; features: micro-Doppler signatures; size: image size 224 × 224; multiple-subject detection: YES; radar module: -; AI: DCNN (ResNet-18; ADAM optimizer, learning rate 1 × 10⁻⁴, large-margin Gaussian mixture loss); accuracy 98.27% (multiple: 2), 95.60% (multiple: 3), 92.47% (multiple: 4), 89.73% (multiple: 5); environment: corridor (walking area clean).
[60] (2022): frequency 75–79 GHz; dataset 18 S, 22–50 YO, avg. 171 cm, avg. 76 kg; range: -; detection BI, HM, VS; features: Doppler-frame map and heart-sound scalogram; size: total time 166 min, 28,800 samples, image size 224 × 224; multiple-subject detection: NO; radar module Texas Instruments AWR1642 EVM; AI: DCNN (GoogLeNet; learning rate 1 × 10⁻⁴, 30 epochs, mini-batch size 25, gradient decay factor 0.95, squared gradient decay factor 0.99); accuracy 98%, 58.7% (heart sounds only), 96.2% (gait only); environment: corridor (walking area clean).
[20] (2023): frequency 77 GHz; dataset 72 M/49 F, 21–25 YO, 155–187 cm, 42–100 kg; range: -; detection BI, HM, BF; features: micro-Doppler signatures and GEIs; size: 80 pairs (GEIs and time-Doppler spectrograms), GEI size 128 × 88, spectrogram size 88 × 128; multiple-subject detection: NO; radar module Texas Instruments AWR1843; AI: DCNN (ADAM optimizer, learning rate 1 × 10⁻⁴); accuracy 95.439%, 87.198% (GEIs only), 54.232% (radar only), 92.178% (carrying a bag), 87.151% (wearing a coat); environment: room (walking area clean).
[62] (2023): frequency 77 GHz; dataset 5 M/4 F, 18–35 YO, 155–185 cm, 45–110 kg; range 1–10 m; detection BI, HM; features: point clouds; size: over 36,000 frames; multiple-subject detection: YES; radar module Texas Instruments IWR1843BOOST; AI: CNN + LSTM (PointNet++; ADAM optimizer, learning rate 1 × 10⁻⁴, batch size 16); accuracy 96.75% (single: 9), 94.3% (multiple: 2), 95% (laboratory), 80% (corridor), 90% (lobby); environment: laboratory, corridor, and lobby.
[63] (2023): frequency 75.2–78.8 GHz; dataset 4 M/3 F, 156–187 cm; range 0–8.24 m; detection BI, HM; features: micro-Doppler signatures, micro-Range signatures, and range maps; size: total time 50 min, 310,357 RDMs, 7400 frames per class per subject; multiple-subject detection: NO; radar module Texas Instruments AWR1443BOOST; AI: LSTM (ADAM optimizer, ReLU activation, learning rate 1 × 10⁻², 50 epochs, batch size 512, cross-entropy loss); accuracy 93%; environment: in-home (real-life scenario).
[64] (2023): frequency 75.74–78.26 GHz; dataset 11 M/4 F, 21–41 YO, 160–183 cm, 51–75 kg; range 3–18 m; detection BI, HM; features: 4D radar point cloud videos; size: 45,000 frames, 4D RPCV input size 50 × 64 × 4; multiple-subject detection: NO; radar module: made by Texas Instruments; AI: Spatial-Temporal Network (STN; ADAM optimizer, learning rate 1 × 10⁻⁴, 250 epochs, batch size 32); accuracy 94.44% (10 S), 90.76% (15 S); environment: outdoor (walking area clean).
[65] (2023): frequency 77 GHz; dataset 9 S; range 0.8 m; detection BI, HM; features: micro-Doppler signatures; size: -; multiple-subject detection: NO; radar module Texas Instruments IWR1443BOOST; AI: CNN + TCN (ResNet; GAM optimization); accuracy (5 S): 96.44% (identification), 98.29% (gesture); environment: lobby (walking area clean).
[61] (2024): frequency 6–8.5 GHz; dataset 15 S, 160–183 cm, 51–75 kg; range 3–20 m; detection BI, HM; features: micro-Doppler signatures and radar point clouds; size: signature size 128 × 126, point cloud sequence size 50 × 4 × 48; multiple-subject detection: NO; radar module: made by Texas Instruments; AI: DCNN + TCN (PointNet; ADAM optimizer, learning rate 5 × 10⁻⁴, 500 epochs, triplet loss with center loss); accuracy 81.65%; environment: room (3–16.5 m walking area clean, 16.5–20 m with desks).
[66] (2024): frequency 77 GHz; dataset 4 M/4 F, 25–33 YO, 160–181 cm, 45–85 kg; range 1.5–15.5 m; detection BI, HM; features: micro-Doppler spectrograms; size: 40,000 frames (single), over 5500 (multiple); multiple-subject detection: YES; radar module Texas Instruments IWR1443 EVM; AI: CNN (EfficientNet; ADAM optimizer, learning rate 1 × 10⁻⁴, batch size 32, cross-entropy loss); accuracy 98.5% (single: 8), 98.25% (multiple: 2), 97.30% (multiple: 3), 95.45% (multiple: 4); environment: lobby (walking area clean) and corridor (various reflections).
[67] (2024): frequency 77 GHz; dataset 2 S; range: -; detection BI, HM; features: micro-Doppler and micro-Range signatures; size: 560 training, 560 validation, and 234 testing samples; multiple-subject detection: YES; radar module Texas Instruments AWR1642ISK-ODS; AI: CNN + TCN; accuracy 96.2% (multiple: 2), training ≠ test: 67%; environment: -.
M—Male, F—Female, S—Subjects, YO—Years Old; BI—Biometric Identification, HM—Human Motion, BF—Body Figure, VS—Vital Signals; -: information not available.
To ensure resilience against environmental changes, Dang et al. [62] integrated PointNet++ and a Gated Recurrent Unit (GRU) into PGGait. The system extracts point clouds and uses them as inputs to the NN, achieving over 96.75% accuracy; it also showed that accuracy varied minimally with walking direction. Using a cloud-based system with a Multiple-Input Multiple-Output (MIMO) FMCW radar, Abedi et al. [63] demonstrated real-time activity recognition with an LSTM-based model.
Micro-Doppler approaches using 4D radar point clouds and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) have been developed to trace trajectories in outdoor environments. To eliminate noise and improve targeting, algorithms such as the two-dimensional Constant False-Alarm Rate (2D-CFAR) detector and DBSCAN were employed in [64], achieving an accuracy of 94.44%. Ding et al. [65] developed a network based on the Identity Recognition Network (IDRVT) method, which incorporates local, global, and temporal information by combining CNNs and TCNs. By using Gradient Norm Aware Minimization (GAM), the accuracy improved to 96.44%.
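To show the idea behind the CFAR detection step mentioned above, here is a 1-D cell-averaging variant (2D-CFAR extends the same sliding window across a range-Doppler map); all window sizes and the threshold scale are assumptions for the example, not values from [64].

```python
import numpy as np

def ca_cfar_1d(power, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR detector over a 1-D power profile.

    For each cell under test, the noise level is estimated from `train`
    cells on each side (skipping `guard` cells around the cell itself);
    a detection is declared when the cell exceeds `scale` times that
    estimate, keeping the false-alarm rate roughly constant.
    """
    power = np.asarray(power, dtype=float)
    detections = np.zeros(power.size, dtype=bool)
    for i in range(power.size):
        left = power[max(0, i - guard - train): max(0, i - guard)]
        right = power[i + guard + 1: i + guard + 1 + train]
        noise = np.concatenate([left, right])
        if noise.size and power[i] > scale * noise.mean():
            detections[i] = True
    return detections

rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 128)   # noise floor
profile[40] += 50.0                   # strong target at cell 40
hits = np.flatnonzero(ca_cfar_1d(profile))
print(hits)
```

Because the threshold adapts to the local noise estimate, a strong target stands out even when the absolute noise floor varies across the scene.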
Finally, in 2024, Li et al. [66] developed MCGait, which can identify multiple users without restrictions on walking trajectory using micro-Doppler signatures and an EfficientNet architecture. It achieved 98.5% accuracy in single-person identification and 98.25% in multi-subject identification, even in scenarios with many reflections. Additionally, Huang et al. [67] examined the generalization of micro-Doppler signatures to a test scenario differing from the training scenario. When the test scenario matched the training data, the model achieved 96.2% accuracy; in a previously unseen scenario, performance decreased to 67%, highlighting the challenge of domain adaptation. To facilitate understanding of the discussed studies, Table 3 provides a comparative analysis of all reviewed works.

3. Discussion

A review of the literature relating to CW, UWB, and FMCW radars for gait-based identification is presented in this section. The main objective is to summarize the key findings, compare these three technologies, and suggest which is the most suitable for this application. To summarize and compare the reviewed data, Table 1, Table 2 and Table 3 were utilized. These tables provided the data used to create the figures presented in this section.
Although all of these technologies are capable of capturing unique gait patterns, their architectures play a critical role in their performance, which is the main factor in this field of study. In general, the more accurate a system is, the better suited it is for real-world applications. However, other factors besides accuracy must also be considered.
Figure 1 shows the number of articles that have been published in this field. It is evident that FMCW radar is the most studied for this application, accounting for more than 50% of the research studies conducted. The remaining articles are divided between UWB and CW radar studies, with UWB radar more commonly used than CW radar.
The operating frequencies and bandwidths of the systems vary, directly influencing the capability to detect targets and the range resolution [68]. As shown in Figure 2, CW radars typically operate within the 23 GHz to 26 GHz range, UWB radars within the 3 GHz to 6 GHz range, and FMCW radars with a center frequency around 77 GHz. These values reflect the fact that researchers tend to use commercially available radar hardware rather than developing custom systems. Only researchers studying UWB radar have constructed their own radars, as can be seen in [42,46,48]. Despite being more time-consuming, the development of custom radar systems is usually preferable, as it allows researchers to design according to their needs and can potentially lead to cheaper systems. When purchasing a radar kit, a potential limitation on its resolution is the bandwidth, which ranges between 1.7 GHz and 2.5 GHz for UWB radar kits and between 2 GHz and 4 GHz for FMCW radar kits.
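The link between bandwidth and range resolution noted above follows the standard relation ΔR = c/(2B). The short sketch below, written for illustration only (the function name is ours), evaluates it for bandwidths typical of the commercial kits discussed:

```python
# Theoretical range resolution of a radar as a function of sweep bandwidth B:
# delta_R = c / (2 * B); larger bandwidth gives finer (smaller) resolution.
C = 299_792_458.0  # speed of light in m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Theoretical range resolution in meters for a given bandwidth in Hz."""
    return C / (2.0 * bandwidth_hz)

# Bandwidths typical of the commercial kits cited in the text:
res_uwb_kit = range_resolution(2.5e9)   # upper end of UWB kits, ~0.06 m
res_fmcw_kit = range_resolution(4.0e9)  # upper end of FMCW kits, ~0.037 m
```

This makes explicit why the 4 GHz FMCW kits can, in principle, resolve scatterers a few centimeters apart, which matters when separating limb returns.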
In Figure 3, it can be seen that accuracy tends to increase with the increase in bandwidth. An increase in bandwidth results in an increase in resolution [69], which leads to an increase in accuracy. A number of other factors influence accuracy, such as the dataset diversity, the detection of multiple subjects, and the use of Machine Learning (ML) methods. Additionally, the graph indicates that studies with more participants are more likely to have higher accuracy. This is counterintuitive, as an increase in the number of participants usually introduces more variability, making accurate classification more difficult.
As shown in Figure 4, where the accuracies of all studies are taken into account, the accuracy tends to remain constant with increasing bandwidth. This is expected, since the figure encompasses studies involving multi-subject detection and open-set scenarios, as well as studies examining the accuracy of activities other than walking. Based on the accuracy values in both figures, FMCW radar operates effectively at bandwidths above 2 GHz, whereas UWB radar is often designed for bandwidths below 2.5 GHz.
A trend has also been observed in the radar operating ranges used by researchers. Although most studies, across all radar types, do not exceed a 10 m detection range, as shown in Figure 5, CW radar studies achieved detection ranges of 25 m in [35,36], and FMCW radar studies reached 20 m and 23 m in [59,61], respectively. UWB radar has the lowest range, with a maximum of 10 m, indicating that gait-based identification at longer distances remains a challenge. The wide bandwidth of UWB may improve range resolution, but it disperses the transmit power, resulting in reduced power spectral density. Consequently, signal attenuation increases, SNR decreases, and detection range is reduced [70]. As a result, there is a gap in the field concerning longer ranges, a subject worth studying since only four works were able to reach the 20 m range or beyond.
As the range increases, the accuracy is expected to decrease due to signal attenuation or interference from the environment [71]. According to Figure 6, the majority of studies with lower ranges tend to have higher accuracy due to lower signal attenuation [72].
It is also important to consider the size and variability of the datasets used to train the classifier, as well as the number of subjects in the test group. When a classifier is trained on a large and diverse set of data, its robustness and generalization are generally enhanced, which allows it to better approximate real-world conditions. In contrast, smaller datasets may limit the classifier’s effectiveness in real-world scenarios, as in [54], which achieved an accuracy of 98.27% but with a training set of only five subjects and a testing group of four. A good example of a study with large dataset diversity is [60], which achieved an accuracy rate of 98% with a dataset of 18 subjects. By merging gait information with heart sounds, extracted from a heart sound scanner, the authors were able to achieve this high level of accuracy.
This leads to the next important factor, the combination of biometrics. According to Figure 7, only a few studies have attempted to combine biometrics and radar, more specifically the FMCW radar studies. Ref. [60] combines Doppler-frame mapping with heart sound scalograms to improve accuracy relative to studies that use a single biometric. Biometric combination is also investigated in [20], where micro-Doppler signatures are combined with GEIs, and in [61], where micro-Doppler signatures are combined with 4D radar point cloud sequences.
Studies [20,60,61] share the same objective: to demonstrate that a combination of extracted features is more effective than using only one input in a classifier. However, Ref. [61] does not show this, obtaining a comparatively low accuracy of 81.65% due to challenges with cross-view conditions, which lowers the median value in Figure 8. According to Figure 8, we can conclude that the combination of biometrics is beneficial, improving the system’s accuracy in most cases, but there may be scenarios in which this approach is not effective.
By analyzing the features extracted from the tables, it can be concluded that micro-Doppler signatures and spectrograms are the most frequently used features. However, it is important to note that these signatures may also include environmental noise and variations in the body. In addition, “gait cubes” are introduced in [53], which combine micro-Doppler, micro-Range, and temporal data to reduce the need for large training datasets.
These micro-Doppler signatures and spectrograms are the most relevant features to extract for use in the classifier due to their ability to capture fine-grained motion patterns of different body parts, such as the head, torso, or limbs. These features are also capable of capturing the unique characteristics of each individual, such as walking speed, stride length, and step frequency [66].
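As a simplified illustration of how such a time-velocity map is obtained, the sketch below applies a short-time Fourier transform to a synthetic slow-time signal whose Doppler oscillates in a gait-like way. The signal model and all parameter values are hypothetical and are not taken from any reviewed study.

```python
import numpy as np
from scipy.signal import stft

def micro_doppler_spectrogram(slow_time, prf, nperseg=128, noverlap=96):
    """STFT magnitude (dB) of a radar slow-time signal, i.e., the kind of
    time-velocity map typically fed to a classifier as a micro-Doppler signature."""
    f, t, Z = stft(slow_time, fs=prf, nperseg=nperseg,
                   noverlap=noverlap, return_onesided=False)
    f = np.fft.fftshift(f)              # order Doppler bins from -prf/2 to +prf/2
    Z = np.fft.fftshift(Z, axes=0)
    return f, t, 20.0 * np.log10(np.abs(Z) + 1e-12)

# Synthetic example: a sinusoidally modulated Doppler, loosely mimicking a limb swing.
prf = 1000.0                                   # pulse-repetition frequency (Hz)
ts = np.arange(0, 2.0, 1.0 / prf)              # 2 s of slow-time samples
inst_freq = 100.0 + 80.0 * np.sin(2 * np.pi * 1.0 * ts)   # instantaneous Doppler (Hz)
signal = np.exp(2j * np.pi * np.cumsum(inst_freq) / prf)

freqs, times, sxx_db = micro_doppler_spectrogram(signal, prf)
col = int(np.argmin(np.abs(times - 0.5)))      # column near t = 0.5 s
peak_hz = float(freqs[np.argmax(sxx_db[:, col])])
```

The resulting magnitude map is what appears in the reviewed works as a spectrogram image, usually resized before being passed to a CNN.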
The extracted features represent the data to be fed into the classification model, and their size and number directly affect the system’s accuracy. Features like micro-Doppler spectrograms and micro-Doppler signatures commonly have larger dimensions, such as 224 × 224 or 227 × 227, which provide a detailed description of human movements but at the same time increase computational demands and the risk of overfitting. Overall, the choice of the number and size of features depends on the environment and the application. When operating in controlled environments, higher dimensions can be supported to maximize accuracy, as in [56], while real-time systems can benefit from lower dimensions [63]. Therefore, it is necessary to establish a balance between the dimensions of the features and the computational efficiency of the model.
To establish an appropriate balance between feature dimensions and computational efficiency, the intended application must be considered. In real-time scenarios, models based on lower-dimensional features are generally preferred, as they enable faster processing. Conversely, architectures such as ResNet-50 or other DCNNs using input sizes of 224 × 224 or higher tend to achieve superior accuracy, although at the cost of increased computational demands.
Machine learning methods are used to classify these features, which illustrates the importance of incorporating AI into the system. Figure 9 shows the frequency with which each AI technique is applied for gait-based identification. It can be concluded from Figure 9 that CNNs, DCNNs, and their variants are the most popular options in this field. The hybrid approaches of [18,65] combine the strengths of CNNs with temporal dependencies and spectrum analysis to improve performance. In this analysis, we observe that simpler methods, such as CNNs, are the most commonly used, although there is a growing trend toward DL, as in the case of hybrid methods and DCNNs.
Regarding AI properties, optimizers such as the Adaptive Moment Estimation (ADAM) optimizer, loss functions such as cross-entropy, and activation functions such as the Rectified Linear Unit (ReLU) are frequently used. Hyperparameters, such as learning rates, batch sizes, and numbers of epochs, were adjusted according to the data complexity and feature dimensions to maximize performance.
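To illustrate the basic building blocks named above (convolution, ReLU activation, pooling, and a softmax output on which cross-entropy would be computed during training), the following is a minimal forward-pass sketch in NumPy. It is a didactic toy with random, untrained weights, not any of the reviewed architectures, and all sizes (32 × 32 input, five subjects) are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def conv2d(x, k):
    """Valid 2D convolution of a single-channel map with one kernel."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling (truncating ragged edges)."""
    h, w = x.shape
    return x[:h // s * s, :w // s * s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
spectrogram = rng.standard_normal((32, 32))              # stand-in for a micro-Doppler map
kernel = rng.standard_normal((3, 3))                     # one (untrained) convolutional filter
features = max_pool(relu(conv2d(spectrogram, kernel)))   # 15 x 15 feature map
weights = rng.standard_normal((5, features.size))        # dense layer for 5 hypothetical subjects
probs = softmax(weights @ features.ravel())              # class probabilities over subjects
```

A practical system would stack many such filter banks and learn the weights by minimizing cross-entropy with ADAM, as described in the text.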
It is also important to take the identification of multiple subjects into account when selecting an AI algorithm. According to Figure 10, this is still an underexplored topic. At the time of writing, only eight of the thirty-nine works have been able to identify multiple subjects, none of which used UWB radar.
According to Figure 11, when multi-subject detection is achieved, it commonly uses CNNs and DCNNs.
Researchers generally achieved high accuracy values using various methods for multi-user identification. Among these methods, segmenting individual steps and analyzing lower limb movements [49] or using Kalman filters in conjunction with DCNNs to achieve person tracking and identification [54] proved to be effective options. However, the number of simultaneously identified individuals did not exceed five subjects in studies with FMCW radar and two subjects with CW radar. Additionally, from Figure 11, it is evident that accuracy decreases as the number of users increases, due to greater signal interference and overlapping patterns, which make it more difficult for the algorithms to distinguish between individuals. Despite the achievement of multi-subject detection, the majority of the studies were conducted in controlled environments, limiting their applicability to real-world scenarios.
A possible solution to this may be the use of DCNNs, as they constitute the AI technique with the highest accuracy and the only one capable of identifying five subjects simultaneously. Furthermore, the performance of DCNN-based models tends to remain consistent between single-subject and multi-subject scenarios, provided that the FMCW radar is capable of effectively separating individual subjects in space [73].
The testing environment also has an impact on the measured accuracy: whether the test is conducted in a controlled environment, whether it takes place in the same location as the training tests, and the degree of signal reflection present all influence the results.
As the three tables show, the studies were conducted in controlled environments, usually empty rooms, lobbies, and corridors, lacking real-life testing. However, some studies attempted to determine whether the classification would perform as well in a testing environment different from the one used for training. Such cross-environment evaluation was performed only with FMCW radar, and according to the results shown in Figure 12, there is a decrease of 20.32% in accuracy when the training environment differs from the testing environment.
The presence of static and dynamic objects creates unwanted reflections and noise, which can distort the radar signal and affect the accuracy of the classifier. Studies usually collect data in environments without obstacles, such as corridors, lobbies, and empty rooms, to avoid these issues. However, in some studies, such as [62], obstacles are present in the corners of the room, while in [66] the tested corridor has large windows and glass walls. These can generate unwanted reflections and ghost targets, better approximating real-world scenarios.
In order to eliminate noise and unwanted reflections from the environment, the raw data signal undergoes a preprocessing stage before being applied to a classifier. This step is crucial to ensuring that only relevant features are extracted for classification. For noise filtering, steps such as the 2D Fast Fourier Transform (FFT) and interference suppression are essential. There is usually a significant reduction in accuracy when the preprocessing module is omitted [64].
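As an illustration of this preprocessing stage, the sketch below computes a range-Doppler map from a synthetic FMCW frame via a 2D-FFT, using mean removal across chirps as a simple static-clutter (interference) suppression step. The frame model, bin positions, and dimensions are all hypothetical.

```python
import numpy as np

def range_doppler_map(frame, clutter_removal=True):
    """2D-FFT of an FMCW frame (chirps x fast-time samples) into a
    range-Doppler magnitude map; mean removal across chirps suppresses
    static reflections before the Doppler FFT."""
    x = frame.astype(complex)
    if clutter_removal:
        x = x - x.mean(axis=0, keepdims=True)           # remove static clutter
    x = np.fft.fft(x, axis=1)                           # fast-time FFT -> range bins
    x = np.fft.fftshift(np.fft.fft(x, axis=0), axes=0)  # slow-time FFT -> Doppler bins
    return np.abs(x)

# Synthetic frame: a static reflector at range bin 10 and a mover at
# range bin 20 with Doppler bin 8 (64 chirps of 128 samples each).
m, n = np.meshgrid(np.arange(64), np.arange(128), indexing="ij")
frame = (np.exp(2j * np.pi * 10 * n / 128)
         + np.exp(2j * np.pi * 20 * n / 128) * np.exp(2j * np.pi * 8 * m / 64))
rd = range_doppler_map(frame)
peak = np.unravel_index(np.argmax(rd), rd.shape)        # (Doppler bin, range bin)
```

After this stage, a detector such as 2D-CFAR would threshold the map and only the moving target would survive, since the static reflector has been suppressed.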
There is also the possibility of open-set scenarios, in which the radar captures data from an intruder, i.e., a person who did not take part in the training data for the classifier. In [59], a DCNN-based classifier along with the DBSCAN algorithm is used to identify the intruder, achieving 91.36% accuracy with one intruder per five persons, with accuracy decreasing as the number of intruders increases. An open-set scenario with UWB radar was also explored in [48], in which unknown gaits were evaluated with a DCNN, more specifically ResNet-50, achieving 61.46% accuracy. It is important to consider that once systems are used in real-world scenarios, they cannot rely solely on trained datasets and must be capable of rejecting subjects unknown to the system.
Taking a closer look at Figure 13 and Figure 14, it is possible to verify that CW has the lowest median accuracy, indicating that this type of radar is the least effective for gait-based identification. This is possibly the result of its simple architecture, which cannot measure range (distance) and can only measure velocity indirectly. Thus, the CW radar is the worst option for gait-based identification.
Both FMCW and UWB radars have high median accuracy values, but FMCW has the highest. This can be attributed to the fact that, since UWB systems are more complex and expensive to implement when compared to FMCW, they see less use and are less researched, thus having a lower median. As discussed previously in this section, FMCW radars are capable of detecting multiple subjects and have longer ranges than UWB radars.
Considering the FMCW radar has higher accuracy, ability to measure both range and velocity, lower cost, and ease of implementation, it appears to be a promising option for gait-based identification.
In light of this, focusing on FMCW as the preferred type for gait-based identification, Figure 15 and Figure 16 illustrate how often each AI technique is used for gait-based identification with this radar. Based on the analysis of both figures, CNNs, DCNNs, and their variants are the most popular choices in this area, as seen in Figure 9. This is possibly due to their ability to extract detailed spatial features more effectively than other methods [74]. Other methods, such as DDRN and MCL, are rarely combined with this radar technology. There have also been very few hybrid approaches, combining only DCNN with TCN [61], CNN with LSTM [62], and CNN with TCN [65,67]. Combining a CNN or DCNN with a TCN can be advantageous, since it integrates the spatial feature extraction capabilities of the NN with the temporal modeling capabilities of the TCN.
In addition, the accuracy of these two architectures, CNN and DCNN, is greater than that of all other methods, as shown in Figure 17 and Figure 18. Furthermore, the variability of the results for these two ML methods is lower than for the others, indicating their consistency in achieving high accuracy values. As a result of this stability, CNNs and DCNNs appear to provide reliable performance on a wide variety of datasets and under diverse conditions. It is important to note that DCNNs are capable of very high performance, as illustrated in Figure 17, but they are also more complex, more time-consuming, and require a larger amount of data [75]. The median value of the DCNN decreased when considering all accuracy values, as illustrated in Figure 18. Considering this, DCNNs may be capable of achieving peak performance, but their accuracy may vary depending on specific training conditions or datasets. Although other techniques are also used in the area, research efforts are concentrated on the two methods mentioned previously.
Overall, the literature review identifies some trends and gaps in the field of gait-based identification using radar. The trends are listed as follows:
  • The FMCW radar technology is the subject of most research;
  • An increase in bandwidth is associated with an increase in resolution, which increases accuracy;
  • In AI, CNNs and DCNNs are the most commonly used techniques;
  • Micro-Doppler signatures and spectrograms are usually inputs to the classifier;
  • Using multimodal systems increases the accuracy of identification;
  • The majority of the studies are conducted in controlled environments.
The gaps are as follows:
  • Datasets with low diversity;
  • Systems with a short range: few works have exceeded 20 m;
  • Multimodal systems are rarely utilized;
  • The number of works detecting multiple subjects is very low;
  • In most cases, controlled environments are used for testing.

4. Conclusions

In this article, different radar technologies (CW, UWB, and FMCW) are compared to determine the best choice of radar technology for gait-based identification in real-world scenarios, focusing on performance and implementation complexity. An array of factors was considered in the analysis, including identification accuracy, operating frequency, bandwidth, dataset, range, detection, feature extraction, detection of multiple subjects, and the use of AI. Based on the results, it was found that the CW radar, considered the simplest, presents the lowest accuracy compared with the other radar systems. When comparing UWB and FMCW radars, FMCW is more accurate, simpler to implement, can detect multiple subjects, and achieves longer ranges. The use of AI techniques for biometric identification is another important aspect. Among the methods studied, CNNs and DCNNs are the most efficient, particularly with micro-Doppler signature features. In addition, the fusion of biometrics can enhance accuracy; however, this possibility has only been studied by a few researchers. Despite the improvements shown, most studies were conducted in controlled environments and with small datasets. Additionally, most of the research was conducted in closed-set scenarios. Further work can be undertaken to validate the systems in real-world scenarios, to test multi-subject identification with more than five users, to explore biometric fusion, and to develop algorithms for open-set scenarios. Emerging AI techniques such as self-supervised learning offer promising solutions to reduce the reliance on large datasets, which remains a limitation in current gait identification studies. Future research should investigate the integration of these methods to improve robustness, especially in open-set or multi-subject scenarios and under real-world variability.
In summary, FMCW radar still has much potential to be explored and developed, but it has already demonstrated its advantages for applications such as gait-based identification.

Author Contributions

Conceptualization, A.R., P.P. and D.A.; methodology, B.F.; validation, B.F., Á.F., A.R., B.S., P.P. and D.A.; formal analysis, B.F., P.P. and D.A.; investigation, B.F.; data curation, B.F., A.R., P.P. and D.A.; writing—original draft preparation, B.F.; writing—review and editing, B.F., Á.F., A.R., B.S., P.P. and D.A.; supervision, P.P. and D.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by the Fundação para a Ciência e Tecnologia (FCT) through Fundo Social Europeu (FSE) and by Programa Operacional Regional do Centro under the PhD grants 2023.00385.BDANA and 2024.00376.BD. It was also funded by national funds through FCT-Fundação para a Ciência e a Tecnologia, I.P., under the project UIDB/50008/2020, and the DOI identifier https://doi.org/10.54499/UIDB/50008/2020.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gorodnichy, D.O. Evolution and evaluation of biometric systems. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications, Ottawa, ON, Canada, 8–10 July 2009; pp. 1–8. [Google Scholar]
  2. Serratosa, F. Security in biometric systems. arXiv 2020, arXiv:2011.05679. [Google Scholar]
  3. Al-Nima, R.R.; Dlay, S.; Woo, W. A new approach to predicting physical biometrics from behavioural biometrics. Int. J. Comput. Inf. Eng. 2014, 8, 2001–2006. [Google Scholar]
  4. Yang, W.; Wang, S.; Hu, J.; Zheng, G.; Valli, C. Security and accuracy of fingerprint-based biometrics: A review. Symmetry 2019, 11, 141. [Google Scholar] [CrossRef]
  5. Han, C.C.; Cheng, H.L.; Lin, C.L.; Fan, K.C. Personal authentication using palm-print features. Pattern Recognit. 2003, 36, 371–381. [Google Scholar]
  6. Winston, J.J.; Hemanth, D.J. A comprehensive review on iris image-based biometric system. Soft Comput. 2019, 23, 9361–9384. [Google Scholar] [CrossRef]
  7. Prabha, R.S.; Vidhyapriya, R. Intruder detection system based on behavioral biometric security. J. Sci. Ind. Res. 2017, 76, 90–94. [Google Scholar]
  8. Rashid, R.A.; Mahalin, N.H.; Sarijari, M.A.; Aziz, A.A.A. Security system using biometric technology: Design and implementation of voice recognition system (VRS). In Proceedings of the 2008 International Conference on Computer and Communication Engineering, Kuala Lumpur, Malaysia, 13–15 May 2008; pp. 898–902. [Google Scholar]
  9. Cola, G.; Avvenuti, M.; Vecchio, A. Real-time identification using gait pattern analysis on a standalone wearable accelerometer. Comput. J. 2017, 60, 1173–1186. [Google Scholar]
  10. Katiyar, R.; Pathak, V.K.; Arya, K. A study on existing gait biometrics approaches and challenges. Int. J. Comput. Sci. Issues (IJCSI) 2013, 10, 135. [Google Scholar]
  11. Boyd, J.E.; Little, J.J. Biometric gait recognition. In Advanced Studies in Biometrics: Summer School on Biometrics, Alghero, Italy, 2–6 June 2003. Revised Selected Lectures and Papers; Springer: Berlin/Heidelberg, Germany, 2005; pp. 19–42. [Google Scholar]
  12. Yamada, H.; Ahn, J.; Mozos, O.M.; Iwashita, Y.; Kurazume, R. Gait-based person identification using 3D LiDAR and long short-term memory deep networks. Adv. Robot. 2020, 34, 1201–1211. [Google Scholar]
  13. Singh, J.P.; Singh, U.P.; Jain, S. Model-based person identification in multi-gait scenario using hybrid classifier. Multimed. Syst. 2023, 29, 1103–1116. [Google Scholar]
  14. Batchuluun, G.; Yoon, H.S.; Kang, J.K.; Park, K.R. Gait-based human identification by combining shallow convolutional neural network-stacked long short-term memory and deep convolutional neural network. IEEE Access 2018, 6, 63164–63186. [Google Scholar]
  15. Wang, Y.; Chen, Y.; Bhuiyan, M.Z.A.; Han, Y.; Zhao, S.; Li, J. Gait-based human identification using acoustic sensor and deep neural network. Future Gener. Comput. Syst. 2018, 86, 1228–1237. [Google Scholar]
  16. Xiao, Z.; Zhou, S.; Wen, X.; Ling, S.; Yang, X. Pattern-independent human gait identification with commodity WiFi. In Proceedings of the 2024 IEEE Wireless Communications and Networking Conference (WCNC), Dubai, United Arab Emirates, 21–24 April 2024; pp. 1–6. [Google Scholar]
  17. Yin, Y.; Zhang, X.; Lan, R.; Sun, X.; Wang, K.; Ma, T. Gait recognition algorithm of coal mine personnel based on LoRa. Appl. Sci. 2023, 13, 7289. [Google Scholar] [CrossRef]
  18. Dong, S.; Xia, W.; Li, Y.; Zhang, Q.; Tu, D. Radar-based human identification using deep neural network for long-term stability. IET Radar Sonar Navig. 2020, 14, 1521–1527. [Google Scholar]
  19. Yang, Y.; Hou, C.; Lang, Y.; Yue, G.; He, Y.; Xiang, W. Person identification using micro-Doppler signatures of human motions and UWB radar. IEEE Microw. Wirel. Components Lett. 2019, 29, 366–368. [Google Scholar]
  20. Shi, Y.; Du, L.; Chen, X.; Liao, X.; Yu, Z.; Li, Z.; Wang, C.; Xue, S. Robust gait recognition based on deep CNNs with camera and radar sensor fusion. IEEE Internet Things J. 2023, 10, 10817–10832. [Google Scholar]
  21. Gao, X.; Roy, S.; Xing, G.; Jin, S. Perception through 2D-MIMO FMCW automotive radar under adverse weather. In Proceedings of the 2021 IEEE International Conference on Autonomous Systems (ICAS), Montreal, QC, Canada, 11–13 August 2021; pp. 1–5. [Google Scholar]
  22. Vales, V.B.; Domínguez-Bolaño, T.; Escudero, C.J.; García-Naya, J.A. An IoT system for smart building combining multiple mmWave FMCW radars applied to people counting. IEEE Internet Things J. 2024, 11, 35306–35316. [Google Scholar]
  23. Tahmoush, D. Review of micro-Doppler signatures. IET Radar Sonar Navig. 2015, 9, 1140–1146. [Google Scholar] [CrossRef]
  24. Niazi, U.; Hazra, S.; Santra, A.; Weigel, R. Radar-based efficient gait classification using Gaussian prototypical networks. In Proceedings of the 2021 IEEE Radar Conference (RadarConf21), Atlanta, GA, USA, 7–14 May 2021; pp. 1–5. [Google Scholar]
  25. Gouveia, C. Bio-Radar: Sistema de Aquisição de Sinais Vitais Sem Contacto. Ph.D. Thesis, Universidade de Aveiro, Aveiro, Portugal, 2023. [Google Scholar]
  26. Boric-Lubecke, O.; Lubecke, V.M.; Droitcour, A.D.; Park, B.K.; Singh, A. Doppler Radar Physiological Sensing; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  27. He, X.; Nie, W.; Zhou, L.; Zhou, M. A target velocity estimation approach based on UWB radar. In Proceedings of the 2024 International Conference on Microwave and Millimeter Wave Technology (ICMMT), Beijing, China, 16–19 May 2024; Volume 1, pp. 1–3. [Google Scholar]
  28. Saad, M.; Maali, A.; Azzaz, M.S.; Bouaraba, A.; Benssalah, M. Development of an IR-UWB radar system for high-resolution through-wall imaging. Prog. Electromagnet Res. C 2022, 124, 81–96. [Google Scholar] [CrossRef]
  29. Bennet, M.A.; Narmatha, J.; Pavithra, B.; Suvetha, P.; Sandhyalakshmi, A. Hardware implementation of UWB radar for detection of trapped victims in complex environment. Int. J. Smart Sens. Intell. Syst. 2017, 10, 236–258. [Google Scholar]
  30. Vasconcelos, M.; Nallabolu, P.; Li, C. Range resolution improvement in FMCW radar through VCO’s nonlinearity compensation. In Proceedings of the 2023 IEEE Topical Conference on Wireless Sensors and Sensor Networks, Las Vegas, NV, USA, 22–25 January 2023; pp. 53–56. [Google Scholar]
  31. Kwak, S.; Jeon, D.; Lee, S. Adjusting detectable velocity range in FMCW radar systems through selective sampling. IEEE J. Sel. Areas Sens. 2024, 1, 249–260. [Google Scholar] [CrossRef]
  32. Klarenbeek, G.; Harmanny, R.; Cifola, L. Multi-target human gait classification using LSTM recurrent neural networks applied to micro-Doppler. In Proceedings of the 2017 European Radar Conference (EURAD), Nuremberg, Germany, 11–13 October 2017; pp. 167–170. [Google Scholar]
  33. Cao, P.; Xia, W.; Ye, M.; Zhang, J.; Zhou, J. Radar-ID: Human identification based on radar micro-Doppler signatures using deep convolutional neural networks. IET Radar Sonar Navig. 2018, 12, 729–734. [Google Scholar] [CrossRef]
  34. Abdulatif, S.; Aziz, F.; Armanious, K.; Kleiner, B.; Yang, B.; Schneider, U. Person identification and body mass index: A deep learning-based study on micro-Dopplers. In Proceedings of the 2019 IEEE Radar Conference (RadarConf), Boston, MA, USA, 22–26 April 2019; pp. 1–6. [Google Scholar]
  35. Papanastasiou, V.; Trommel, R.; Harmanny, R.; Yarovoy, A. Deep learning-based identification of human gait by radar micro-Doppler measurements. In Proceedings of the 2020 17th European Radar Conference (EuRAD), Utrecht, The Netherlands, 10–15 January 2021; pp. 49–52. [Google Scholar]
  36. Qiao, X.; Feng, Y.; Shan, T.; Tao, R. Person identification with low training sample based on micro-Doppler signatures separation. IEEE Sens. J. 2022, 22, 8846–8857. [Google Scholar] [CrossRef]
  37. Shioiri, K.; Saho, K. Exploration of effective time-velocity distribution for Doppler-radar-based personal gait identification using deep learning. Sensors 2023, 23, 604. [Google Scholar] [CrossRef] [PubMed]
  38. Jiang, X.; Zhang, L.; Li, L. Multi-task learning radar transformer (MLRT): A personal identification and fall detection network based on IR-UWB radar. Sensors 2023, 23, 5632. [Google Scholar] [CrossRef]
  39. Mokhtari, G.; Zhang, Q.; Hargrave, C.; Ralston, J.C. Non-wearable UWB sensor for human identification in smart home. IEEE Sens. J. 2017, 17, 3332–3340. [Google Scholar] [CrossRef]
  40. Rana, S.P.; Dey, M.; Ghavami, M.; Dudley, S. Non-contact human gait identification through IR-UWB edge-based monitoring sensor. IEEE Sens. J. 2019, 19, 9282–9293. [Google Scholar] [CrossRef]
  41. Vecchio, A.; Cola, G. Method based on UWB for user identification during gait periods. Healthc. Technol. Lett. 2019, 6, 121–125. [Google Scholar] [CrossRef]
  42. Lang, Y.; Wang, Q.; Yang, Y.; Hou, C.; He, Y.; Xu, J. Person identification with limited training data using radar micro-Doppler signatures. Microw. Opt. Technol. Lett. 2020, 62, 1060–1068. [Google Scholar] [CrossRef]
  43. Sakamoto, T. Personal identification using ultrawideband radar measurement of walking and sitting motions and a convolutional neural network. arXiv 2020, arXiv:2008.02182. [Google Scholar]
  44. Rana, S.P.; Dey, M.; Ghavami, M.; Dudley, S. 3-D gait abnormality detection employing contactless IR-UWB sensing phenomenon. IEEE Trans. Instrum. Meas. 2021, 70, 1–10. [Google Scholar] [CrossRef]
  45. Yang, Y.; Ge, Y.; Li, B.; Wang, Q.; Lang, Y.; Li, K. Multiscenario open-set gait recognition based on radar micro-Doppler signatures. IEEE Trans. Instrum. Meas. 2022, 71, 1–13. [Google Scholar] [CrossRef]
  46. Yang, Y.; Yang, X.; Sakamoto, T.; Fioranelli, F.; Li, B.; Lang, Y. Unsupervised domain adaptation for disguised-gait-based person identification on micro-Doppler signatures. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6448–6460. [Google Scholar] [CrossRef]
  47. He, Y.; Guo, H.; Zhang, X.; Li, R.; Lang, Y.; Yang, Y. Person identification based on fine-grained micro-Doppler signatures and UWB radar. IEEE Sens. J. 2023, 23, 21421–21432. [Google Scholar] [CrossRef]
  48. Yang, Y.; Zhao, D.; Yang, X.; Li, B.; Wang, X.; Lang, Y. Open-scenario-oriented human gait recognition using radar micro-Doppler signatures. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 6420–6432. [Google Scholar] [CrossRef]
  49. Yang, X.; Liu, J.; Chen, Y.; Guo, X.; Xie, Y. MU-ID: Multi-user identification through gaits using millimeter wave radios. In Proceedings of the IEEE INFOCOM 2020-IEEE Conference on Computer Communications, Toronto, ON, Canada, 6–9 July 2020; pp. 2589–2598. [Google Scholar]
  50. Addabbo, P.; Bernardi, M.L.; Biondi, F.; Cimitile, M.; Clemente, C.; Orlando, D. Gait recognition using FMCW radar and temporal convolutional deep neural networks. In Proceedings of the 2020 IEEE 7th International Workshop on Metrology for AeroSpace (MetroAeroSpace), Pisa, Italy, 22–24 June 2020; pp. 171–175. [Google Scholar]
  51. Ni, Z.; Huang, B. Human identification based on natural gait micro-Doppler signatures using deep transfer learning. IET Radar Sonar Navig. 2020, 14, 1640–1646. [Google Scholar] [CrossRef]
  52. Zhou, B.; Lu, J.; Xie, X.; Zhou, H. Human identification based on mmWave radar using deep convolutional neural network. In Proceedings of the 2021 3rd International Symposium on Smart and Healthy Cities (ISHC), Toronto, ON, Canada, 28–29 December 2021; pp. 90–94. [Google Scholar]
  53. Ozturk, M.Z.; Wu, C.; Wang, B.; Liu, K.R. Gait-based people identification with millimeter-wave radio. In Proceedings of the 2021 IEEE 7th World Forum on Internet of Things (WF-IoT), New Orleans, LA, USA, 14 June–31 July 2021; pp. 391–396. [Google Scholar]
  54. Pegoraro, J.; Meneghello, F.; Rossi, M. Multiperson continuous tracking and identification from mm-wave micro-Doppler signatures. IEEE Trans. Geosci. Remote Sens. 2020, 59, 2994–3009. [Google Scholar] [CrossRef]
  55. Addabbo, P.; Bernardi, M.L.; Biondi, F.; Cimitile, M.; Clemente, C.; Orlando, D. Temporal convolutional neural networks for radar micro-Doppler based gait recognition. Sensors 2021, 21, 381. [Google Scholar] [CrossRef] [PubMed]
  56. Ni, Z.; Huang, B. Open-set human identification based on gait radar micro-Doppler signatures. IEEE Sens. J. 2021, 21, 8226–8233. [Google Scholar] [CrossRef]
  57. Huang, Y.; Jiang, E.; Xu, H.; Zhang, G. Person identification using a new CNN-based method and radar gait micro-Doppler signatures. J. Phys. Conf. Ser. 2022, 2258, 012044. [Google Scholar]
  58. Xiang, Y.; Huang, Y.; Xu, H.; Zhang, G.; Wang, W. A multi-characteristic learning method with micro-Doppler signatures for pedestrian identification. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau, China, 8–12 October 2022; pp. 3794–3799. [Google Scholar]
  59. Ni, Z.; Huang, B. Gait-based person identification and intruder detection using mm-wave sensing in multi-person scenario. IEEE Sens. J. 2022, 22, 9713–9723. [Google Scholar] [CrossRef]
  60. Alkasimi, A.; Shepard, T.; Wagner, S.; Pancrazio, S.; Pham, A.V.; Gardner, C.; Funsten, B. Dual-biometric human identification using radar deep transfer learning. Sensors 2022, 22, 5782. [Google Scholar] [CrossRef] [PubMed]
  61. Ma, C.; Liu, Z. mDS-PCGR: A bi-modal gait recognition framework with the fusion of 4D radar point cloud sequences and micro-Doppler signatures. IEEE Sens. J. 2024, 24, 8227–8240. [Google Scholar] [CrossRef]
  62. Dang, X.; Tang, Y.; Hao, Z.; Gao, Y.; Fan, K.; Wang, Y. PGGait: Gait recognition based on millimeter-wave radar spatio-temporal sensing of multidimensional point clouds. Sensors 2023, 24, 142. [Google Scholar] [CrossRef] [PubMed]
  63. Abedi, H.; Ansariyan, A.; Morita, P.P.; Wong, A.; Boger, J.; Shaker, G. AI-powered noncontact in-home gait monitoring and activity recognition system based on mm-wave FMCW radar and cloud computing. IEEE Internet Things J. 2023, 10, 9465–9481. [Google Scholar] [CrossRef]
  64. Ma, C.; Liu, Z. A novel spatial–temporal network for gait recognition using millimeter-wave radar point cloud videos. Electronics 2023, 12, 4785. [Google Scholar] [CrossRef]
  65. Ding, J.; Xu, Z.; Li, D.; Yang, J.; Chen, Z. A novel identity recognition network for person identification via radar micro-Doppler signatures. In Proceedings of the 2023 Cross Strait Radio Science and Wireless Technology Conference (CSRSWTC), Guilin, China, 10–13 November 2023; pp. 1–3. [Google Scholar]
  66. Li, J.; Li, B.; Wang, L.; Liu, W. Passive multi-user gait identification through micro-Doppler calibration using mmWave radar. IEEE Internet Things J. 2023, 11, 6868–6877. [Google Scholar] [CrossRef]
  67. Petchtone, P.; Worasawate, D.; Pongthavornkamol, T.; Fukawa, K.; Chang, Y. Experimental results on FMCW radar based human recognition using only Doppler information. In Proceedings of the 2024 21st International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), Khon Kaen, Thailand, 27–30 May 2024; pp. 1–5. [Google Scholar]
  68. Gu, C.; Zhang, Z.; Liu, J.; Mao, J. Characterization of the frequency ramp nonlinearity impact on the range estimation accuracy and resolution in LFMCW radars. IEEE Trans. Instrum. Meas. 2023, 72, 1–12. [Google Scholar] [CrossRef]
  69. Lee, S.; Kim, M.; Jung, Y.; Lee, S. Signal extension method for improved range resolution of frequency-modulated continuous wave radar in indoor environments. Appl. Sci. 2024, 14, 9456. [Google Scholar] [CrossRef]
  70. Shanmugan, K. Estimating the power spectral density of ultra wideband signals. In Proceedings of the 2002 IEEE International Conference on Personal Wireless Communications, New Delhi, India, 15–17 December 2002; pp. 124–128. [Google Scholar] [CrossRef]
  71. Berenguer, M.; Lee, G.; Sempere-Torres, D.; Zawadzki, I. A variational method for attenuation correction of radar signal. In Proceedings of the ERAD, Delft, The Netherlands, 18–22 November 2002; Volume 11. [Google Scholar]
  72. Wen, C.; Zenghui, L.; Kan, J.; Jian, Y.; Chunmao, Y. Long-distance imaging with frequency modulation continuous wave and inverse synthetic aperture radar. IET Radar Sonar Navig. 2015, 9, 653–659. [Google Scholar] [CrossRef]
  73. Sacco, G.; Mercuri, M.; Hornung, R.; Visser, H.; Lorato, I.; Pisa, S.; Dolmans, G. A SISO FMCW radar based on inherently frequency scanning antennas for 2-D indoor tracking of multiple subjects. Sci. Rep. 2023, 13, 16701. [Google Scholar] [CrossRef] [PubMed]
  74. Bodapati, J.D.; Veeranjaneyulu, N. Feature extraction and classification using deep convolutional neural networks. J. Cyber Secur. Mobil. 2019, 261–276. [Google Scholar] [CrossRef]
  75. Li, G.; Togo, R.; Ogawa, T.; Haseyama, M. Dataset complexity assessment based on cumulative maximum scaled area under Laplacian spectrum. Multimed. Tools Appl. 2022, 81, 32287–32303. [Google Scholar] [CrossRef]
Figure 1. Number of articles per type of radar.
Figure 2. Number of articles per frequency used.
Figure 3. Accuracy as a function of bandwidth. The circle size is proportional to the dataset size (number of subjects). Only the best accuracy values reported in each study are considered.
Figure 4. Accuracy as a function of bandwidth. The circle size is proportional to the dataset size (number of subjects). All accuracy values reported in each study are considered.
Figure 5. Number of articles per range achieved.
Figure 6. Accuracy as a function of the range achieved. Only the best accuracy values reported in each study are considered.
Figure 7. Number of articles with multimodal and single biometrics.
Figure 8. Accuracy of works with multimodal or single biometrics. Only the best accuracy values reported in each study are considered.
Figure 9. Number of articles per AI technique. The first value in each cell indicates the total number of articles, while the values inside parentheses represent the number of articles across different radar types: (CW/UWB/FMCW). The blank cells indicate that no articles have been published for the specific combination.
Figure 10. Number of articles with multi-subject detection and single-subject detection.
Figure 11. Accuracy of AI techniques as a function of the number of simultaneous users.
Figure 12. Effect of training and testing environment consistency on classifier accuracy. Only the best accuracy values reported in each study are considered.
Figure 13. Boxplot of accuracy as a function of radar type. Only the best accuracy values reported in each study are considered.
Figure 14. Boxplot of accuracy as a function of radar type. All accuracy values reported in each study are considered.
Figure 15. Number of articles per AI technique on FMCW radar.
Figure 16. Percentage of AI techniques used in FMCW radar studies.
Figure 17. Boxplot of accuracy as a function of AI technique on FMCW radar. Only the best accuracy values reported in each study are considered.
Figure 18. Boxplot of accuracy as a function of AI technique on FMCW radar. All accuracy values reported in each study are considered.
Table 1. Comparison of previous studies for biometric identification using CW radar.

| Ref. | Date | Frequency [GHz] | Dataset Population | Range [m] | Detection | Features Extracted | Size and Feature Dimensions | Multiple Subject Detection | Radar Module | AI Used | AI Properties | Accuracy | Environment |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [32] | 2017 | - | 25 M/4 F; 18–47 YO | - | BI, HM | Micro-Doppler signatures | Spectrogram length: 192 time bins (dataset A), 39 time bins (dataset B); time frame: 1.25 s | YES | - | LSTM-RNN | ADAM optimizer; epochs: 100; learning rate: 1 × 10⁻³; batch size: 512 | 89.1% (dataset A: 192 steps); 87.07% (dataset B: 39 steps) | - |
| [33] | 2018 | 24 | 12 M/12 F; 24–28 YO; 157–186 cm; 48–75 kg | 0–10 | BI, HM | Micro-Doppler signatures | Spectrograms per person: 4000; input spectrogram size: 227 × 227 | NO | IVS-179 | DCNN (AlexNet) | Caffe network; learning rate: 1 × 10⁻⁴; batch size: 32 (training), 16 (testing); weight decay: 1 × 10⁻⁴; hidden nodes: 406 | 97.1% (4 S); 90.9% (6 S); 89.1% (8 S); 85.6% (10 S); 77.4% (12 S); 77.6% (16 S); 68.9% (20 S) | Outdoor (walking area clean) |
| [34] | 2019 | 25 | 17 M/5 F; 162–195 cm; 54–115 kg | 3–10 | BI, HM | Micro-Doppler signatures | Input image size: 256 × 256 × 3 | NO | - | CNN (ResNet-50) | ADAM optimizer | 98% | Treadmill |
| [18] | 2020 | 24 | 3 M/4 F; 20–25 YO; 160–175 cm; 46–70 kg | 3–10 | BI, HM | Micro-Doppler signatures | Features from LSTM: 2048; features from CNN+RNN: 2048 | NO | IVS-179 | CNN+RNN, LSTM | ADAM optimizer; epochs: 100; learning rate: 1 × 10⁻⁴; batch size: 16 | 99% (validation set); 90% (test set) | Corridor (walking area clean) |
| [35] | 2021 | 10 | 16 M/6 F; 21–55 YO; 167–207 cm | 3–25 | BI, HM | Micro-Doppler signatures | Spectrogram duration: 1.25 s; total spectrograms: 12,803; input spectrogram size: 128 × 192 | NO | - | DCNN (VGG-16) | ADAM optimizer; epochs: 500; learning rate: 1 × 10⁻⁴; learning rate decay: 1 × 10⁻⁵; mini-batch size: 32 | 93.5% | Outdoor (walking area clean) |
| [36] | 2022 | 5.8 | 6 M/4 F; 23–33 YO; 156–185 cm; 56–86 kg | 3–25 | BI, HM | Torso Doppler signals | Total samples: 1200; samples per person: 120; test sample size: 320 × 420 | NO | SDR-KIT 580B | CPCAN-3 | SVM classifier; loss function: hinge loss | 92.3% | Corridor (walking area clean) |
| [37] | 2023 | 24 | 22 M/3 F; avg. 22.5 YO | 4–12 | BI, HM | Gait time-velocity images | Window length: 32 samples (53.3 ms); total images: 2625 | NO | BSS-110 | CNN (ResNet-16) | Learning rate: 1 × 10⁻² to 3 × 10⁻⁴; batch size: 64; loss function: cross-entropy | 99.1% | Outdoor: walkway (walking area clean) |

M—Male, F—Female, S—Subjects, YO—Years Old; BI—Biometric Identification, HM—Human Motion; "-"—information not available.
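Nearly every study in Table 1 trains its classifier on micro-Doppler spectrograms obtained by applying a short-time Fourier transform to the radar's baseband I/Q signal. The following is a minimal illustrative sketch of that pre-processing step, not code from any reviewed paper: it builds a simulated CW return (a constant torso Doppler plus a sinusoidally modulated limb component), computes a two-sided spectrogram with SciPy, and normalises it to a [0, 1] image as is typically done before resizing to a CNN input. All radar and gait parameters (`fs`, `f_torso`, `f_limb`, `f_gait`) are arbitrary assumptions for demonstration.

```python
# Illustrative sketch: micro-Doppler spectrogram of a simulated CW radar
# return. All parameters below are assumed for demonstration only.
import numpy as np
from scipy import signal

fs = 2000                          # baseband sampling rate [Hz] (assumed)
t = np.arange(0, 4.0, 1 / fs)      # 4 s observation window

f_torso = 60.0                     # constant torso Doppler shift [Hz]
f_limb, f_gait = 80.0, 1.0         # limb Doppler amplitude [Hz], cadence [Hz]

# Torso echo: constant Doppler. Limb echo: phase chosen so its instantaneous
# frequency is f_limb * cos(2*pi*f_gait*t), i.e. a swinging-limb signature.
torso = np.exp(1j * 2 * np.pi * f_torso * t)
limb = 0.4 * np.exp(1j * (f_limb / f_gait) * np.sin(2 * np.pi * f_gait * t))
noise = 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))
x = torso + limb + noise

# Short-time Fourier transform -> two-sided micro-Doppler spectrogram.
f, tt, Sxx = signal.spectrogram(x, fs=fs, window="hann", nperseg=128,
                                noverlap=112, return_onesided=False,
                                mode="magnitude")
f = np.fft.fftshift(f)             # order frequencies from -fs/2 to +fs/2
spec = np.fft.fftshift(Sxx, axes=0)

# dB scale and min-max normalisation, as typically done before resizing the
# image to a fixed CNN input (e.g. 227 x 227 for AlexNet-style networks).
spec_db = 20 * np.log10(spec + 1e-12)
spec_img = (spec_db - spec_db.min()) / (spec_db.max() - spec_db.min())

print(spec_img.shape)              # (frequency bins, time frames)
```

The resulting array has one row per Doppler bin and one column per time frame; the spectrogram sizes quoted in Table 1 (e.g. 227 × 227 or 128 × 192) come from resizing or cropping this image to the classifier's expected input.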

Share and Cite

Figueiredo, B.; Frazão, Á.; Rouco, A.; Soares, B.; Albuquerque, D.; Pinho, P. A Review: Radar Remote-Based Gait Identification Methods and Techniques. Remote Sens. 2025, 17, 1282. https://doi.org/10.3390/rs17071282
