Article

Human Activity Classification Based on Dual Micro-Motion Signatures Using Interferometric Radar

by Shahid Hassan, Xiangrong Wang, Saima Ishtiaq, Nasim Ullah, Alsharef Mohammad and Abdulfattah Noorwali
1 School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
2 Department of Electrical Engineering, College of Engineering, Taif University, Al-Hawiyah, Taif 21974, Saudi Arabia
3 Department of Electrical Engineering, Umm Al-Qura University, Makkah 21955, Saudi Arabia
* Authors to whom correspondence should be addressed.
Remote Sens. 2023, 15(7), 1752; https://doi.org/10.3390/rs15071752
Submission received: 23 January 2023 / Revised: 18 March 2023 / Accepted: 21 March 2023 / Published: 24 March 2023

Abstract:
Micro-Doppler signatures obtained from Doppler radar are widely used for human activity classification. However, when the angle between the direction of motion and the radar antenna broadside exceeds 60°, the micro-Doppler signatures generated by the radial motion of the human body weaken significantly, degrading the performance of the classification algorithm. To classify different human activities accurately irrespective of trajectory, we propose a new algorithm based on dual micro-motion signatures, namely, micro-Doppler and interferometric micro-motion signatures, obtained using an interferometric radar. First, the motion of different parts of the human body is simulated using motion capture (MOCAP) data, from which the radar echo signals are generated. Second, time-varying Doppler and interferometric spectrograms, obtained by time-frequency analysis of the output of a single Doppler receiver and of the interferometric correlator, respectively, are fed as input to a deep convolutional neural network (DCNN) for feature extraction and the training/testing process. The performance of the proposed algorithm is analyzed and compared with that of a micro-Doppler signatures-based classifier. Results show that the dual micro-motion-based DCNN classifier using an interferometric radar classifies different human activities with an accuracy of 98%, even in scenarios where Doppler signatures diminish considerably and provide insufficient information on their own. The proposed classification algorithm is further verified on a real radar test dataset of different human walking patterns, achieving a classification accuracy of approximately 90%.

1. Introduction

Human activity can be considered the time-varying movement of the whole body or of various parts of the limbs against gravity [1]. Human activity classification has gained tremendous popularity during the last two decades, with a broad spectrum of applications in security, remote sensing for smart environments, surveillance, anti-terrorism investigation, man-machine interfaces via gesture recognition, biomedical engineering, rehabilitation, and assisted living [2]. Owing to their ability to operate under all weather and lighting conditions, detect targets at long range, and penetrate through-the-wall (TTW), radar sensors generally outperform optical sensors in activity classification systems.
Humans are among the primary objects identified by radars. Micro-Doppler signatures are produced by the non-rigid motions of the human body, in addition to the Doppler signatures generated by the bulk motion of the torso [3]. Given a high signal-to-noise ratio (SNR), these signatures are discernible even in TTW applications [4]. Several algorithms in the literature classify human targets based on micro-Doppler signatures [4,5,6]. An algorithm for the detection and classification of walking humans by exploiting micro-Doppler signatures obtained from a continuous wave (CW) radar was presented in ref. [7]. Micro-Doppler signatures were exploited to discriminate among humans, vehicles, and animals in refs. [8,9]. The feasibility of classifying different human activities using micro-Doppler signatures was investigated in ref. [10]. To improve human activity classification performance by exploiting micro-Doppler distributions, a classification algorithm employing a complex-value CNN was presented in ref. [11]. Micro-Doppler radar signatures of human activity for vital sign measurement were investigated in ref. [12]. Micro-Doppler signatures-based human activity classification remains an area of extensive research worldwide [13,14,15,16].
A major disadvantage of observing a randomly moving human with a Doppler radar is the intrinsic dependence on the relative radial velocity of the human. When a human moves tangentially with respect to the radar antenna's broadside, the Doppler frequency shift caused by the radial velocity of the target reduces to zero. For aspect angles greater than 60°, the micro-Doppler signatures diminish significantly, providing insufficient data for classification [10,17,18,19]. Moreover, a Doppler radar cannot discern movements with horizontal symmetry, since they generate identical micro-Doppler signatures. Doppler radar networks and multiple-input multiple-output (MIMO) radars have been proposed to resolve this issue and enhance the performance of a micro-Doppler-based classifier by observing the target from different angles [20]. Automatic target recognition using multistatic micro-Doppler signatures of personnel was presented in ref. [21]. Multistatic micro-Doppler radar has also been used for human activity classification [22,23]. However, computational complexity, integrated hardware requirements, and high costs make Doppler radar networks and MIMO radars less feasible for practical systems [19].
Detecting and classifying moving humans irrespective of their trajectories with respect to the radar is critical, and can be achieved by measuring the angular velocity along with the radial velocity. An interferometric radar, which consists of two nominally identical receiving antennas separated by a geometric baseline, can provide both radial and angular velocity measurements. For a human passing through the interferometric beam pattern, an oscillation is induced in the output of the correlator, and the frequency of this oscillation is directly related to the angular velocity of the target [4,24,25]. Since the radial and angular velocities provide complementary measurements, the dual micro-motion signatures generated by the 2-D (radial and angular) velocities of targets can be jointly utilized to classify different human activities irrespective of trajectory [19]. Radial and transversal micro-motion features obtained from an interferometric radar were used for hand gesture recognition in ref. [26]. Similarities between the two types of micro-motion features allow micro-Doppler classification algorithms to be applied to interferometric micro-motion classification. Implementing a dual micro-motion signatures-based classification algorithm therefore enhances performance in non-cooperative human motion scenarios [27].
This paper proposes a new human activity classification algorithm based on dual micro-motion signatures obtained from a CW interferometric radar, as delineated in Figure 1. First, the human body model is generated using motion capture data, the instantaneous range of different body parts relative to the observing radar is computed, and the radar returns for seven different human activities are calculated. The activities considered in this work are walking, running, jumping, punching, bending, climbing, and sitting. Second, human motion close to the supporting surface, that is, crawling, creeping, and swimming, is considered for radar echo generation. Third, to analyze the micro-motion characteristics, the short-time Fourier transform (STFT) is applied to the data received by a single channel and to the interferometric output to obtain the Doppler and interferometric spectrograms, respectively. These time-varying spectrograms are then fed to the DCNN architecture as input for the training and testing process. Finally, the performance of the proposed classification algorithm in terms of accuracy is discussed.
The major contributions of this paper are listed below.
  • Propose a human activity classification method based on micro-Doppler and interferometric micro-motion signatures using a DCNN classifier.
  • Demonstrate the performance analysis and comparison of the proposed algorithm with micro-Doppler signatures-based classifiers for motion capture (MOCAP) simulated data for seven human activity classes.
  • Apply real measurement data for different walking activities of humans to the proposed classification method to prove the effectiveness of the algorithm.
This paper is organized as follows. Section 2 describes the human activity classes included in this research. Section 3 introduces the radar return simulation of human activities using the MOCAP database. Section 4 presents the interferometric data processing technique and the analysis of human micro-motion features. The DCNN architecture, parameter selection, and training process are explained in Section 5. Sections 6 and 7 present results on MOCAP-simulated data and real measurement data, respectively, verifying the usefulness of the proposed algorithm. Lastly, Section 8 presents the conclusion.

2. Human Activity Classes

During any human activity, the motion of different parts of the human body generates oscillations in the radar echo signal. Different human activities modulate the radar echo differently, thereby generating distinct micro-motion signatures. The seven human activities considered in this research are (i) walking, (ii) running, (iii) jumping, (iv) punching, (v) bending, (vi) climbing, and (vii) sitting/standing. These activities are illustrated in Figure 2, and a description of each is presented in Table 1.

3. Radar Return Simulation of Human Activities

To investigate the micro-motion features of different human activities, it is essential to model the structure of the human body and to generate the echo signals for different activities. A biomechanical model of the human body presented by Boulic approximates different parts of the body using ellipsoids of different radar cross-sections (RCS) [28]. However, Boulic's model deviates slightly from the actual motion of the human body and can represent only one type of human activity. The actual motion of a non-cooperative human is random and difficult to express exactly using mathematical formulas. Another approach to model the motion of various parts of the human body during an activity is to use a MOCAP dataset. In the MOCAP approach, the instantaneous ranges of different parts of the human body relative to the observing radar are calculated from real measured motion trajectories and then used to generate the radar received signal. The human body is treated as a hierarchical structure of 28 connected nodes, with all parts defined by line segments. The coordinated motion of the whole human body is thereby simplified to a set of human skeletal motions [11].
MOCAP animation data consist of two parts. The first part, named "skeleton" in BioVision's BVH data format (stored with an ".asf" extension in Acclaim's ASF/AMC format), describes the initial posture of the human body. It provides the bone structure, along with the names, orientations, and lengths of the bones, the body's center of gravity, and the number of degrees of freedom (DOF) of the root joint with respect to the initial posture. The second part, named "motion" in the BVH format (stored with an ".amc" extension in the ASF/AMC format), provides the animation motion data of the human body, described in terms of rotation Euler angles of each node relative to its parent node [29].
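To make the hierarchical skeleton description concrete, the following is a minimal sketch of how world-frame joint positions can be recovered from such data by forward kinematics. It is a sketch under stated assumptions, not an ASF/AMC parser: the rotation order (Z-X-Y), the parent-before-child node ordering, and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def joint_world_positions(parents, offsets, eulers, root_pos):
    """Forward kinematics over a skeletal hierarchy for one frame.

    parents[k] : index of the parent of node k (-1 for the root)
    offsets[k] : bone offset of node k in its parent's frame, metres
    eulers[k]  : Euler rotation of node k relative to its parent, degrees
                 (Z-X-Y order assumed here)
    root_pos   : global position of the root node for this frame
    """
    n = len(parents)
    pos = np.zeros((n, 3))
    rot = [None] * n
    for k in range(n):  # assumes parents are listed before their children
        r_local = R.from_euler("ZXY", eulers[k], degrees=True)
        if parents[k] < 0:
            rot[k] = r_local               # root orientation
            pos[k] = root_pos
        else:
            p = parents[k]
            rot[k] = rot[p] * r_local      # compose rotations down the chain
            pos[k] = pos[p] + rot[p].apply(offsets[k])
    return pos  # (n_nodes, 3) global coordinates
```

Running this per animation frame yields the trajectory of every body part, from which the instantaneous range to the radar follows directly.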
Since MOCAP data provide the coordinates of the root node relative to the global frame for each animation frame, the coordinates of each human body part can be computed to obtain its instantaneous range relative to the radar. To simulate the radar echo signal, the two receiving antennas of the interferometric radar are placed in a 3D coordinate frame at (4, 0, 1) and (4, 1, 1), giving a baseline length of D = 1 m. The carrier frequency of the CW radar is 12 GHz.
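Given the instantaneous ranges, a CW echo can be synthesized per receiver as a superposition of returns from the body parts. The sketch below assumes idealized point scatterers with fixed RCS weights and a transmitter co-located with the receivers; detailed amplitude modelling (ellipsoid RCS, aspect dependence) is deliberately omitted.

```python
import numpy as np

FC = 12e9                          # carrier frequency, Hz (Section 3)
C = 3e8                            # speed of light, m/s
LAM = C / FC                       # wavelength, ~2.5 cm
RX1 = np.array([4.0, 0.0, 1.0])    # receiver positions; baseline D = 1 m
RX2 = np.array([4.0, 1.0, 1.0])

def cw_echo(part_pos, part_rcs, rx):
    """Sum of round-trip phase terms over all body parts.

    part_pos : (n_frames, n_parts, 3) global part positions from MOCAP
    part_rcs : (n_parts,) RCS weight per part (assumed values)
    rx       : (3,) receiver position
    """
    ranges = np.linalg.norm(part_pos - rx, axis=-1)        # (n_frames, n_parts)
    returns = np.sqrt(part_rcs) * np.exp(-1j * 4 * np.pi * ranges / LAM)
    return returns.sum(axis=-1)                            # complex echo per frame

# s_r1 = cw_echo(part_pos, part_rcs, RX1)
# s_r2 = cw_echo(part_pos, part_rcs, RX2)
```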

4. Data Processing

4.1. Interferometric Radar Processing

For an arbitrarily moving object, the instantaneous linear velocity is a vector whose magnitude represents the rate of change of position and which points along the tangent to the path. The linear velocity $v$ consists of two orthogonal components with respect to the observing radar, that is, the radial velocity and the cross-radial velocity [19]. The radial velocity, $v_r = v \cos\alpha$, points along the radar line-of-sight (LoS), whereas the cross-radial velocity, $v_{cr} = v \sin\alpha$, is at a right angle to the radar LoS. Here, $\alpha$ represents the angle between the linear velocity vector $v$ and the radar LoS [19].
The interferometric radar can measure the 2D velocity of targets moving in the radar field of view (FoV). It comprises two nominally identical receiving antennas placed at a geometric separation $D$, as shown in Figure 3. The radial velocity can be measured using receiving antenna Rx1. For a target in relative motion with respect to the radar, a Doppler frequency shift is induced in the radar echo signal. The relation between the radial velocity, $v_r$, and the Doppler frequency shift, $f_d$, is defined by [19],
$$v_r = \frac{f_d \lambda}{2} = \frac{c f_d}{2 f_c},$$
where $f_c$ denotes the carrier frequency, $\lambda$ represents the wavelength, and $c$ is the speed of light. The angular velocity of the target is measured using the interferometric output: the angular motion of the target generates an oscillation in the interferometric output whose frequency is directly related to the angular velocity of the target.
For a CW radar with carrier frequency $f_c$ observing a monochromatic point source in the far field, the transmitted signal is represented by
$$S_T(t) = \exp\{j 2\pi f_c t\}.$$
The signal returned from the moving object is delayed relative to the transmitted signal, depending on the relative range of the target. The signal received by the first antenna Rx1 can be expressed as [19]:
$$S_{R1}(t) = \exp\{j 2\pi f_c (t - \tau)\},$$
where $\tau = 2R/c$ and $R$ is the range of the target relative to the observing radar. Taking the signal received by the first antenna as a reference, the signal received by the second antenna Rx2 is further delayed by $\tau_0$ due to the geometric separation between the two receiving antennas. The signal received by the second antenna can be expressed as [19]:
$$S_{R2}(t) = \exp\{j 2\pi f_c (t - \tau - \tau_0)\},$$
where
$$\tau_0 = \frac{D \sin\varphi}{c}.$$
According to the interferometric radar principle, the two received signals are correlated to generate the interferometric response $y_c(t)$:
$$y_c(t) = S_{R1}(t)\, S_{R2}^{*}(t) = \exp\{j 2\pi f_c \tau_0\}.$$
The time derivative of the phase in Equation (6) provides the interferometric frequency shift caused by the angular velocity of the target [19].
$$f_a = f_c \frac{\partial \tau_0}{\partial t} = \frac{D \cos\varphi}{\lambda}\,\omega \approx \frac{D}{\lambda}\,\omega.$$
Hence, the relation between the angular velocity and the interferometric frequency shift can be written as
$$\omega = \frac{f_a \lambda}{D}.$$
The cross-radial velocity is responsible for the angular displacement of the target with respect to the radar. The angular velocity, $\omega$, represents the rate of change of the angle-of-arrival (AOA), $\varphi$, in rad/s. The angular velocity and the cross-radial velocity are related by [19],
$$\omega = \frac{\partial \varphi}{\partial t} = \frac{v_{cr}}{R}.$$
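As a numerical illustration consistent with the simulation geometry of Section 3 (the target range and velocity values are assumed): a target at range $R = 4$ m moving with cross-radial velocity $v_{cr} = 2$ m/s has an angular velocity $\omega = v_{cr}/R = 0.5$ rad/s; with $D = 1$ m and $\lambda = 2.5$ cm at 12 GHz, Equation (7) then gives an interferometric frequency shift of $f_a \approx D\omega/\lambda = 20$ Hz.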
The terms angular velocity $\omega$ and cross-radial velocity $v_{cr}$ are used interchangeably hereafter, unless otherwise stated. A dual-mode CW interferometric radar is used to obtain the micro-Doppler and interferometric micro-motion signatures of the human body, generated by the radial and angular velocities, respectively, of the different parts of the body performing a physical activity.
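A minimal sketch of the dual-mode processing chain following the signal model of Equations (2)-(6); the function name is ours, and the single-receiver signal is used directly for the Doppler branch.

```python
import numpy as np

def interferometric_output(s_r1, s_r2):
    """Correlate the two receiver channels: y_c(t) = S_R1(t) * conj(S_R2(t)).

    The phase of y_c oscillates at the interferometric frequency
    f_a ~ (D / lambda) * omega, while s_r1 alone carries the Doppler
    shift f_d = 2 * v_r / lambda.
    """
    return s_r1 * np.conj(s_r2)

# Doppler branch:        time-frequency analysis of s_r1 (Section 4.2)
# Interferometric branch: time-frequency analysis of interferometric_output(s_r1, s_r2)
```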

4.2. Time-Frequency Analysis

Radar returns obtained from humans are generally time-varying signals. Since the traditional Fourier transform cannot show how frequency content evolves with time, it cannot be used for the analysis of human activities. Numerous time-frequency (TF) analysis methods exist in the literature, including the STFT, the wavelet transform (WT), and the Wigner–Ville distribution (WVD). The STFT, being the simplest technique, is generally used for TF analysis of radar signals. The discrete-time STFT can be defined as:
$$S(m, n) = \sum_{i=-\infty}^{+\infty} s(i)\, w(i - m)\, \exp\{-j 2\pi n i\},$$
where $s(i)$ is the radar's received signal, $w(i)$ denotes the window function, $m$ represents the time sampling points, and $n$ represents the frequency sampling points. In this work, the MOCAP dataset described in Section 3 is used for the simulations. The Doppler spectrogram and the interferometric spectrogram are obtained by applying the STFT to the received signal $S_{R1}(t)$ and to the interferometric output $y_c(t)$, respectively. The sampling frequency is $f_s = 1000$ Hz, and a 256-sample Kaiser window with 16 overlapping samples is used for the STFT. The resulting spectrogram image is $656 \times 875$ pixels. Each RGB image is first converted to gray-scale and then resized to a $128 \times 128$ matrix.
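The spectrogram generation and pre-processing steps described above can be sketched as follows; the Kaiser window shape parameter (beta) is not stated in the text and is an assumption, as is the dB magnitude scaling.

```python
import numpy as np
from scipy.signal import stft, get_window
from scipy.ndimage import zoom

FS = 1000        # sampling frequency, Hz
NPERSEG = 256    # 256-sample Kaiser window
NOVERLAP = 16    # 16 overlapping samples

def micro_motion_spectrogram(x, beta=8.0):
    """Two-sided STFT magnitude (dB) of a complex radar signal, returned
    as a 128 x 128 grayscale matrix ready for the DCNN input."""
    win = get_window(("kaiser", beta), NPERSEG)
    _, _, S = stft(x, fs=FS, window=win, nperseg=NPERSEG,
                   noverlap=NOVERLAP, return_onesided=False)
    mag = 20 * np.log10(np.abs(np.fft.fftshift(S, axes=0)) + 1e-12)
    gray = (mag - mag.min()) / (mag.max() - mag.min())   # normalize to [0, 1]
    return zoom(gray, (128 / gray.shape[0], 128 / gray.shape[1]))  # resize
```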

4.3. Micro-Motion Characteristics of Different Human Activities

The micro-Doppler spectrograms and the interferometric spectrograms of different human activities for the left to right movement are represented in Figure 4 and Figure 5, respectively.
It is evident that different human activities generate distinct micro-motion signatures. In each activity, the bulk motion of the torso generates the strongest return signal with small oscillations, while the swinging of the upper and lower limbs generates rich micro-motion signatures with the highest peaks; the strength of the return from the limbs varies with the aspect angle of the radar. The Doppler spectrogram of a human walking parallel to the antenna array baseline in Figure 4a shows only a small spread of frequency due to the bulk motion of the torso and the swinging motion of the limbs, providing insufficient information for feature extraction. However, the interferometric spectrogram of the same activity in Figure 5a shows a wide spread of frequency due to the motion of the torso. Moreover, the frequency peaks due to the movement of the upper and lower limbs are also higher than their counterparts in the Doppler spectrogram.
Among seven different human activities, the running activity generates the largest frequency shifts caused by a bulk motion of the torso, as shown in Figure 4b and Figure 5b. The peak frequencies representing the movement of limbs are also higher, and the period of swinging limbs is smaller for the running activity than that of the walking activity. In the forward jump represented in Figure 4c and Figure 5c, the motion of the torso and limbs can be observed. An interferometric spectrogram of forward jumping activity has a wider frequency spectrum, providing distinct information about the motion of the upper and lower limbs as compared to the Doppler spectrogram.
The spectrograms for the boxing activity of punching shown in Figure 4d and Figure 5d represent a very slight movement of the torso, with a thin spread of frequency, and the periodic motion of the arms, with positive and negative frequency peaks. The peak frequencies generated by the rapid movement of the arms are the highest among all activities. In the activity of bending and picking up with one hand, represented in Figure 4e and Figure 5e, the movement of the torso, upper legs, and arms can clearly be observed. The highest peak frequencies represent the movement of the right arm, and the lower peak frequencies represent the movement of the upper and lower legs.
The micro-motion signatures of the climbing activity in Figure 4f and Figure 5f show the ascending motion of the torso and limbs during the first 6 s and the descending motion, with opposite frequency sign, during the next 6 s; the peak frequencies represent the movement of the upper and lower limbs. The spectrograms of the last activity, sitting/standing, in Figure 4g and Figure 5g show high-frequency peaks depicting the transition between the standing and sitting postures, caused specifically by the motion of the torso and upper legs. The portions with strong torso returns but no high-frequency peaks represent the still state after the transition, with only slight movements of the limbs. Since all activities are performed at an aspect angle of 90°, the interferometric micro-motion signatures show a wide spread of interferometric frequencies and provide more distinct features than the micro-Doppler signatures.
Micro-Doppler signatures of a slow walk from left to right (L-R) and from right to left (R-L) at an aspect angle of 90° are shown in Figure 6a,b, respectively. Since these two micro-Doppler spectrograms are horizontally symmetric due to their similar radial velocities relative to the radar, distinguishing between the two activities based on the micro-Doppler spectrogram alone is difficult. However, the interferometric micro-motion signatures of the two activities in Figure 6c,d provide distinct features, evident from the strength of the torso return and the locations of the frequency peaks.
Similarly, the micro-Doppler spectrograms of a normal walk from L-R and R-L in Figure 7a,b show horizontally symmetric signatures, whereas their interferometric spectrograms in Figure 7c,d present different interferometric micro-motion signatures due to the opposite angular velocities. Owing to the distinct features of horizontally symmetric activities and the increased frequency-domain resolution of the interferometric micro-motion signatures, human activity classification using dual micro-motion signatures yields more accurate results.

5. Activity Classification Algorithm

Conventional human activity classification algorithms involve three steps. The first step is extracting features of human activity from the time-frequency spectrograms. The second step divides the extracted features into a training dataset and a testing dataset and sends the feature samples from the training dataset to a classifier for the learning process. The third step sends the feature samples from the testing dataset to the trained classifier to indicate the respective classes [11]. Classical machine learning (ML) algorithms used for human activity recognition and classification include the hidden Markov model (HMM) [30], Bayesian algorithms [31], the K-nearest neighbor (KNN) algorithm [32], the random forest algorithm [33], the support vector machine (SVM) [10], and so forth, all of which have low computational complexity. However, imperfections in these algorithms result in a lack of robustness and generalization. Since the features are extracted heuristically, traditional classification algorithms are application-specific, and manual feature extraction degrades classification performance. Additionally, traditional ML classifiers trained with small datasets cannot deal with real-time data streams. In contrast, deep learning techniques extract deep features automatically through hierarchical structures. Moreover, progress in parallel processing has made deep learning classification algorithms suitable for various applications with larger datasets [19,34].
Recently, deep CNN algorithms have commonly been used in human activity recognition and classification [11], speech recognition [35], optical image recognition [36], natural language processing [37], and so forth. CNN-based human activity classification algorithms enhance the accuracy level of classification when compared with conventional classifiers [38,39]. Therefore, in the current work, we have used the deep CNN classifier for human activity classification based on micro-motion signatures obtained from the interferometric radar.

5.1. Deep Convolutional Neural Network (DCNN)

The design and operation of a DCNN are inspired by biological neural networks. It stacks multiple layers of a simple neural network architecture to extract hierarchical abstraction and generalization from a dataset. A DCNN is a machine learning algorithm that attempts to learn a mapping between input data points and the corresponding labels provided by human annotators [40]. The RGB images of dual micro-motion signatures obtained from the interferometric radar are converted to grayscale and resized to reduce the computational burden on the processing unit; this step, known as pre-processing, is shown in Figure 8. The resulting two-dimensional (2D) grayscale spectrogram images are fed as input to the DCNN classifier. The DCNN model consists of four convolutional layers (C1, C2, C3, C4), four pooling layers (P1, P2, P3, P4), and one fully connected layer (FC), chosen to prevent overfitting while ensuring both accuracy and training speed.
The convolutional layer consists of multiple learnable convolution filters that operate in parallel on small local receptive fields of the input data in a sliding-window manner. The convolution filter operation can be expressed as:
$$y_{ij} = (I * w)_{ij} = \sum_{m=0}^{l_1 - 1} \sum_{n=0}^{l_2 - 1} w_{m,n} \cdot I_{i+m,\, j+n},$$
where $I$ represents the input image of size $i \times j$, $w$ represents the filter matrix of size $l_1 \times l_2$, and $y$ is the output image of the same size as the input image. A nonlinear activation function, the rectified linear unit (ReLU), is applied to the filter output to enable a nonlinear transformation of the data space, making discrimination among classes easier. The pooling layer performs downsampling to reduce the data size; through pooling, the final prediction becomes robust to translations of the input data. The pooling operation used here is maximum pooling. The combination of a convolutional layer, an activation function, and pooling is considered one layer of the DCNN, and a modern DCNN employs multiple such layers [40]. The fully connected layer integrates the 2D feature maps extracted by the previous layers into a 1D feature vector; it consists of neurons fully connected to the preceding layer.
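A direct NumPy rendering of Equation (11) together with the ReLU and 2 × 2 maximum pooling just described, as a minimal sketch. Note that this "valid" sliding-window form shrinks the output slightly, whereas the text assumes an output of the same size as the input, which would require zero-padding the borders.

```python
import numpy as np

def conv2d(I, w):
    """y_ij = sum_{m,n} w[m, n] * I[i+m, j+n]  (Equation (11), valid region)."""
    l1, l2 = w.shape
    H, W = I.shape
    y = np.empty((H - l1 + 1, W - l2 + 1))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            y[i, j] = np.sum(w * I[i:i + l1, j:j + l2])
    return y

def relu(x):
    return np.maximum(x, 0.0)          # nonlinear activation

def max_pool_2x2(x):
    """Non-overlapping 2 x 2 maximum pooling (trailing odd row/column dropped)."""
    H, W = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:H, :W].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))
```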

5.2. DCNN Parameter Selection

TensorFlow, an open-source library for deep learning and artificial intelligence, is used to implement the DCNN architecture. The flexibility of the library and the support provided by the TensorFlow community make it a suitable choice of coding environment [41]. The parameter selection for the DCNN architecture is shown in Table 2.
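A sketch of the activity-class network of Table 2, written with the Keras API of TensorFlow. With default "valid" convolutions on 128 × 128 inputs, the flattened feature vector has 6 × 6 × 512 = 18,432 elements, matching the neuron count in Table 2 (the same arithmetic with three layers reproduces the 25,088 neurons of the walking-pattern network); the softmax output layer and the per-layer activation choices are assumptions where the text does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dcnn(num_classes=7):
    return models.Sequential([
        layers.Input(shape=(128, 128, 1)),             # grayscale spectrogram
        layers.Conv2D(32, (5, 5), activation="relu"),  # C1: 32 filters, 5 x 5
        layers.MaxPooling2D((2, 2)),                   # P1: max, 2 x 2
        layers.Conv2D(64, (3, 3), activation="relu"),  # C2: 64 filters, 3 x 3
        layers.MaxPooling2D((2, 2)),                   # P2
        layers.Conv2D(128, (3, 3), activation="relu"), # C3: 128 filters, 3 x 3
        layers.MaxPooling2D((2, 2)),                   # P3
        layers.Conv2D(512, (3, 3), activation="relu"), # C4: 512 filters, 3 x 3
        layers.MaxPooling2D((2, 2)),                   # P4
        layers.Flatten(),                              # 6 * 6 * 512 = 18,432
        layers.Dense(num_classes, activation="softmax"),  # FC output layer
    ])
```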

5.3. DCNN Training Procedure

DCNN training is an optimization procedure that adjusts the CNN weights and biases with the objective of minimizing the mean squared error (MSE). A single iteration of the DCNN learning process comprises the following steps:
  • Input a mini-batch of n training samples to the CNN architecture.
  • Compute the CNN predicted output using the feed-forward technique by passing the input completely through the CNN architecture.
  • Compute the gradients of the error function with respect to the trainable parameters. The gradient information $\partial E / \partial \mathbf{w}$, obtained via the backpropagation algorithm, is defined as [42],
    $$\frac{\partial E}{\partial \mathbf{w}} = \left[ \frac{\partial E}{\partial w_c},\ \frac{\partial E}{\partial b_c},\ \frac{\partial E}{\partial w_f},\ \frac{\partial E}{\partial b_f} \right],$$
    where the trainable parameters $w_c$, $w_f$ and $b_c$, $b_f$ represent the filter weights and biases of the convolutional and fully connected layers, respectively.
  • Update the filter weights and biases based on the ADAM optimization algorithm [42].
For $m$ training samples, the DCNN training procedure is applied to all samples in mini-batches of $n$ samples. In total, 80% of the input spectrogram images are used for the DCNN training procedure, whereas the remaining 20% are used for testing/validation.
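A sketch of this training loop under the stated 80/20 split, MSE objective, and ADAM update rule; the batch size, epoch count, and learning rate are assumptions not given in the text.

```python
import numpy as np
import tensorflow as tf

def train_dcnn(model, images, labels, n_classes=7, batch=32, epochs=50):
    """images: (N, 128, 128, 1) grayscale spectrograms; labels: (N,) int classes."""
    y = tf.keras.utils.to_categorical(labels, n_classes)  # one-hot for MSE loss
    idx = np.random.permutation(len(images))
    split = int(0.8 * len(images))                        # 80% train / 20% test
    tr, te = idx[:split], idx[split:]
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),  # ADAM optimizer
                  loss="mse", metrics=["accuracy"])          # MSE objective (text)
    model.fit(images[tr], y[tr], batch_size=batch, epochs=epochs,
              validation_data=(images[te], y[te]))           # mini-batch updates
    return model
```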

6. Simulation Results

Based on MOCAP data, performance evaluation simulations for the proposed human activity classification algorithm are presented. The performance evaluation metrics are the classification accuracy and the confusion matrix. Measuring the classification accuracy requires using the classification model to make predictions for all examples in the test dataset, comparing these predictions with the known labels, and computing the accuracy as the ratio of correct predictions to total predictions:
$$\text{Classification Accuracy} = \frac{\text{Correct Predictions}}{\text{Total Predictions}}.$$
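Both metrics reduce to simple counting over the test set; a minimal sketch (helper names are ours):

```python
import numpy as np

def classification_accuracy(y_true, y_pred):
    """Fraction of correct predictions over the test dataset."""
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] counts samples of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm
```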
Three different network configurations for performance comparison include:
  • Micro-Doppler signatures only;
  • Interferometric micro-motion signatures only;
  • Dual micro-motion signatures.

6.1. Classification among Seven Human Activities

For classification among the seven human activities, the micro-Doppler signatures-based DCNN classification model is implemented first, achieving a classification accuracy of approximately 92%, as shown in Figure 9a.
Second, the interferometric micro-motion signatures-based DCNN is implemented, yielding a classification accuracy of approximately 94%, as shown in Figure 9b. The accuracy increases in the second configuration because the human performs the activities parallel to the radar baseline, that is, almost tangentially to the observing radar, so the micro-Doppler signatures diminish significantly and provide insufficient features for classification. In contrast, tangential movement causes maximum interferometric frequency shifts in the interferometric output, providing sufficient information for classification in the form of interferometric micro-motion signatures. The third configuration, dual micro-motion signatures, incorporates the information provided by both the micro-Doppler and the interferometric micro-motion signatures. Therefore, the human activity classification algorithm based on dual micro-motion signatures using a DCNN classifier outperforms the first two configurations, achieving a classification accuracy of 98%, as shown in Figure 9c. The classification accuracies of the three configurations are listed in Table 3.
Confusion matrices of the three configurations for classification among the seven human activities are shown in Figure 10. Since a confusion matrix compares true and predicted labels, it visualizes the efficiency of the proposed algorithm.

6.2. Classification among Four Different Walking Patterns

Four different types of walking patterns considered in this work include the normal walk, slow walk, random walk, and walk on uneven terrain. MOCAP data-based radar echoes were simulated to generate the dataset for four different walking styles.
To enhance performance, the layers and filter sizes of the DCNN architecture were adjusted. The micro-Doppler signatures-based configuration achieves a classification accuracy of 90%, as shown in Figure 11a, and the interferometric micro-motion signatures-based configuration achieves 92%, as shown in Figure 11b. The dual micro-motion signatures-based configuration outperforms both, with a classification accuracy of 95%, as shown in Figure 11c. Table 4 lists the classification accuracies for the different walking patterns, and the corresponding confusion matrices are shown in Figure 12.

7. Experimental Results

In this section, a real dataset of human walking is analyzed using the proposed algorithm. An AWR1642 mmWave FMCW radar was used for dataset generation. The AWR1642 BoosterPack evaluation board provided by Texas Instruments offers a user-friendly interface to the AWR1642 mmWave sensor and was directly connected to a microcontroller LaunchPad Development Kit; the evaluation board facilitates programming and debugging of the low-power ARM R4F controller and C67x DSP core. Six different activities of 29 individuals, each repeated over multiple rounds, were recorded to generate a dataset of 231 acquisitions. During their walk, the individuals did not follow specific patterns, making the data useful both for gait classification and for performance evaluation of various data processing techniques. The data were recorded for individuals walking with different styles along an empty, obstacle-free, 12 m long hall at the Information Engineering Department of Marche Polytechnic University. In each experiment, an individual entered the testing range, first moved away from, and then came towards the observing radar. The gait was kept natural so that the dataset reflects realistic conditions. The different human walking activities considered in this study are:
(1) Fast walk
(2) Slow walk
(3) Slow walk with hands in pockets
(4) Slow walk with swinging hands
(5) Walk while hiding a bottle
(6) Walk with a limp
Time-frequency analysis was performed to generate the micro-Doppler and interferometric signatures. Since the radar provides four receivers, a spectrogram was computed from the data of each receiver, yielding four micro-Doppler spectrogram images per acquisition. For the interferometric signatures, correlations between receiver pairs 1-2, 2-3, and 3-4 were computed before taking the spectrograms, yielding three interferometric spectrogram images. The same procedure was applied to the datasets of all six activities, as sketched below. It was observed that fast and slow walking patterns can easily be distinguished. The activity with hands in pockets presents slightly weaker micro-Doppler signatures compared to walking with swinging hands, although this effect is scarcely noticeable.
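The per-receiver and pairwise processing just described can be sketched as follows, reusing a spectrogram helper like `micro_motion_spectrogram` from Section 4.2; the function and variable names are ours.

```python
import numpy as np

def dual_signatures(rx_data, spectrogram):
    """Four micro-Doppler images plus three interferometric images per acquisition.

    rx_data : complex baseband samples, shape (4, n_samples), one row per receiver
    """
    doppler = [spectrogram(rx_data[k]) for k in range(4)]
    pairs = [(0, 1), (1, 2), (2, 3)]   # receiver pairs 1-2, 2-3, 3-4
    interf = [spectrogram(rx_data[i] * np.conj(rx_data[j])) for i, j in pairs]
    return doppler, interf
```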
Walking with a limp can easily be recognized, as its Doppler signatures differ from those of the other walking patterns. Since the human moves towards and away from the radar, the received Doppler signatures are prominent; there are also slight differences among the interferometric signatures of the different walking styles. The micro-Doppler and interferometric micro-motion signatures obtained by applying the STFT to the received dataset are presented in Figure 13 and Figure 14, respectively.
In the DCNN architecture, the layers and filter sizes were adjusted to obtain the best performance. The micro-Doppler signatures-based configuration achieves an accuracy of 83%, as shown in Figure 15a, and the interferometric micro-motion signatures-based configuration achieves 80%, as shown in Figure 15b. The dual micro-motion signatures-based configuration outperforms both, with an accuracy of 90%, as shown in Figure 15c. The classification accuracies for the different walking patterns and scenarios are listed in Table 5, and the corresponding confusion matrices are shown in Figure 16.

8. Conclusions

This work presented an algorithm for human activity classification based on dual micro-motion signatures using an interferometric radar. For aspect angles greater than 60°, micro-Doppler signatures diminish significantly, providing insufficient data for classification. Therefore, we proposed human activity classification based on both the micro-Doppler and the interferometric micro-motion signatures obtained from an interferometric radar. The motion of different parts of the human body was generated using a MOCAP dataset, which was then used to simulate the radar echo signal. Time-varying Doppler and interferometric spectrograms, obtained by time-frequency analysis of a single Doppler channel and of the interferometric output, respectively, were fed as input to the DCNN classifier. Performance evaluation simulations were presented for three system configurations. A classification accuracy of approximately 98% was achieved for classification among seven activities when dual micro-motion signatures were used to train the classifier. The proposed dual micro-motion signatures-based DCNN classifier was also verified on a real radar dataset, where the dual configuration achieved an accuracy of 90%. In both cases, the dual micro-motion signatures-based configuration outperformed the configurations based on micro-Doppler or interferometric micro-motion signatures alone. Hence, the proposed algorithm is capable of classifying different human activities in scenarios where Doppler signatures alone do not provide sufficient information.

Author Contributions

Conceptualization, S.H. and X.W.; methodology, S.H. and S.I.; software, S.H.; validation, S.H. and S.I.; formal analysis, S.H.; investigation, X.W.; resources, S.H.; data curation, S.I.; writing—original draft preparation, S.I.; writing—review and editing, N.U., A.M. and A.N.; supervision, X.W.; project administration, X.W.; funding acquisition, A.N. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number: IFP22UQU4290235DSR257.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gupta, A.; Gupta, K.; Gupta, K.; Gupta, K. A Survey on Human Activity Recognition and Classification. In Proceedings of the 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 28–30 July 2020; pp. 0915–0919.
2. Gurbuz, S.Z.; Rahman, M.M.; Kurtoglu, E.; Macks, T.; Fioranelli, F. Cross-Frequency Training with Adversarial Learning for Radar Micro-Doppler Signature Classification. Proc. SPIE 2020, 11408, 58–68.
3. Dorp, P.V.; Groen, F.C.A. Human Walking Estimation with Radar. IEE Proc. Radar Sonar Navig. 2003, 150, 356–365.
4. Nanzer, J.A. A Review of Microwave Wireless Techniques for Human Presence Detection and Classification. IEEE Trans. Microw. Theory Tech. 2017, 65, 1780–1794.
5. Setlur, P.; Amin, M.G.; Ahmad, F. Urban Target Classifications Using Time-Frequency Micro-Doppler Signatures. In Proceedings of the 2007 9th International Symposium on Signal Processing and Its Applications, Sharjah, United Arab Emirates, 12–15 February 2007; pp. 1–4.
6. Anderson, M.G.; Rogers, R.L. Micro-Doppler Analysis of Multiple Frequency Continuous Wave Radar Signatures. In Proceedings of the Radar Sensor Technology XI, Orlando, FL, USA, 9–13 April 2007; Volume 6547, pp. 92–101.
7. Otero, M. Application of a Continuous Wave Radar for Human Gait Recognition. In Proceedings of the Signal Processing, Sensor Fusion, and Target Recognition XIV, Orlando, FL, USA, 28 March–1 April 2005; Volume 5809, pp. 538–549.
8. Bilik, I.; Tabrikian, J.; Cohen, A. GMM-Based Target Classification for Ground Surveillance Doppler Radar. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 267–278.
9. Smith, G.E.; Woodbridge, K.; Baker, C.J. Naive Bayesian Radar Micro-Doppler Recognition. In Proceedings of the 2008 International Conference on Radar, Adelaide, Australia, 2–5 September 2008; pp. 111–116.
10. Kim, Y.; Ling, H. Human Activity Classification Based on Micro-Doppler Signatures Using a Support Vector Machine. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1328–1337.
11. Yao, X.; Shi, X.; Zhou, F. Human Activity Classification Based on Complex-Value Convolutional Neural Network. IEEE Sens. J. 2020, 20, 7169–7180.
12. Moulton, M.C.; Bischoff, M.L.; Benton, C.; Petkie, D.T. Micro-Doppler Radar Signatures of Human Activity. In Proceedings of the Millimetre Wave and Terahertz Sensors and Technology III, Toulouse, France, 20–23 September 2010; Volume 7837.
13. Tekeli, B.; Gurbuz, S.Z.; Yuksel, M.; Gürbüz, A.C.; Guldogan, M.B. Classification of Human Micro-Doppler in a Radar Network. In Proceedings of the 2013 IEEE Radar Conference (RadarCon13), Ottawa, ON, Canada, 29 April–3 May 2013; pp. 1–6.
14. Cornacchia, M.; Ozcan, K.; Zheng, Y.; Velipasalar, S. A Survey on Activity Detection and Classification Using Wearable Sensors. IEEE Sens. J. 2017, 17, 386–403.
15. Tahmoush, D. Review of Micro-Doppler Signatures. IET Radar Sonar Navig. 2015, 9, 1140–1146.
16. Du, R.; Fan, Y.; Wang, J. Pedestrian and Bicyclist Identification Through Micro-Doppler Signature with Different Approaching Aspect Angles. IEEE Sens. J. 2018, 18, 3827–3835.
17. Tahmoush, D.; Silvious, J. Angle, Elevation, PRF, and Illumination in Radar Micro-Doppler for Security Applications. In Proceedings of the 2009 IEEE Antennas and Propagation Society International Symposium, North Charleston, SC, USA, 1–5 June 2009; pp. 1–4.
18. Anderson, M.G. Design of Multiple Frequency Continuous Wave Radar Hardware and Micro-Doppler Based Detection and Classification Algorithms. Ph.D. Thesis, The University of Texas, Austin, TX, USA, 2008.
19. Hassan, S.; Wang, X.; Ishtiaq, S. Human Gait Classification Based on Convolutional Neural Network Using Interferometric Radar. In Proceedings of the 2021 International Conference on Control, Automation and Information Sciences (ICCAIS), Xi'an, China, 14–17 October 2021; pp. 450–456.
20. Özcan, M.B.; Gürbüz, S.Z.; Persico, A.R.; Clemente, C.; Soraghan, J. Performance Analysis of Co-Located and Distributed MIMO Radar for Micro-Doppler Classification. In Proceedings of the 2016 European Radar Conference (EuRAD), London, UK, 5–7 October 2016; pp. 85–88.
21. Smith, G.E.; Woodbridge, K.; Baker, C.J. Multistatic Micro-Doppler Signature of Personnel. In Proceedings of the 2008 IEEE Radar Conference, Rome, Italy, 26–30 May 2008.
22. Fairchild, D.P.; Narayanan, R.M. Multistatic Micro-Doppler Radar for Determining Target Orientation and Activity Classification. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 512–521.
23. Qiao, X.; Li, G.; Shan, T.; Tao, R. Human Activity Classification Based on Moving Orientation Determining Using Multistatic Micro-Doppler Radar Signals. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5104415.
24. Nanzer, J.A. Millimeter-Wave Interferometric Angular Velocity Detection. IEEE Trans. Microw. Theory Tech. 2010, 58, 4128–4136.
25. Ishtiaq, S.; Wang, X.; Hassan, S. Detection and Tracking of Multiple Targets Using Dual-Frequency Interferometric Radar. In Proceedings of the IET International Radar Conference (IET IRC 2020), Online, 4–6 November 2020; pp. 468–475.
26. Wang, X.; Li, W.; Chen, V.C. Hand Gesture Recognition Using Radial and Transversal Dual Micro-Motion Features. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 5963–5973.
27. Nanzer, J.A. Micro-Motion Signatures in Radar Angular Velocity Measurements. In Proceedings of the 2016 IEEE Radar Conference (RadarConf), Philadelphia, PA, USA, 2–6 May 2016; pp. 1–4.
28. Boulic, R.; Thalmann, N.M.; Thalmann, D. A Global Human Walking Model with Real-Time Kinematic Personification. Vis. Comput. 1990, 6, 344–358.
29. Ram, S.S.; Ling, H. Simulation of Human Micro-Doppler Using Computer Animation Data. In Proceedings of the IEEE Radar Conference, Rome, Italy, 26–30 May 2008; pp. 1–6.
30. Kolekar, M.H.; Dash, P.D. Hidden Markov Model Based Human Activity Recognition Using Shape and Optical Flow Based Features. In Proceedings of the 2016 IEEE Region 10 Conference (TENCON), Singapore, 22–25 November 2016; pp. 393–397.
31. Singh, V.K.; Nevatia, R. Human Action Recognition Using a Dynamic Bayesian Action Network with 2D Part Models. In Proceedings of the Seventh Indian Conference on Computer Vision, Graphics and Image Processing, Chennai, India, 22–25 November 2010; pp. 17–24.
32. Milanova, M.; Al-Ali, S.; Manolova, A. Human Action Recognition Using Combined Contour-Based and Silhouette-Based Features and Employing KNN or SVM Classifier. Int. J. Comput. 2015, 9, 37–47.
33. Smith, K.A.; Csech, C.; Murdoch, D.; Shaker, G. Gesture Recognition Using mm-Wave Sensor for Human-Car Interface. IEEE Sens. Lett. 2018, 2, 1–4.
34. Li, X.; He, Y.; Jing, X. A Survey of Deep Learning-Based Human Activity Recognition in Radar. Remote Sens. 2019, 11, 1068.
35. Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.R.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.N.; et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Process. Mag. 2012, 29, 82–97.
36. Ji, S.; Xu, W.; Yang, M.; Yu, K. 3D Convolutional Neural Networks for Human Action Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 221–231.
37. Deng, L.; Yu, D. Deep Learning: Methods and Applications. Found. Trends Signal Process. 2014, 7, 197–387.
38. Amin, M.G.; Erol, B. Understanding Deep Neural Networks Performance for Radar-Based Human Motion Recognition. In Proceedings of the 2018 IEEE Radar Conference (RadarConf18), Oklahoma City, OK, USA, 23–27 April 2018; pp. 1461–1465.
39. Lin, Y.; Le, K.J.; Yang, S.; Fioranelli, F.; Romain, O.; Zhao, Z. Human Activity Classification with Radar: Optimization and Noise Robustness with Iterative Convolutional Neural Networks Followed with Random Forests. IEEE Sens. J. 2018, 18, 9669–9681.
40. Kim, Y.; Moon, T. Human Detection and Activity Classification Based on Micro-Doppler Signatures Using Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 8–12.
41. Yilmaz, E.; German, B. A Deep Learning Approach to an Airfoil Inverse Design Problem. In Proceedings of the 2018 Multidisciplinary Analysis and Optimization Conference, Atlanta, GA, USA, 25–29 June 2018.
42. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Representations by Back-Propagating Errors. Nature 1986, 323, 533–536.
Figure 1. The proposed human activity classification method based on dual micro-motion signatures.
Figure 2. Illustration of different human activities.
Figure 3. The interferometric radar schematic diagram.
Figure 4. Micro-Doppler spectrogram (left to right).
Figure 5. Interferometric spectrogram (left to right).
Figure 6. Dual micro-motion signatures of slow walk.
Figure 7. Dual micro-motion signatures of normal walk.
Figure 8. Image pre-processing.
Figure 9. Classification accuracy among seven different human activities. (a) Micro-Doppler signatures-based classification accuracy; (b) interferometric signatures-based classification accuracy; (c) dual micro-motion signatures-based classification accuracy.
Figure 10. Confusion matrix for seven different human activities. (a) Confusion matrix for micro-Doppler signatures; (b) confusion matrix for interferometric signatures; (c) confusion matrix for dual micro-motion signatures.
Figure 11. Classification accuracy among four different walking patterns. (a) Micro-Doppler signatures-based classification accuracy; (b) interferometric signatures-based classification accuracy; (c) dual micro-motion signatures-based classification accuracy.
Figure 12. Confusion matrix for four different walking patterns. (a) Confusion matrix for micro-Doppler signatures; (b) confusion matrix for interferometric signatures; (c) confusion matrix for dual micro-motion signatures.
Figure 13. Doppler spectrogram (left to right).
Figure 14. Interferometric spectrogram (left to right).
Figure 15. Classification accuracy among six different walking patterns. (a) Micro-Doppler signatures-based classification accuracy; (b) interferometric signatures-based classification accuracy; (c) dual micro-motion signatures-based classification accuracy.
Figure 16. Confusion matrix for six different walking patterns. (a) Confusion matrix for micro-Doppler signatures; (b) confusion matrix for interferometric signatures; (c) confusion matrix for dual micro-motion signatures.
Table 1. Seven human activities.

Activity | Description
Walking | The action of walking in a forward direction with both upper and lower limbs moving. Includes: (i) normal walk; (ii) slow walk; (iii) walk on uneven terrain; (iv) wander (random walk).
Running | The action of running swiftly in a forward direction with both upper and lower limbs moving. Includes: (i) normal run; (ii) jog.
Jumping | The action of springing free from the ground into the air using the lower limbs. Includes: (i) simple jump; (ii) high jump; (iii) forward jump.
Punching | The action of striking or hitting with the fists. Includes: (i) simple punch; (ii) boxing.
Bending | The action of assuming an angular or curved shape, that is, bending over and picking up with one hand.
Climbing | The action of ascending and then descending, that is, climbing a ladder and then moving downward.
Sitting/standing | The action of changing posture between sitting and standing positions.
Table 2. DCNN architecture and parameter selection.

Motion Type | Convolution Layers | Filters | Pooling Layers | Pooling Type | Neurons | Output of DCNN
Activity class | 4 | C1: 32 (5 × 5); C2: 64 (3 × 3); C3: 128 (3 × 3); C4: 512 (3 × 3) | 4 (P1, P2, P3, P4) | Max (2 × 2) | 18,432 | (i) walking; (ii) running; (iii) jumping; (iv) punching; (v) bending; (vi) climbing; (vii) sitting/standing
Walking patterns | 3 | C1: 32 (5 × 5); C2: 64 (3 × 3); C3: 128 (3 × 3) | 3 (P1, P2, P3) | Max (2 × 2) | 25,088 | (i) normal; (ii) slow; (iii) uneven terrain; (iv) random
Table 3. Classification accuracy level of different human activities.

Description | Configuration 1 | Configuration 2 | Configuration 3
Classification accuracy | 92% | 94% | 98%
Table 4. Classification accuracy level of different walking patterns.

Description | Configuration 1 | Configuration 2 | Configuration 3
Classification accuracy | 90% | 92% | 95%
Table 5. Classification accuracy level of different walking patterns and scenarios (real measurement data).

Description | Configuration 1 | Configuration 2 | Configuration 3
Classification accuracy | 83% | 80% | 90%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

