Article

Towards a Low-Cost Solution for Gait Analysis Using Millimeter Wave Sensor and Machine Learning

1 Department of Electrical and Computer Engineering, University of Dayton, 300 College Park, Dayton, OH 45469, USA
2 Department of Physical Therapy, University of Dayton, 300 College Park, Dayton, OH 45469, USA
* Author to whom correspondence should be addressed.
Sensors 2022, 22(15), 5470; https://doi.org/10.3390/s22155470
Submission received: 2 July 2022 / Revised: 19 July 2022 / Accepted: 21 July 2022 / Published: 22 July 2022
(This article belongs to the Section Physical Sensors)

Abstract

Human Activity Recognition (HAR) that includes gait analysis may be useful for various rehabilitation and telemonitoring applications. Current gait analysis methods, such as wearables or cameras, have privacy and operational constraints, especially when used with older adults. Millimeter-Wave (MMW) radar is a promising solution for gait applications because of its low cost, better privacy, and resilience to ambient light and climate conditions. This paper presents a novel human gait analysis method that combines the micro-Doppler spectrogram and skeletal pose estimation using MMW radar for HAR. In our approach, we used the Texas Instruments IWR6843ISK-ODS MMW radar to obtain the micro-Doppler spectrogram and point clouds for 19 human joints. We developed a multilayer Convolutional Neural Network (CNN) to recognize and classify five different gait patterns with an accuracy of 95.7 to 98.8% using MMW radar data. During training of the CNN algorithm, we used the 3D coordinates of 25 joints extracted using the Kinect V2 sensor and compared them with the point cloud data to improve the estimation. Finally, we performed a real-time simulation to observe the point cloud behavior for different activities and validated our system against the ground truth values. The proposed method demonstrates the ability to distinguish between different human activities to obtain clinically relevant gait information.

1. Introduction

The United Nations Population Fund (UNFPA) report states that the population of older adults (>60) will increase to 2 billion by 2050 [1]. This population is more prone to developing neurodegenerative diseases, becoming frail, and having falls [2]. Continuous monitoring of mobility and activity levels could provide valuable data to help mitigate the impacts of disease and disability [3]. However, it is difficult to monitor one or more individuals continuously and unobtrusively at home or in institutional settings. Fortunately, recent technological advancements have paved the way for easier remote monitoring [4,5]. In this context, remote health monitoring by leveraging Machine Learning (ML) and Artificial Intelligence (AI) methods is becoming more common [6]. Such technologies can provide the ability to monitor patients remotely, provide health reports to doctors, and detect potentially serious events, such as falls [7,8].
Important applications of remote health monitoring include continuous monitoring of gait and recognition of different walking patterns, activities, and events, such as sitting, standing, lying, and falls [9]. Currently, gait analysis requires an evaluation by a healthcare professional in a clinical setting, which can be costly and time-consuming. Remote gait analysis could help to detect changes and deviations from normal walking, which could help health professionals to assess injury, illness severity, or recovery over extended periods of time [10]. Human Activity Recognition (HAR) is the method of identifying or classifying different human activities, such as sitting, walking, running, or falling. To date, various technologies have been used for HAR, such as wearable sensors, cameras, and radars [11,12,13,14]. In general, HAR involves monitoring activity over extended periods of time; however, sensors and wearables are limited by their battery life [15]. Wearable sensors also require users to remember to use them regularly [16]. Video cameras can provide monitoring over extended periods, but their performance depends on a clear line of sight and can be impacted by lighting and climate conditions [17]. Further, video cameras raise privacy concerns for the user, making them less suitable for HAR [18].
Millimeter-Wave (MMW) radars have shown potential for HAR with high resolution using their broad bandwidth of operating frequencies from 30 to 300 GHz [19]. MMW radar can overcome the battery limitations of wearables and the privacy concerns of cameras when used as a plug-in device attached to a wall power outlet and can operate in a variety of climate and lighting conditions [20]. MMW radar generates a 3D point cloud representation of joints that can be recognized as human locomotion using AI algorithms. Currently, most HAR devices and clinical measures of gait focus on spatiotemporal parameters such as gait speed, cadence, and distance [21,22]. MMW could provide additional information about the quality of gait that is otherwise difficult to measure in community or free-living environments. Based on the potential benefits of MMW, this paper aims to present its use for human gait analysis using ML algorithms. The novel contributions of this paper include:
  • The use of MMW radar and ML to recognize and classify different abnormal gait patterns commonly associated with frailty and disability, including walking with a limp, walking with a stooped posture, and walking with an assistive device.
  • The use of a low-cost MMW radar prototype with competitive object localization and detection accuracy that can be used in both community and clinical settings.
  • The use of micro-Doppler signatures and skeleton pose estimation techniques to achieve a gait pattern classification accuracy of 95.7 to 98.8%.
The rest of the paper is organized as follows: Section 2 provides the background and literature review. Section 3 explains the proposed MMW system, and the experimental setup is presented in Section 4. Section 5 presents the results and discussion. Finally, Section 6 concludes the work.

2. Background and Literature Review

Gait analysis examines the different postures of a human body between consecutive foot strikes of the same foot (gait cycle). Motion analysis laboratories use infrared motion capture cameras and force plates, which are the current gold standard for gait analysis; however, they are impractical for long-term monitoring in community or general clinical settings because of their cost and scale. While laboratory gait analysis can accurately measure various parameters, such as joint angles, muscle force, step/stride length, body posture, and ground reaction forces [23], it does not automatically recognize or classify gait into commonly known abnormal patterns.
To overcome some of the limitations of laboratory-based gait analysis, wearable sensors such as accelerometers, gyroscopes, and magnetometers are becoming more common for gait analysis. The advantage of wearables over other techniques is their ability to take readings in free-living environments [23]. Furthermore, wearables are small and inexpensive compared to other technologies. Zebin et al. [24] used data from five lower body inertial measurement units (IMUs) and a Convolutional Neural Network (CNN) to accurately classify activities such as walking, climbing stairs, sitting, standing, and lying. However, wearable sensors cannot currently recognize and classify different abnormal walking patterns.
Surface electromyography (sEMG) is commonly used during gait analysis and gives important information regarding muscle activity [25]. Rescio et al. [26] showed that sEMG improves pre-fall detection accuracy compared with wearable IMUs. However, even slightly incorrect placement of sEMG electrodes drastically reduces the detection accuracy. These challenges make sEMG impractical for long-term community-based gait analysis.
Portable gait mats placed on a hard floor have been used for gait analysis. These mats are equipped with pressure sensors that measure ground reaction forces. The mats are low cost, portable, and noninvasive. However, they are typically only 3–5 m in length and not applicable to community environments.
Acoustic tracking using ultrasonic pulses has been used in some studies [27,28]. However, it does not provide optimal results in the presence of noise. This technique also requires a direct line of sight [29,30].
Vision-based systems use a motion camera to capture a subject’s gait and translate it to a 3D computerized object [31,32]. Therefore, such systems are widely used to examine gait by measuring joint angles, joint locations, and mobility in 2D and 3D. Privacy is a significant concern with vision-based systems [9].
Zhao et al. [33] presented RF-Pose for human pose estimation. It estimates the human pose by creating a 2D dynamic skeleton stick figure during walking activity. However, it is challenging to predict and distinguish hand movements and joints using a 2D stick figure, especially when a subject is walking with external assistance, for example, using a cane or walker.
MMW radar has emerged as a promising technology that can complement or replace other gait analysis and HAR systems over extended periods of time. MMW radar is low-cost and noninvasive in nature and can work under a variety of operational conditions [34]. Furthermore, it offers remote monitoring without a direct line of sight. MMW radar generates point cloud data, reducing privacy concerns, and the data can be processed using AI and ML algorithms for HAR. Sengupta et al. [35] proposed a voxelization method using radar and Natural Language Processing for pose estimation; that study estimated 25 skeleton points using MMW radar. One concern is the increased computation cost due to the high dimensionality of the input data. An and Ogras [36] reduced the input data dimension by mapping the 5D time-series point cloud data from MMW radar to a lower dimension and then used a CNN to estimate human joints accurately. In addition, micro-Doppler signal components can help to reveal gait information. For example, for a walking person, the Time-Frequency Representations (TFRs) of the radar backscattering depict the micro-Doppler components due to the swinging arms and legs [37]. A normal gait generates a periodic signal in its TFR, whereas changes in gait, such as falling, generate a Doppler shift that can help with HAR.
The proposed method combines skeleton pose estimation and micro-Doppler signatures to recognize and accurately classify five different gait patterns. The following gait patterns were used for our study because they represent common abnormal patterns among older adults.
(1) Normal Gait—Walking normally with good posture and without an assistive device.
(2) Limping—Many gait deficits lead to asymmetry in movement; limping is a common gait disorder characterized by asymmetry in step/stride length and lateral trunk movement. A similar pattern can be seen in people with hemiparesis due to disorders such as stroke, Multiple Sclerosis (MS), and brain injury [38].
(3) Stooped posture—A stooped posture is common in frail persons who have difficulty overcoming the postural demands of gravity. It is also common with various neurological disorders, such as dementia and Parkinson's Disease [39].
(4) Using a walker—Many people with gait and balance disorders use walkers. The ability to detect whether someone is using a walker can help to determine whether they are complying with their prescribed use, have adopted the device independently, or were using it at the time of a fall [40]. We used a front-wheel rolling walker in this work because these are very commonly used in home and institutional settings.
(5) Using a cane—Many people with gait and balance disorders use canes. The ability to detect whether someone is using a cane can help to determine whether they are complying with their prescribed use or have adopted the device independently [40].

3. Proposed MMW Radar Gait System

The proposed MMW radar gait system is a low-cost solution for distinguishing different human activities and gait patterns. It consists of several steps. First, the system generates micro-Doppler signatures from the point cloud data produced by the MMW radar. Then, a CNN algorithm identifies different human activities using the micro-Doppler signatures as input. To train the CNN to accurately estimate the locations of human joints, we used 3D joint coordinates obtained from the Microsoft Kinect V2 sensor as ground truth. The trained model then reconstructs 19 human joints and their skeleton from the point cloud generated by the MMW radar. Figure 1 illustrates the flow diagram of the MMW radar gait system.

3.1. Detection Using Micro-Doppler Signatures

The MMW signals from the radar produce various scattering points after reflecting from a human body, and the scattering points from each body part form the point cloud specific to that part. Generally, these points have different velocities, or Doppler shifts, because of movements such as those of the legs or arms. For example, the right leg will have a velocity in the opposite direction to the left leg during a walk or run. Therefore, the Doppler shift of the point cloud differs across body parts. Hence, each activity creates a unique micro-Doppler signature, calculated using the velocity of the scattered points over time. For this study, we used the Texas Instruments (TI) IWR6843ISK-ODS MMW radar, as shown in Figure 2 [41].

TI IWR6843ISK-ODS MMW Radar

The TI IWR6843ISK-ODS MMW radar uses Frequency-Modulated Continuous-Wave (FMCW) signals to precisely measure velocity, range, and angle. It consists of three transmitting and four receiving antennas with a 120° field of view and a range of approximately 12 m, enough to cover areas of various residential and clinical settings.
The micro-Doppler signature extraction steps are illustrated in Figure 3. At the start, the IWR6843ISK-ODS MMW radar sends multiple FMCW chirps with a 60.75 GHz carrier frequency. The chirps are used to calculate different parameters, such as velocity, range, and angle. For example, velocity is calculated by finding the Doppler shift across Coherent Processing Intervals (CPIs), while beamforming using the multiple antennas enables angle estimation. In the second step, the measured velocity, range, and angle are transformed into a 3D data cube, as illustrated in Figure 3. Radar processing then consists of five steps:
  • Range and velocity estimation is performed first using the Fast Fourier Transform (FFT).
  • Moving Target Indication (MTI) removes the static clutter points (surrounding reflections) from the data.
  • Constant False Alarm Rate (CFAR) detection identifies target points within the noisy data.
  • Angle estimation is then performed using the FFT.
  • Finally, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) separates the scattered points into different categories, such as regular walking or walking with a limp [42].
The Kalman filter tracks subjects’ movements and associates relevant scattered points with different activities, thus making point clouds for each activity.
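As an illustration, the following is a minimal sketch of this clustering step using scikit-learn's DBSCAN; the eps and min_samples values are illustrative assumptions, not the parameters tuned for our radar.

```python
# A minimal sketch: grouping radar detections into per-subject point clouds
# with DBSCAN. The eps/min_samples values are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_detections(points_xyz):
    """points_xyz: (N, 3) array of detected scatter points in meters."""
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points_xyz)
    # label -1 marks noise points; every other label is one cluster/subject
    return {lab: points_xyz[labels == lab] for lab in set(labels) if lab != -1}

# Example: cluster a synthetic frame of 200 detections
clusters = cluster_detections(np.random.rand(200, 3) * 5.0)
```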
We used a sliding window to collect Doppler features over time. In the proposed method, the scattered-point intensity is compensated for range-dependent attenuation and normalized to one. These points are then fed to the CNN for training. The normalized point cloud's Doppler pattern acts as a signature that helps to classify different activities. The micro-Doppler signature of each activity differs, as shown in Figure 4.
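A minimal sketch of this feature-extraction step is given below, assuming frames of (x, y, z, Doppler, intensity) points; the number of Doppler bins is an assumption, while the Doppler limit is taken from Table 1.

```python
# A minimal sketch of building a micro-Doppler signature from point-cloud
# frames: per frame, point intensities are compensated for range attenuation,
# binned by Doppler velocity, and normalized to one.
import numpy as np

def micro_doppler_signature(frames, v_max=4.45, n_bins=64):
    """frames: list of (N_i, 5) arrays with columns (x, y, z, doppler, intensity)."""
    bins = np.linspace(-v_max, v_max, n_bins + 1)
    columns = []
    for pts in frames:
        rng = np.linalg.norm(pts[:, :3], axis=1)
        weights = pts[:, 4] * rng**2            # compensate 1/r^2 intensity falloff
        hist, _ = np.histogram(pts[:, 3], bins=bins, weights=weights)
        columns.append(hist / (hist.max() + 1e-9))   # normalize each column to one
    return np.stack(columns, axis=1)            # (n_bins, n_frames) signature
```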
The last step of this approach is to distinguish the five different activities based on the micro-Doppler signature of each activity. For that, we use a four-layer CNN, as illustrated in Figure 3, because of its high accuracy and low loss rate. We experimented with different numbers of CNN layers during our analysis, and our results show that optimal results are achieved with four convolution layers. As Figure 3 shows, each layer uses a 3 × 3 kernel, with depths of 32, 64, 128, and 256, respectively. Further, we use the Leaky Rectified Linear Unit (Leaky-ReLU) activation to mitigate the dying-ReLU effect, in which some neurons become permanently inactive [43]. After each Leaky-ReLU, a 2D max-pooling layer reduces the computational complexity by downsampling the previous layer's output while keeping the most dominant features; this is achieved by sliding a 2 × 2 window across the layer output and returning the maximum value in each stride. We also use dropout regularization between layers with a dropout probability of 5% [35], which reduces the number of active parameters at each epoch, resulting in lower computation and a higher training speed. The output of the CNN layers is flattened to a 1D vector, as depicted in Figure 3. After passing through the dense layer, the final output has k nodes for the k features/activities we aim to classify. Finally, these outputs are normalized into class/activity probabilities using the softmax function, and the class with the maximum probability is the predicted human activity. In our study, we combined these micro-Doppler (CNN) results with the joint estimation technique to obtain precise results.
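The following Keras sketch mirrors the four-layer classifier described above; the 64 × 64 single-channel input spectrogram size is an assumption for illustration, while the kernel size, layer depths, pooling, 5% dropout, and softmax output follow the text.

```python
# A sketch of the four-layer micro-Doppler CNN classifier (input size assumed).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(64, 64, 1), k=5):
    model = models.Sequential([layers.Input(shape=input_shape)])
    for depth in (32, 64, 128, 256):
        model.add(layers.Conv2D(depth, (3, 3), padding="same"))
        model.add(layers.LeakyReLU())            # mitigates the dying-ReLU effect
        model.add(layers.MaxPooling2D((2, 2)))   # downsample, keep dominant features
        model.add(layers.Dropout(0.05))          # 5% dropout regularization
    model.add(layers.Flatten())                  # CNN output to a 1D vector
    model.add(layers.Dense(k, activation="softmax"))  # one probability per activity
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```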

3.2. Skeletal Pose Estimation Technique

The skeletal pose estimation technique combines the 5D time-series point cloud data from the MMW radar with the 3D joint coordinate data from the Kinect V2 sensor. Kinect provides 25 joint coordinates for training, and the CNN combines these training data with the time-series data to accurately predict the locations of 19 human joints, as shown in Figure 5. The flow diagram of this method is illustrated in Figure 6.
The TI MMW radar extracts the 5D time-series data as frames, including the (x, y, z) coordinates, reflection intensity, and Doppler shift, from the reflected chirp signals. However, the reflected chirp signals arrive at the radar irregularly due to unexpected body movements or delays. As a result, the frames do not have consistent point locations. To achieve better prediction accuracy with the CNN, the frames should have consistent data locations and shapes. Therefore, we performed data preprocessing using matrix transformation and sorting algorithms. The transformation also reshapes the data in the matrix, making it an ideal input for the CNN. Furthermore, it accommodates the outliers caused by scattering in the training process, because such phenomena are inevitable in real applications.
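A minimal sketch of this regularization step is shown below; sorting by reflection intensity and the fixed frame size of 64 points are assumptions for illustration.

```python
# A sketch of frame regularization: variable-length 5D point sets are sorted
# and padded/truncated to a fixed shape before entering the CNN.
import numpy as np

def regularize_frame(points, n_points=64):
    """points: (N, 5) array of (x, y, z, doppler, intensity)."""
    order = np.argsort(points[:, 4])[::-1]    # strongest reflections first
    points = points[order][:n_points]         # truncate long frames
    if len(points) < n_points:                # zero-pad short frames
        pad = np.zeros((n_points - len(points), 5))
        points = np.vstack([points, pad])
    return points                              # consistent (n_points, 5) shape
```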
The second step involves converting these frames to 19 joint positions using the CNN, as shown in Figure 5. The proposed CNN algorithm uses a 5-channel feature map as the input. It consists of two convolution layers, one flattening layer, and two fully connected layers. Next, multiple Batch Normalization (BN) layers are used after the convolution layer and between the fully connected layers to avoid major data distribution changes.
The input data obtained by the transformation matrix are passed through convolution layers with 16 and 32 channels, respectively. Each layer is followed by a dropout layer (with probabilities of 0.3 and 0.4) to avoid high dependency on a particular neuron. The output of these layers is passed through a flattening layer, which creates an input vector for the fully connected layers. The first fully connected layer consists of 512 neurons, while the last layer consists of 57 neurons representing the 19 joints (one 3D coordinate per joint). The fully connected layers also use dropout layers to eliminate dependency on a particular neuron.
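The following Keras sketch approximates this architecture; the (64, 5) input shape (64 regularized points with 5 channels), the use of 1D convolutions over the point dimension, and the dense-layer dropout rate are assumptions for illustration.

```python
# A sketch of the pose-estimation CNN: two convolutions (16, 32 channels)
# with batch normalization and dropout (0.3, 0.4), a flattening layer, a
# 512-neuron dense layer, and a 57-neuron output (19 joints x 3 coordinates).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pose_estimator(n_points=64):
    model = models.Sequential([
        layers.Input(shape=(n_points, 5)),        # 5-channel feature map
        layers.Conv1D(16, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.3),
        layers.Conv1D(32, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.4),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.4),                      # assumed rate for the FC dropout
        layers.Dense(57),                         # (x, y, z) for each of 19 joints
    ])
    model.compile(optimizer="adam", loss="mse")   # regression against Kinect truth
    return model
```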
The training of the proposed CNN requires ground-truth values. Therefore, the Kinect V2 sensor was used to measure the reference coordinates. Generally, it is difficult to predict the hand motions of people using canes or walkers. Therefore, we extracted six extra ground-truth points using the Kinect system, as shown in Figure 6. For example, we estimated locations 9, 11, 13, and 15 for the right hand. The point with the highest prediction confidence was considered the right wrist during the final estimation, and all other candidate points were discarded.
We positioned the MMW radar and Kinect V2 sensor together during the experiment to capture the data needed to train our model. This configuration led to a spatial offset along the x-axis, which was addressed during the preprocessing phase. We ensured frame alignment by connecting both devices to the same laptop and timestamping the data frames from each device. Our proposed method tracks the (x, y, z) coordinates using the reference/ground truth coordinates from Kinect to compare against our model outputs during the real-time simulation, as shown in Figure 7.
Figure 7 illustrates the point clouds, ground truth values, and values estimated using the proposed method. The point clouds extracted by the MMW radar are sparse, due to the limitations of the radio wavelength and inherent noise, which makes it difficult to estimate the activity or pose accurately. Nevertheless, the proposed MMW gait system accurately estimates the various joint locations for different activities, with results close to the ground truth values, as visualized in Figure 7.
Later, these estimations are combined with the micro-Doppler estimate to predict five different human activities, as discussed in Algorithm 1.

3.3. MMW-Based HAR Algorithms

This algorithm identifies different human activities using micro-Doppler signatures as the input for MMW $[M_i : i = 1, 2, 3, 4, 5]$ with five different walking postures. First, it generates the radar signals of the inputs in the form of $d$ = range, $v$ = velocity, and $\theta$ = angle. The range is used to locate the users, and the distance $d$ from the radar is calculated as

$$d = \frac{f_{IF} \, c \, T_c}{2B} = \frac{f_{IF} \, c}{2S}$$

where $f_{IF}$ is the intermediate (IF) frequency of the time-domain signal, $c$ is the speed of light, $T_c$ is the duration of each chirp, $B$ is the chirp bandwidth, and $S = B/T_c$ is the chirp slope.

To distinguish between multiple users, the velocity of each user is collected and denoted by $v$:

$$v = \frac{\omega \lambda}{4\pi T_c}$$

where $\omega$ is the phase difference between consecutive chirps and $\lambda$ is the wavelength.

To find the exact position in the spatial Cartesian coordinate system, we derive the Angle of Arrival (AoA) $\theta$ using multiple neighboring receiving antennas separated by $d_{Rx}$:

$$\theta = \sin^{-1}\!\left(\frac{\omega \lambda}{2\pi \, d_{Rx}}\right)$$
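As a numeric sanity check, the following short Python snippet evaluates these relations with the Table 1 parameters and reproduces the 0.084 m range resolution reported there.

```python
# A numeric sanity check of the FMCW relations using Table 1 parameters.
c = 3e8                       # speed of light (m/s)
B = 1780.393e6                # valid sweep bandwidth (Hz)
fc = 63.008e9                 # central frequency (Hz)

range_res = c / (2 * B)       # range resolution, Delta-R
wavelength = c / fc           # lambda at the central frequency

print(f"Range resolution: {range_res:.3f} m")     # ~0.084 m, matching Table 1
print(f"Wavelength: {wavelength * 1e3:.2f} mm")   # ~4.76 mm at 63.008 GHz
```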
Algorithm 1: The MMW-based HAR algorithm using the CNN.
Require: MMW [M_i : i = 1, 2, 3, 4, 5]
Ensure: Walking posture W_p [W_p : p = 1, 2, 3, 4, 5]
    for M_i ← 1:5 do
        read radar signal
        estimate d ← range
        estimate v ← velocity
        estimate θ ← angle
    end for
    for M_i ← 1:5 do
        read MMW and Kinect V2 data
        extract (x, y, z) coordinates
        extract 5D MMW points → 19 joints
        extract 3D Kinect points → j = 25 joints
        estimate skeleton joints [S_j = j − 6]      ▷ total 19 joints
        estimate S_j + MD_pS                        ▷ MD_pS: micro-Doppler signatures
        get prediction probabilities W_p [W_p : p = 1, 2, 3, 4, 5]
    end for
    read W_p [W_p : p = 1, 2, 3, 4, 5]
    predict walking posture
After that, a four-layered CNN identifies the different human activities using the micro-Doppler signatures $MD_{pS}$ as the input. The data alignment and labelling are calculated using $v$ and time $t$ as

$$MD_{pS} = v(t)$$
The prediction of the skeletal pose estimation is further based on the Kinect V2 sensor and MMW radar values. The Kinect V2 sensor records 25 human joints ($j$) in a 3D coordinate system, whereas the MMW radar records 19 human joints with five coordinates $(x, y, z, D, I)$, where $D$ is the Doppler shift and $I$ is the reflection intensity. The six redundant points, which are the hand joints recorded twice (Section 3.2), are then removed using $S_j = j - 6$.
Finally, the walking posture prediction $W_p\,[W_p : p = 1, 2, 3, 4, 5]$ is made using the relation

$$W_p = S_j + MD_{pS}$$

A minimal sketch of this fusion step follows the activity list below.
DBSCAN is applied to separate the scattered points into different categories, such as normal walking or walking with a limp:
  • W1: walking normally.
  • W2: walking with a stooped posture.
  • W3: walking with a limp.
  • W4: walking with a walker.
  • W5: walking with a cane.
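The sketch below illustrates the fusion step W_p = S_j + MD_pS: the class probabilities from the micro-Doppler CNN branch are combined with those of the skeleton-based branch. Simple probability averaging is an assumption here, as the exact fusion rule is not specified above.

```python
# A hedged sketch of the fusion step: average the per-class probabilities of
# the micro-Doppler branch and the skeleton branch, then take the argmax.
# Averaging is an assumed fusion rule, used here only for illustration.
import numpy as np

ACTIVITIES = ["W1: normal", "W2: stooped", "W3: limp", "W4: walker", "W5: cane"]

def predict_posture(p_doppler, p_skeleton):
    """p_doppler, p_skeleton: length-5 probability vectors from each branch."""
    p_fused = (np.asarray(p_doppler) + np.asarray(p_skeleton)) / 2.0
    return ACTIVITIES[int(np.argmax(p_fused))]   # class with maximum probability

# Example: both branches favor W3 (limping)
print(predict_posture([0.1, 0.1, 0.6, 0.1, 0.1], [0.2, 0.1, 0.5, 0.1, 0.1]))
```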

4. Experiment Setup

Table 1 lists the configuration parameters of the TI’s IWR6843ISK-ODS MMW radar used to classify human activities into the five working classes (W1, W2, W3, W4, W5) listed above.
We conducted a clinical study that was approved by the Institutional Review Board (IRB) at the University of Dayton and recruited 74 participants. The participants' demographics are provided in Table 2. The obtained datasets were used to train our deep learning CNN model and apply it in real time.
The testing environment is shown in Figure 8. The environment consisted of an MMW radar sensor mounted on a tripod at a height of 2 m and tilted at an angle of 15 degrees for better area coverage. Using the Robot Operating System (ROS) on an Ubuntu-based Nvidia Jetson Nano platform, we developed an interface program to connect to the TI MMWAVEICBOOST board and collect the radar's 3D point cloud over the USB port.
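A minimal sketch of such an interface node is shown below; the topic name /ti_mmwave/radar_scan_pcl follows the convention of TI's mmWave ROS driver but is an assumption here, as is the logging-only callback.

```python
# A minimal sketch of a ROS node that collects the radar 3D point cloud.
# The topic name is an assumption based on TI's mmWave ROS driver convention.
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def on_cloud(msg):
    points = list(pc2.read_points(msg, skip_nans=True))  # (x, y, z, ...) tuples
    rospy.loginfo("received frame with %d points", len(points))

rospy.init_node("mmw_gait_logger")
rospy.Subscriber("/ti_mmwave/radar_scan_pcl", PointCloud2, on_cloud)
rospy.spin()   # process incoming frames until shutdown
```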

Data Collection and Labelling

The data were recorded in three scenarios, as shown in Figure 9. We asked participants to walk perpendicularly in front of the sensor for one minute, walking forwards and backwards. Then, we asked participants to walk parallel to the sensor for one minute, again forwards and backwards. Finally, we asked the participants to walk freely in front of the sensor for one minute. The aim was to collect data from different angles to improve the model's training.
Figure 9 only shows data collection for limping and stooped posture due to space limitations. We used a Python script to convert the recorded files into CSV files containing all measurement values, such as time, target_idx, x, y, z, range, velocity, doppler_bin, bearing, intensity, elevation, posX, posY, posZ, velX, velY, and velZ. The micro-Doppler data collection process is illustrated in Figure 10.
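A sketch of this conversion step is shown below; the frames argument is a hypothetical stand-in for the recorder's parsed output, and only the CSV-writing side is shown.

```python
# A sketch of the CSV export step: each recorded frame is written as one row
# with the measurement columns listed above. `frames` is a hypothetical
# iterable of dicts produced by the (omitted) recorder parser.
import csv

COLUMNS = ["time", "target_idx", "x", "y", "z", "range", "velocity",
           "doppler_bin", "bearing", "intensity", "elevation",
           "posX", "posY", "posZ", "velX", "velY", "velZ"]

def export_csv(frames, path):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        writer.writerows(frames)   # one dict per frame, keyed by COLUMNS
```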
Similarly, data collection using the Kinect V2 sensor to train the model for reconstruction of the human joints is presented in Figure 11.

5. Results and Discussion

First, we performed a real-time simulation to observe the point cloud behavior for different activities and then validated our system against the ground-truth values. Second, we calculated the training and prediction accuracy levels.

5.1. Monitoring Individual Activity

We tested each participant condition/activity separately, for example, normal walking, limping, and stooped posture. We monitored the point cloud and the prediction message on the MMW system dashboard, circled in yellow. For example, Figure 12 shows a person walking normally in the library environment. The generated point cloud structure is similar to the ground truth value, i.e., the actual normal walking pose. Similarly, the MMW radar gait dashboard classifies it as "walking normally", as highlighted by the yellow circle. The same trend is visible for the remaining activities, illustrated in Figure 13, Figure 14, Figure 15 and Figure 16.

5.2. Monitoring Multiple Activities

In this step, we examined the detection of multiple activities performed by an individual. The participant was asked to perform multiple activities in sequence, such as walking normally and then limping, so that we could monitor the changes in the output and in the prediction message. As examples, we show the following scenarios:
  • Scenario 1: Walking normally and then limping (Figure 17).
  • Scenario 2: Limping and then stooped posture (Figure 18).

5.3. Monitoring Different Subjects with Different Activities

The proposed system has the ability to detect multiple subjects with different positions/activities at the same time. Therefore, as examples, we examined the results by considering the following scenarios:
  • Scenario 1: Two subjects, one walking with a walker and one walking with a stooped posture (Figure 19).
  • Scenario 2: Three subjects, one walking with a walker, one walking normally, and one limping (Figure 20).

5.3.1. The Accuracy Evaluation and Time Analysis

Table 3 compares the accuracy of the predicted 3D joint coordinates against the ground truth. Here, we computed the Mean Absolute Error (MAE) and Root-Mean-Squared Error (RMSE) for the x, y, and z coordinates of different joints, as illustrated in Table 3. We trained five different models and averaged their results to reduce systematic errors. The average MAE for all 19 joints was 5.86, 2.98, and 5.49 cm for the x, y, and z axes, respectively. Similarly, the average RMSE was 8.66, 4.45, and 7.75 cm for those axes, respectively.
The results suggest that the x and z axes have slightly larger errors than the y-axis, since the observed movements involved more horizontal and vertical displacement of the body parts. In contrast, the error along the y-axis was minimal (2.13–4.13 cm) due to the smaller displacement in depth. Generally, most joints' MAE values were smaller than 8 cm. The most notable exceptions were the right and left wrist joints: the hand joints require a higher resolution for accurate localization, and since the MMW radar's range resolution is 0.084 m with the 1780.393 MHz sweep bandwidth listed in Table 1, it is difficult for the model to reconstruct these points precisely.
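For reference, the sketch below shows how the per-axis MAE and RMSE values reported in Table 3 can be computed from predicted and ground-truth joint coordinates.

```python
# A sketch of the per-axis error metrics in Table 3: MAE and RMSE between
# predicted and Kinect ground-truth joint coordinates, per axis, in cm.
import numpy as np

def joint_errors(pred, truth):
    """pred, truth: (n_samples, 19, 3) arrays of joint coordinates in cm."""
    diff = pred - truth
    mae = np.mean(np.abs(diff), axis=(0, 1))        # per-axis MAE -> shape (3,)
    rmse = np.sqrt(np.mean(diff**2, axis=(0, 1)))   # per-axis RMSE -> shape (3,)
    return mae, rmse
```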
We used the Nvidia Jetson Nano, whose Graphics Processing Unit (128-core Maxwell) and Central Processing Unit (quad-core ARM A57 @ 1.43 GHz) offer the required computational power for HAR [44]. Across the different configurations, the total inference time required to process all 50,400 frames ranged from 2.2 s to 3.7 s, the total power consumption ranged from 3900.2 mW to 5763.4 mW, and the average frame processing time ranged from 54.3 μs to 92.5 μs.

5.3.2. Prediction Accuracy

Figure 21 illustrates the training and validation accuracy during the training process, while Figure 22 visualizes the training and validation losses. Figure 21 and Figure 22 show that the model adjusted its weights to identify all the activities correctly. Furthermore, the accuracy improved and the loss decreased with more epochs, indicating that the model can generalize and accurately predict the outcome on validation data.
Lastly, Figure 23 plots the confusion matrix for all activities. The first element shows that our proposed method predicted normal walking (W1) with an accuracy of 97.2%; it misclassified normal walking as stooped-posture walking 1.3% of the time, and confused it with limping, cane walking, and walker walking 0.6%, 0.2%, and 0.7% of the time, respectively.
The proposed method attained the highest prediction accuracy of 98.8% for walking with a walker, closely followed by 98.4% for walking with a cane. The first three activities (W1, W2, and W3) demonstrated slightly lower prediction accuracy because their gait patterns sometimes overlap. Nevertheless, the lowest accuracy of 95.7% is still clinically relevant for HAR for rehabilitation or remote monitoring purposes.

6. Conclusions

This study demonstrated the ability of the MMW radar system to accurately identify five different gait patterns. The proposed system combines pose estimation techniques with micro-Doppler signatures obtained by the low-cost radar system. The use of MMW radar for gait analysis preserves users’ privacy, does not require line of sight, can track multiple people at the same time, and can operate in varied environmental conditions. The information generated by this approach could be used to recognize other gait patterns and kinematic variables, such as joint angles.
This work has the potential to provide a number of practical clinical benefits, including the ability to track changes in the gait of one or more individuals over extended periods of time in both home and institutional settings. This would allow healthcare professionals to remotely monitor and assess the effectiveness of rehabilitation interventions outside the clinical setting and provide data that may indicate the need for additional interventions. In institutional settings, such as skilled nursing facilities, this system could provide data on the walking ability of residents to mitigate fall risk and to direct resources more effectively.

Author Contributions

Conceptualization, M.A.A. and V.P.C.; methodology, M.A.A. and A.K.A.; software, M.A.A., A.K.A. and O.A.; validation, M.A.A. and K.J.; formal analysis, M.A.A., A.K.A., K.G., M.B., S.T. and O.A.; investigation, M.A.A.; resources, K.J. and V.P.C.; data curation, M.A.A., K.G., M.B. and S.T.; writing—original draft preparation, M.A.A.; writing—review and editing, M.A.A., K.J. and V.P.C.; visualization, M.A.A., K.G., M.B., S.T. and K.J.; supervision, K.J. and V.P.C.; project administration, K.J. and V.P.C.; funding acquisition, V.P.C. All authors have read and agreed to the published version of the manuscript.

Funding

We would like to acknowledge the financial support received from the School of Engineering at the University of Dayton.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of the University of Dayton (protocol code 18549055, 10 January 2022).

Informed Consent Statement

Informed consent was obtained from all participants.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

The authors acknowledge the editors and reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mshali, H.; Lemlouma, T.; Moloney, M.; Magoni, D. A Survey on Health Monitoring Systems for Health Smart Homes. Int. J. Ind. Ergon. 2018, 66, 26–56.
  2. Guan, C.; Niu, H. Frailty Assessment in Older Adults with Chronic Obstructive Respiratory Diseases. Clin. Interv. Aging 2018, 13, 1513–1524.
  3. Saboor, A.; Ahmad, R.; Ahmed, W.; Kiani, A.K.; Moullec, Y.L.; Alam, M.M. On Research Challenges in Hybrid Medium-Access Control Protocols for IEEE 802.15.6 WBANs. IEEE Sens. J. 2019, 19, 8543–8555.
  4. Saboor, A.; Mustafa, A.; Ahmad, R.; Khan, M.A.; Haris, M.; Hameed, R. Evolution of Wireless Standards for Health Monitoring. In Proceedings of the 2019 9th Annual Information Technology, Electromechanical Engineering and Microelectronics Conference (IEMECON), Jaipur, India, 13–15 March 2019; pp. 268–272.
  5. Khan, M.A.; Saboor, A.; Kim, H.; Park, H. A Systematic Review of Location Aware Schemes in the Internet of Things. Sensors 2021, 21, 3228.
  6. Agham, N.; Chaskar, U. Prevalent Approach of Learning Based Cuffless Blood Pressure Measurement System for Continuous Health-Care Monitoring. In Proceedings of the 2019 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Istanbul, Turkey, 26–28 June 2019; pp. 1–5.
  7. Baig, M.M.; Afifi, S.; GholamHosseini, H.; Mirza, F. A Systematic Review of Wearable Sensors and IoT-Based Monitoring Applications for Older Adults—A Focus on Ageing Population and Independent Living. J. Med. Syst. 2019, 43, 233.
  8. Alanazi, M.A.; Alhazmi, A.K.; Yakopcic, C.; Chodavarapu, V.P. Machine Learning Models for Human Fall Detection Using Millimeter Wave Sensor. In Proceedings of the 2021 55th Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 24 March 2021; pp. 1–5.
  9. Yu, C.; Xu, Z.; Yan, K.; Chien, Y.-R.; Fang, S.-H.; Wu, H.-C. Noninvasive Human Activity Recognition Using Millimeter-Wave Radar. IEEE Syst. J. 2022, 16, 3036–3047.
  10. di Biase, L.; Di Santo, A.; Caminiti, M.L.; De Liso, A.; Shah, S.A.; Ricci, L.; Di Lazzaro, V. Gait Analysis in Parkinson's Disease: An Overview of the Most Accurate Markers for Diagnosis and Symptoms Monitoring. Sensors 2020, 20, 3529.
  11. Usmani, S.; Saboor, A.; Haris, M.; Khan, M.A.; Park, H. Latest Research Trends in Fall Detection and Prevention Using Machine Learning: A Systematic Review. Sensors 2021, 21, 5134.
  12. Mekruksavanich, S.; Jitpattanakul, A. LSTM Networks Using Smartphone Data for Sensor-Based Human Activity Recognition in Smart Homes. Sensors 2021, 21, 1636.
  13. Alrashdi, I.; Siddiqi, M.H.; Alhwaiti, Y.; Alruwaili, M.; Azad, M. Maximum Entropy Markov Model for Human Activity Recognition Using Depth Camera. IEEE Access 2021, 9, 160635–160645.
  14. Jia, Y.; Guo, Y.; Wang, G.; Song, R.; Cui, G.; Zhong, X. Multi-Frequency and Multi-Domain Human Activity Recognition Based on SFCW Radar Using Deep Learning. Neurocomputing 2021, 444, 274–287.
  15. Reddy Maddikunta, P.K.; Srivastava, G.; Reddy Gadekallu, T.; Deepa, N.; Boopathy, P. Predictive Model for Battery Life in IoT Networks. IET Intell. Transp. Syst. 2020, 14, 1388–1395.
  16. van Wamelen, D.J.; Sringean, J.; Trivedi, D.; Carroll, C.B.; Schrag, A.E.; Odin, P.; Antonini, A.; Bloem, B.R.; Bhidayasiri, R.; Chaudhuri, K.R. Digital Health Technology for Non-Motor Symptoms in People with Parkinson's Disease: Futile or Future? Parkinsonism Relat. Disord. 2021, 89, 186–194.
  17. Sengupta, A.; Jin, F.; Cao, S. NLP Based Skeletal Pose Estimation Using MmWave Radar Point-Cloud: A Simulation Approach. In Proceedings of the 2020 IEEE Radar Conference (RadarConf20), Florence, Italy, 21 September 2020; pp. 1–6.
  18. Yang, X.; Liu, J.; Chen, Y.; Guo, X.; Xie, Y. MU-ID: Multi-User Identification Through Gaits Using Millimeter Wave Radios. In Proceedings of the IEEE INFOCOM 2020—IEEE Conference on Computer Communications, Toronto, ON, Canada, 6–9 July 2020; pp. 2589–2598.
  19. Yang, Z.; Pathak, P.H.; Zeng, Y.; Liran, X.; Mohapatra, P. Monitoring Vital Signs Using Millimeter Wave. In Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing, Paderborn, Germany, 5 July 2016; pp. 211–220.
  20. Cen, S.H.; Newman, P. Precise Ego-Motion Estimation with Millimeter-Wave Radar Under Diverse and Challenging Conditions. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 1–8.
  21. Wonsetler, E.C.; Bowden, M.G. A Systematic Review of Mechanisms of Gait Speed Change Post-Stroke. Part 1: Spatiotemporal Parameters and Asymmetry Ratios. Top. Stroke Rehabil. 2017, 24, 435–446.
  22. Mulas, I.; Putzu, V.; Asoni, G.; Viale, D.; Mameli, I.; Pau, M. Clinical Assessment of Gait and Functional Mobility in Italian Healthy and Cognitively Impaired Older Persons Using Wearable Inertial Sensors. Aging Clin. Exp. Res. 2021, 33, 1853–1864.
  23. Saboor, A.; Kask, T.; Kuusik, A.; Alam, M.M.; Le Moullec, Y.; Niazi, I.K.; Zoha, A.; Ahmad, R. Latest Research Trends in Gait Analysis Using Wearable Sensors and Machine Learning: A Systematic Review. IEEE Access 2020, 8, 167830–167864.
  24. Zebin, T.; Scully, P.J.; Ozanyan, K.B. Human Activity Recognition with Inertial Sensors Using a Deep Learning Approach. In Proceedings of the 2016 IEEE SENSORS, Orlando, FL, USA, 30 October–3 November 2016; pp. 1–3.
  25. Muro-de-la-Herran, A.; Garcia-Zapirain, B.; Mendez-Zorrilla, A. Gait Analysis Methods: An Overview of Wearable and Non-Wearable Systems, Highlighting Clinical Applications. Sensors 2014, 14, 3362–3394.
  26. Rescio, G.; Leone, A.; Siciliano, P. Supervised Machine Learning Scheme for Electromyography-Based Pre-Fall Detection System. Expert Syst. Appl. 2018, 100, 95–105.
  27. Umair Bin Altaf, M.; Butko, T.; Juang, B.-H. Acoustic Gaits: Gait Analysis With Footstep Sounds. IEEE Trans. Biomed. Eng. 2015, 62, 2001–2011.
  28. Chiang, T.-H.; Su, Y.-J.; Shiu, H.-R.; Tseng, Y.-C. 3D Gait Tracking by Acoustic Doppler Effects. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 3146–3149.
  29. Huitema, R.B.; Hof, A.L.; Postema, K. Ultrasonic Motion Analysis System—Measurement of Temporal and Spatial Gait Parameters. J. Biomech. 2002, 35, 837–842.
  30. Maki, H.; Ogawa, H.; Yonezawa, Y.; Hahn, A.W.; Caldwell, W.M. A New Ultrasonic Stride Length Measuring System. Biomed. Sci. Instrum. 2012, 48, 282–287.
  31. Steinert, A.; Sattler, I.; Otte, K.; Röhling, H.; Mansow-Model, S.; Müller-Werdan, U. Using New Camera-Based Technologies for Gait Analysis in Older Adults in Comparison to the Established GAITRite System. Sensors 2019, 20, 125.
  32. Yang, C.; Ugbolue, U.; Carse, B.; Stankovic, V.; Stankovic, L.; Rowe, P. Multiple Marker Tracking in a Single-Camera System for Gait Analysis. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 3128–3131.
  33. Zhao, M.; Li, T.; Alsheikh, M.A.; Tian, Y.; Zhao, H.; Torralba, A.; Katabi, D. Through-Wall Human Pose Estimation Using Radio Signals. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7356–7365.
  34. Cagliyan, B.; Gurbuz, S.Z. Micro-Doppler-Based Human Activity Classification Using the Mote-Scale BumbleBee Radar. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2135–2139.
  35. Sengupta, A.; Cao, S. MmPose-NLP: A Natural Language Processing Approach to Precise Skeletal Pose Estimation Using MmWave Radars. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–12.
  36. An, S.; Ogras, U.Y. MARS: MmWave-Based Assistive Rehabilitation System for Smart Healthcare. ACM Trans. Embed. Comput. Syst. 2021, 20, 1–22.
  37. Seifert, A.-K.; Amin, M.G.; Zoubir, A.M. Toward Unobtrusive In-Home Gait Analysis Based on Radar Micro-Doppler Signatures. IEEE Trans. Biomed. Eng. 2019, 66, 2629–2640.
  38. Sahraian, M.A.; Yadegari, S.; Azarpajouh, R.; Forughipour, M. Avascular Necrosis of the Femoral Head in Multiple Sclerosis: Report of Five Patients. Neurol. Sci. 2012, 33, 1443–1446.
  39. Benatru, I.; Vaugoyeau, M.; Azulay, J.-P. Postural Disorders in Parkinson's Disease. Neurophysiol. Clin. 2008, 38, 459–465.
  40. Hobeika, C.P. Equilibrium and Balance in the Elderly. Ear Nose Throat J. 1999, 78, 558–566.
  41. Texas Instruments. IWR6843 Intelligent MmWave Overhead Detection Sensor (ODS) Antenna Plug-in Module. Available online: https://www.ti.com/tool/IWR6843ISK-ODS (accessed on 28 June 2022).
  42. Sander, J.; Ester, M.; Kriegel, H.-P.; Xu, X. Density-Based Clustering in Spatial Databases: The Algorithm GDBSCAN and Its Applications. Data Min. Knowl. Discov. 1998, 2, 169–194.
  43. Anthimopoulos, M.; Christodoulidis, S.; Ebner, L.; Christe, A.; Mougiakakou, S. Lung Pattern Classification for Interstitial Lung Diseases Using a Deep Convolutional Neural Network. IEEE Trans. Med. Imaging 2016, 35, 1207–1216.
  44. Jetson Nano Developer Kit. Available online: https://developer.nvidia.com/embedded/jetson-nano-developer-kit (accessed on 28 June 2022).
Figure 1. A flow diagram of the MMW radar gait system.
Figure 2. TI IWR6843ISK-ODS MMW radar (left) and MMWAVEICBOOST evaluation board (right).
Figure 3. An overview of the proposed HAR detection method using micro-Doppler signatures.
Figure 4. Micro-Doppler signatures of different activities.
Figure 5. The training and estimated joint positions.
Figure 6. Joint estimation using the 5D time-series point cloud.
Figure 7. Proposed system for reconstructing human joints from a point cloud: MMW radar point clouds (left), proposed estimation (middle), Kinect ground truth (right).
Figure 8. Experiment setup.
Figure 9. Data collection in the laboratory environment.
Figure 10. Micro-Doppler spectrograms produced during the data collection process for different activities: (a) walking normally, (b) walking with a limp, (c) walking with a stooped posture, (d) walking with a cane, (e) walking with a walker.
Figure 11. Joint estimation during the data collection process for different activities.
Figure 12. Subject walking normally.
Figure 13. Subject limping.
Figure 14. Subject walking with a stooped posture.
Figure 15. Subject walking with a cane.
Figure 16. Subject walking with a walker.
Figure 17. Walking normally and then limping.
Figure 18. Limping and then stooped posture.
Figure 19. Two subjects, one walking with a walker and one walking with a stooped posture.
Figure 20. Three subjects, one walking with a walker, one walking normally, and one limping.
Figure 21. The results for the training and validation data (accuracy).
Figure 22. The results for the training and validation data (loss function).
Figure 23. Confusion matrix for five different human activities.
Table 1. IWR6843ISK-ODS Radar Parameters.

Parameter | Physical Description
Start frequency | 60.75 GHz
Number of TX | 3 TX
Number of RX | 4 RX
Number of samples per chirp | 96
Number of chirps | 288
Maximum velocity | 16.2 km/h
Velocity resolution | 0.324 km/h
Idle time | 30 µs
ADC valid start time | 25 µs
F_s (sampling frequency) | 2.950 Msps
F_c (central frequency) | 63.008 GHz
Valid sweep bandwidth (BW) | 1780.393 MHz
Periodicity (T_Frame) | 55 ms
Max unambiguous range (R_max) | 8.083 m
Range resolution (∆R) | 0.084 m
Max unambiguous Doppler (D_max) | ±4.450 m/s
Doppler resolution (∆D) | 0.093 m/s
Range detection threshold | 15 dB
Doppler detection threshold | 15 dB
Table 2. The participants' distribution.

Parameter | Mean ± SD (Range)
Age (years) | 24 ± 7.36 (21–53)
Height (cm) | 170 ± 5.55 (160–185.42)
Weight (kg) | 75 ± 12.59 (55–115)
BMI (kg/m²) | 25.47 ± 4.36 (19.26–40.75)
Gender (M/F) | 42/32
Table 3. Average localization error for 3D joint coordinates (all values in cm; X horizontal, Y depth, Z vertical).

No. | Joint | X MAE | X RMSE | Y MAE | Y RMSE | Z MAE | Z RMSE
1 | Head | 6.35 | 9.46 | 2.59 | 3.59 | 7.10 | 9.58
2 | Neck | 5.68 | 8.71 | 2.47 | 3.23 | 6.47 | 8.87
3 | Spine Shoulder | 5.51 | 8.46 | 2.13 | 3.02 | 6.30 | 8.61
4 | Shoulder Left | 5.82 | 8.80 | 2.28 | 3.28 | 5.77 | 8.01
5 | Shoulder Right | 5.64 | 8.58 | 2.66 | 4.02 | 6.01 | 8.11
6 | Elbow Left | 6.41 | 9.12 | 3.26 | 5.01 | 7.08 | 9.61
7 | Elbow Right | 6.85 | 9.63 | 3.61 | 5.62 | 7.30 | 9.81
8 | Wrist Left | 9.23 | 12.66 | 4.02 | 5.71 | 12.45 | 16.23
9 | Wrist Right | 9.62 | 13.14 | 4.13 | 6.14 | 13.03 | 16.52
10 | Spine Mid | 5.05 | 7.81 | 2.02 | 2.86 | 5.71 | 7.85
11 | Spine Base | 4.55 | 7.12 | 2.45 | 3.87 | 4.87 | 6.72
12 | Hip Left | 4.54 | 7.02 | 2.45 | 3.87 | 4.72 | 6.56
13 | Hip Right | 4.45 | 7.02 | 2.56 | 4.04 | 4.82 | 6.67
14 | Knee Left | 4.46 | 7.01 | 3.07 | 4.52 | 2.14 | 3.43
15 | Knee Right | 5.10 | 7.42 | 3.27 | 4.72 | 2.50 | 4.21
16 | Ankle Left | 4.45 | 7.08 | 3.08 | 4.54 | 2.23 | 3.42
17 | Ankle Right | 5.81 | 8.32 | 3.34 | 5.09 | 1.65 | 4.26
18 | Foot Left | 5.48 | 8.25 | 3.72 | 6.02 | 2.05 | 4.12
19 | Foot Right | 6.27 | 8.87 | 3.46 | 5.42 | 2.10 | 4.69
— | 19-point average | 5.86 | 8.66 | 2.98 | 4.45 | 5.49 | 7.75
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
