Communication

Assessment of Cognitive Fatigue from Gait Cycle Analysis

Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019, USA
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Technologies 2023, 11(1), 18; https://doi.org/10.3390/technologies11010018
Submission received: 6 December 2022 / Revised: 13 January 2023 / Accepted: 21 January 2023 / Published: 26 January 2023
(This article belongs to the Collection Selected Papers from the PETRA Conference Series)

Abstract

Cognitive Fatigue (CF) is the decline in cognitive abilities caused by prolonged exposure to mentally demanding tasks. In this paper, we use gait cycle analysis, a biometric method based on human locomotion, to identify cognitive fatigue in individuals. The proposed system takes two asynchronous videos of an individual's gait and classifies whether the individual is cognitively fatigued. We leverage the pose estimation library OpenPose to extract body keypoints from the video frames. To capture the spatial and temporal information of the gait cycle, a CNN-based model extracts embedded features, which are then used to classify the individual's cognitive fatigue level. To train and test the model, a gait dataset was built from 21 participants by collecting walking data before and after inducing cognitive fatigue with clinically used games. The proposed model classifies cognitive fatigue from an individual's gait data with an accuracy of 81%.

1. Introduction

Cognitive Fatigue (CF) is a unique kind of exhaustion brought on by prolonged cognitive activity that strains people’s mental faculties [1]. Sustained cognitive fatigue leads to poor performance at work, workplace accidents, and reduced test scores in students [2,3]. CF is also one of the main causes of day-to-day accidents. For example, driving while cognitively fatigued is one of the main causes of road accidents, and a staggering 41% of the population has reported driving while in a fatigued state at some point in their life [4]. Likewise, 68% of all aviation accidents have been attributed to human error enabled by some form of cognitive fatigue [5]. Another study found that surgical residents were cognitively fatigued due to lack of sleep for 48% of their waking time, which had an impact on their performance [6]. Hence, it is important to identify cognitive fatigue in a person to avert disasters that may arise from a decrease in cognitive ability. In this paper, we propose a method to assess cognitive fatigue from gait data captured using multiple RGB cameras.
Gait is a person’s pattern of walking. The gait cycle, or stride, is the sequence of steps from when one foot contacts the ground until that same foot contacts the ground again. Figure 1 shows the gait cycle of an individual; a gait sequence consists of multiple such gait cycles. Various factors disturb an individual’s gait cycle, such as physical fatigue, cognitive fatigue, heightened emotion, and diseases such as multiple sclerosis and stroke [7,8,9]. Existing methods of CF detection rely on electroencephalogram (EEG) data, functional magnetic resonance imaging (fMRI) data, or eye motion [10,11,12]. These approaches often require specialized sensors such as an EEG headset, a dedicated machine to capture fMRI images, or a close view of the facial features. Such systems may not be suitable for day-to-day use, as they are neither robust nor comfortable to wear. In contrast, gait analysis can be performed from a distance, without the subject’s cooperation and without intruding on the subject, using images or videos captured with simple RGB cameras.
In this paper, we discuss the methodologies and experimental setup for collecting gait data from individuals before and after inducing cognitive fatigue using clinically proven games. We use the proposed setup to collect data from 21 individuals, which are then used to train and validate the proposed deep-learning-based model for identifying cognitive fatigue.
The contributions of this paper are:
  • A computer-vision-based system that uses gait sequence analysis to identify an individual’s cognitive fatigue state.
  • A dataset of gait sequences of individuals in non-cognitively fatigued and cognitively fatigued states.
  • A 1D-CNN-based model to classify cognitive fatigue in individuals.
The rest of the paper is organized as follows: Section 2 discusses related work. Section 3 introduces the data collection setup, collection methodologies, and annotations used on the collected data. Section 4 discusses the proposed solution. Section 5 presents the result analysis, followed by conclusions in Section 6.

2. Related Work

A variety of studies have sought to identify or measure both physical and cognitive fatigue. As there is no standard scale for measuring fatigue, each study aims to detect a specific type of fatigue, usually defined by its authors. The manifestation of physical fatigue is visible in the human body. One common example is elevated heart rate, which can be used to measure physical fatigue [14] and is also an indicator of cognitive stress [15]. Inertial measurement unit (IMU) and electroencephalogram (EEG) sensors have been used to measure fatigue and alert workers in order to reduce workplace injuries and accidents [16], and IMU sensors have been used to detect fatigue induced by running [17]. The aforementioned approaches depend on sensors that need to be attached to the subject’s body to measure physical or cognitive fatigue. Although wearable heart rate monitors and IMU sensors are inexpensive, non-intrusive and contact-free techniques are preferable in real-world use cases.
The physical and emotional state of an individual can be inferred from gait analysis. One study identifies both the identity and the emotion of an individual from gait analysis [18], and a similar study infers an individual’s emotion from their gait pattern [19]. Physical and cognitive fatigue have been observed to change individuals’ gait patterns. Studies have reported that physical fatigue increases step width during walking in older and younger adults and impairs head stability and postural control when balancing on one foot [8,20,21,22]. Similar studies have also reported disturbances in the gait patterns of older adults due to cognitive fatigue [23].
Pose estimation models such as OpenPose, HRNet, and MediaPipe have opened up new directions for activity recognition research such as gait analysis [24,25]. These models extract the skeletal joints, or body keypoints, of the human body, which deep learning models use as input features to achieve high performance and accuracy in classification tasks. One study leverages an RNN- and CNN-based multimodal model to classify abnormal gait using body keypoint sequences and average foot pressure data [26]. In a similar study, the authors use extracted body keypoints to identify the emotion of individuals with a model that leverages group convolution [27]. An Attention Enhanced Temporal Graph Convolutional Network (AT-GCN) has been proposed for identity and emotion recognition [18]; this multi-task model effectively captures discriminative spatiotemporal gait features and provides higher accuracy. A Graph Convolutional Network (GCN) has also been used for gait recognition [28]; it considers both the spatial and temporal aspects of gait and achieves state-of-the-art accuracy on the public CASIA-B gait dataset. In a recent study, the authors classified physically fatigued and non-fatigued gait cycles via a multi-task RNN [7], in which a primary branch performs fatigue classification while an auxiliary branch identifies the first supporting foot in the gait cycle.
CNNs have been widely used in gait recognition. One study uses a deep convolutional neural network to classify gait, evaluated on the CASIA-B dataset [29]; the proposed network works with a small dataset and does not require data augmentation. The input to that network is the Gait Energy Image (GEI) representation proposed in [30]. Another work uses a wearable inertial measurement unit (IMU) to obtain gait-pattern data and an LSTM-CNN fusion model to classify abnormal gait patterns from those data [31]. Alternatively, encoding a 3D skeleton sequence into an image-like representation allows convolutional neural networks, which achieve high performance in image recognition, to be employed [32]. Inspired by these works, we adopted a 1D-CNN-based solution applied to body keypoints extracted from RGB image sequences of gait data to classify the cognitive fatigue of individuals.

3. Experimental Setup, Dataset Collection and Annotation

3.1. Experimental Setup

In this section, we discuss the experimental setup and the data collection procedure. Figure 2 illustrates the steps taken during data collection. For each participant, the data collection procedure is as follows:
  • Fill out an initial survey with the participant’s data and initial cognitive fatigue (CF) level.
  • Collect walking (gait) data.
  • Play multiple rounds of the 2-back game and fill out a survey reporting the CF level.
  • Play multiple rounds of a VR game and fill out a survey reporting the CF level.
  • Collect walking (gait) data and fill out a final survey reporting the CF level.
Figure 3 shows an overview of the experimental setup used for data collection (denoted as the Gait block in Figure 2). A 3.5 m long marked region is used for the gait data collection. The participant starts walking from the gray region marked as region A and walks all the way to region B. Once the participant reaches region B, they take a 180-degree turn and walk back to region A. Two cameras capture video of the participant’s gait pattern in the blue region. Camera 1 captures the side view of the participant, and Camera 2 captures the front view (while the participant walks toward region B) and back view (while the participant turns and walks back to region A). The videos are captured at 60 FPS along with depth, though the depth information is not used in this study.
Our goal was to investigate the change in gait pattern due to cognitive fatigue. Multiple rounds of VR games and N-back games were used to induce cognitive fatigue in the participants. The N-back game is a sequential cognitive test that assesses a person’s ability to store, change, and manipulate information in short-term memory. In an N-back game, participants must decide whether each item displayed on the screen is the same as the stimulus that was displayed N steps earlier. The N-back game has been used by clinical researchers to induce cognitive fatigue [33]. The 2-back game, a variation of the N-back game in which the participant has to respond to the stimulus displayed two steps earlier, is used in our study to induce cognitive fatigue in participants. The participants also played Beat Saber, a popular VR game in which the player has to slice oncoming blocks in the direction shown on each block, to the beat of the music. Figure 4 shows a participant playing Beat Saber during the data collection process. VR games also induce visual and cognitive fatigue in users [34]. As it is difficult to assess or measure CF objectively, we ask the participants to report their CF levels. The CF level in the survey is reported using a visual analog scale (VAS) [35]. The participants responded to the VAS questionnaire a total of four times. The CF level on the VAS ranges from 1 to 10, where 1 means least cognitively fatigued and 10 means most cognitively fatigued.
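To make the task concrete, the following is a minimal sketch of the 2-back decision rule: the participant answers “match” only when the current stimulus equals the one shown two steps earlier. The function name and stimulus values are illustrative and are not taken from the study’s software.

```python
from collections import deque

def n_back_targets(stimuli, n=2):
    """Return, for each stimulus, whether it matches the one shown n steps earlier."""
    history = deque(maxlen=n)
    targets = []
    for s in stimuli:
        # A "match" is only possible once n earlier stimuli have been shown.
        targets.append(len(history) == n and s == history[0])
        history.append(s)  # the oldest item is dropped automatically once the deque is full
    return targets

# Example: letters shown one at a time; True marks the items requiring a "match" response.
print(n_back_targets(list("ABABCAC")))  # [False, False, True, True, False, False, True]
```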
For this study, our aim was to collect gait data from individuals before and after inducing cognitive fatigue. We collected the data in two sessions from each participant, on different days and at different times of day. If a participant attended a morning session for data collection, the next session for that participant was held in the afternoon of another day. This was done to reduce bias in the collected data, as people feel different levels of fatigue throughout the day: some participants may feel fatigued during the morning but energetic during the afternoon, and vice versa.

3.2. Data Collection and Annotation

We collected and annotated a novel dataset from 21 subjects. Among the 21 participants, 16 were male and 5 were female, with an average age of 23.75 years. The participants’ heights ranged from 154.94 cm to 187.96 cm. Even though there was significant variance in the participants’ heights, they had similar body types. We did not consider the physical fitness of the participants, but we excluded participants with health conditions such as arthritis, spinal cord injury (SCI), and muscle disorders, which are known to disturb a person’s gait pattern. The variance in the collected data ensures that the model does not learn these confounding features when training on our dataset. The data were collected from the participants with informed consent. Figure 2 shows the flow of the data collection steps.
During the data collection phase, we obtained four RGB videos for each session. These videos contained the gait sequences of the participants from the front and side views for the initial and final trials of the session. The collected videos were trimmed to remove the beginning and end portions in which the participants were stationary. Due to the cameras’ angle of view, unnecessary objects appeared in the video frames, so the videos were cropped to remove these objects and keep only the participant in view. The videos were then run through OpenPose to obtain the body keypoints of the participants in every frame [24]. For each video, we obtained roughly 350 to 400 frames, depending on the duration of the individual’s trial. Due to lighting conditions and the presence of objects in the background, OpenPose sometimes detected incorrect body keypoints. As a result, we checked each frame individually for anomalies and discarded the anomalous frames.
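The per-frame extraction and anomaly filtering can be sketched as follows. OpenPose writes one JSON file per frame whose pose_keypoints_2d array holds 75 values (25 BODY_25 keypoints × x, y, confidence); the confidence threshold below is an illustrative stand-in for the manual inspection described above, and the directory layout is an assumption.

```python
# Hedged sketch: load per-frame OpenPose BODY_25 keypoints and drop suspect frames.
import glob
import json
import numpy as np

def load_keypoint_sequence(json_dir, min_mean_conf=0.3):
    frames = []
    for path in sorted(glob.glob(f"{json_dir}/*_keypoints.json")):
        with open(path) as f:
            data = json.load(f)
        if not data["people"]:                  # OpenPose found nobody in this frame
            continue
        kp = np.array(data["people"][0]["pose_keypoints_2d"]).reshape(25, 3)  # (x, y, conf)
        if kp[:, 2].mean() < min_mean_conf:     # likely misdetection -> discard the frame
            continue
        frames.append(kp)
    return np.stack(frames)                     # shape: (num_frames, 25, 3)
```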
After each session, we obtained the CF level of the participants through a VAS on a scale of 1 to 10, where 1 was the least cognitively fatigued and 10 was the most cognitively fatigued. Based on the participants’ feedback, we labeled a gait sequence as cognitively fatigued if the reported CF score was 6 or higher, and as not cognitively fatigued otherwise. We augmented this labeled dataset and used it to train our deep neural network model.
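A minimal sketch of this labeling rule, with illustrative names:

```python
def label_from_vas(vas_score, threshold=6):
    """Map a 1-10 VAS score to a binary label: 1 = cognitively fatigued, 0 = not fatigued."""
    return int(vas_score >= threshold)

print([label_from_vas(s) for s in [3, 6, 8, 5]])  # [0, 1, 1, 0]
```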

4. Problem Formulation and Methods

4.1. Problem Statement

Gait analysis and classification differ from the typical action recognition discussed in Section 2 in that the model must identify variations within a single action (e.g., walking). The difference between the cognitively fatigued and non-cognitively fatigued samples of the same subject is extremely subtle and may not be visible in the captured videos. On the other hand, walking or gait patterns vary greatly among different subjects. This subtle intra-class variation and massive inter-class variation make gait classification a difficult problem, as the massive inter-class difference overwhelms the subtle intra-class variation [7]. The gait sequence can also be thought of as a temporal sequence of body keypoints. Figure 5 shows a single stride of a participant plotted using the body keypoints extracted with OpenPose. When a person walks, they repeat this stride, which makes gait a periodic phenomenon. The gait of a person therefore contains both spatial and temporal information. If we directly feed the extracted body keypoints into networks such as RNNs or LSTMs, they will discard the spatial information contained in the human pose. As a result, the proposed model should take both the spatial information (i.e., the positions of the limbs, their relative distances to each other, etc.) in the body keypoints and the temporal information (i.e., the differences between subsequent frames of body keypoints) into consideration.

4.2. Proposed Method

Convolutional neural networks provide state-of-the-art results without the need for any domain-specific feature engineering. Our proposed solution uses 1D-CNN-based models to extract the spatial and temporal features from the body keypoints of an individual’s gait sequence captured from the front and side views. The extracted features of both views are then passed through fully connected layers to classify the sequence. As the two 1D-CNN blocks extract features from the input streams independently, our proposed system works even if the body keypoint sequences of the two views are not synchronized. Figure 6 shows an overview of the proposed system.
As mentioned in Section 3.2, we obtain 4 gait sequences, or 2 data samples, from each participant per session. Of these 2 samples, one (front and side view) represents the non-cognitively fatigued instance and the other (front and side view) represents the cognitively fatigued instance of a participant. We obtain a total of 70 samples from the participants. As we collect samples with both labels from each participant, our dataset is balanced. To prevent label leakage, we ensure that the samples collected from a single individual do not end up in both the training and testing sets. For consistency, we take the first 300 frames of each video to train our models, where the extracted features of each frame have shape 25 × 3. To train the models shown in Table 1 (except our proposed model), we concatenate the frames of the front view and side view and flatten the extracted features, which gives us a tensor of shape 600 × 75. When training our proposed model, we pass the frames obtained from each video through the two 1D-CNN blocks shown in Figure 6. Both 1D-CNN blocks have the same architecture. Figure 7 shows the network architecture of the 1D-CNN model used in block 1. The block 1 model takes an input of shape 300 × 75, which represents the front view of a participant’s gait. The block 2 model takes an input of the same shape, which represents the side view of the participant’s gait in the same session. The features extracted by the 1D-CNN models are then passed through two fully connected layers to classify the participant’s cognitive fatigue state.
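A minimal Keras sketch of this two-branch design is given below, assuming the 300 × 75 input per view described above. The number of Conv1D layers, filter counts, and kernel sizes are illustrative assumptions; the exact layer configuration of Figure 7 is not reproduced in the text.

```python
# Sketch of the two-branch 1D-CNN classifier (architectural details assumed).
from tensorflow.keras import layers, models

def cnn_block(name):
    """One 1D-CNN feature extractor; the same architecture is used for both views."""
    return models.Sequential([
        layers.Input(shape=(300, 75)),          # 300 frames x 75 flattened keypoint values
        layers.Conv1D(64, 5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(128, 5, activation="relu"),
        layers.GlobalAveragePooling1D(),
    ], name=name)

front_in = layers.Input(shape=(300, 75), name="front_view")
side_in = layers.Input(shape=(300, 75), name="side_view")

# The two branches process their streams independently, so the views need not be synchronized.
features = layers.Concatenate()([cnn_block("block1")(front_in),
                                 cnn_block("block2")(side_in)])
x = layers.Dense(64, activation="relu")(features)
output = layers.Dense(1, activation="sigmoid")(x)   # fatigued vs. not fatigued

model = models.Model([front_in, side_in], output)
```

Because each branch pools over time before the concatenation, the front- and side-view sequences do not need to be frame-aligned, which matches the asynchronous capture described above.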
We extract the body keypoints from each frame of the captured gait videos as the most basic feature. We use OpenPose to extract 25 body keypoints (e.g., ankles, knees, hands) from an image [24]. OpenPose is a 2D pose estimator, so each extracted keypoint consists of the x and y coordinates of the joint and a confidence score for the detected point. The extracted body keypoints are in image coordinates.
Our dataset collected from 21 participants is not large enough to train deep learning models, which require a large amount of training data. As a result, we performed augmentation on the extracted body keypoints. Since the keypoints are in image coordinates, we applied popular image augmentation techniques to the keypoint sequences: mirroring along the vertical and horizontal axes, cropping, and rotation by random values. When an augmentation transform is applied to a sequence, it is applied to all keypoint frames of that sequence to preserve the spatial and temporal relationships in the augmented sequence.
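The sketch below illustrates this sequence-level augmentation: one transform is sampled per sequence and applied uniformly to every frame. The frame width, rotation range, and rotation center are illustrative assumptions, not values reported in the paper.

```python
# Hedged sketch of keypoint-sequence augmentation applied consistently across frames.
import numpy as np

def mirror_horizontal(seq, img_width):
    """Mirror every frame across the vertical axis. seq: (frames, 25, 3) with (x, y, conf)."""
    out = seq.copy()
    out[:, :, 0] = img_width - out[:, :, 0]
    return out

def rotate(seq, angle_deg, center):
    """Rotate the (x, y) coordinates of every frame about `center` by the same angle."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    out = seq.copy()
    out[:, :, :2] = (out[:, :, :2] - center) @ rot.T + center
    return out

# Example: one random rotation and a mirror applied to a whole 300-frame sequence.
seq = np.random.rand(300, 25, 3)                     # placeholder keypoint sequence
aug = rotate(mirror_horizontal(seq, img_width=720),
             angle_deg=np.random.uniform(-10, 10),
             center=np.array([360.0, 240.0]))
```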

5. Results

The experiments were performed on a system with an Intel Core i7-8750 quad-core CPU, 16 GB of RAM, and an NVIDIA GTX 1060 GPU with 2560 CUDA cores and 8 GB of graphics memory. This system was used to extract the body keypoints from the videos using OpenPose [24]. As there is no official protocol for the training/testing split, we took the gait data collected from the first 15 participants to train the model and the remaining 6 participants’ gait data to test and evaluate it. Similarly, the augmented data from the first 15 participants were used for training, and the augmented data of the remaining 6 participants were used for testing and validation. The models were trained on 5 such splits of the dataset, and the accuracies of the 5 iterations were averaged and are reported in this section.
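A sketch of this subject-wise split follows; the record structure is hypothetical, but the key point is that samples (and their augmented copies) from a given participant appear in only one of the two sets.

```python
# Sketch of a participant-level train/test split (record fields are placeholders).
def split_by_participant(samples, train_ids):
    """samples: list of dicts with a 'participant' id plus keypoint tensors and a label."""
    train = [s for s in samples if s["participant"] in train_ids]
    test = [s for s in samples if s["participant"] not in train_ids]
    return train, test

# Placeholder records standing in for the real (front view, side view, label) samples.
samples = [{"participant": p, "label": p % 2} for p in range(1, 22)]
train_set, test_set = split_by_participant(samples, train_ids=set(range(1, 16)))
```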
Our proposed model was implemented using the TensorFlow framework [36] and trained for 200 epochs with the Adam optimizer [37]. The learning rate and batch size were 0.001 and 8, respectively. These values were obtained empirically through extensive experimentation. Both 1D-CNN modules in the model are trained using the same hyperparameters. The other models used for comparison were first trained with the same default hyperparameter values as our proposed model and later tuned to obtain better accuracy. Table 1 shows the prediction accuracy for the different models trained using our dataset.
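Continuing the model sketch from Section 4.2, the reported hyperparameters translate into the training call below. The binary cross-entropy loss and the placeholder data arrays are assumptions, not values stated in the paper.

```python
import tensorflow as tf

# `model` is the two-branch network sketched in Section 4.2; the x_* and y_* variables
# stand in for the prepared (num_samples, 300, 75) keypoint tensors and binary labels.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy",       # assumed loss for the binary CF label
              metrics=["accuracy"])
model.fit([x_front_train, x_side_train], y_train,
          validation_data=([x_front_test, x_side_test], y_test),
          epochs=200, batch_size=8)
```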
To the best of our knowledge, this is the first study of its kind to classify cognitive fatigue from gait sequences. We therefore compared the accuracy of our proposed model with deep neural networks such as the 1D-CNN, RNN, and LSTM, which are commonly used to classify spatial or temporal data. Even though both RNNs and LSTMs are good at extracting features from temporal data, the gait sequence has both temporal and spatial components. Moreover, the two video streams that contain an individual’s gait videos are asynchronous. As a result, the LSTM and RNN perform poorly on our dataset. Our proposed model outperforms these models with an average accuracy of 81.64%.

6. Conclusions and Future Work

In this paper, we presented a setup to collect gait data before and after inducing cognitive fatigue using a clinically proven N-back game and a VR game. We also presented a novel 1D-CNN-based model to classify cognitive fatigue from whole-body human gait sequences in the data collected using our proposed setup. The experimental results show that our model performed well despite the small dataset. This study also paves the way for cognitive fatigue detection from gait captured using an RGB camera. In the future, we aim to develop new architectures to improve the accuracy of the prediction model and to define an objective measure of cognitive fatigue in place of a subjective measure such as the VAS.

Author Contributions

Conceptualization, H.R.P. and E.K.; Methodology, H.R.P. and E.K.; Software, H.R.P., E.K., A.J. and S.A.; Validation, H.R.P. and A.J.; Formal analysis, E.K.; Investigation, E.K. and G.N.; Resources, S.A. and G.N.; Data curation, H.R.P., E.K., A.J., S.A. and G.N.; Writing—original draft, H.R.P.; Writing—review & editing, H.R.P., E.K., M.T. and F.M.; Visualization, H.R.P.; Supervision, M.T. and F.M.; Project administration, M.T. and F.M.; Funding acquisition, F.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by National Science Foundation grants 1565328 and 2226164. This material is based upon work by the authors. Any opinions, findings, conclusions, and recommendations expressed in this paper are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).

Institutional Review Board Statement

Participants were recruited from communities at the local University through class announcements. Participants provided informed consent in accordance with procedures approved by the University IRB.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are not publicly available due to legal and privacy reasons.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mullette-Gillman, O.A.; Leong, R.L.; Kurnianingsih, Y.A. Cognitive fatigue destabilizes economic decision making preferences and strategies. PLoS ONE 2015, 10, e0132022. [Google Scholar] [CrossRef] [PubMed]
  2. Sievertsen, H.H.; Gino, F.; Piovesan, M. Cognitive fatigue influences students’ performance on standardized tests. Proc. Natl. Acad. Sci. USA 2016, 113, 2621–2624. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Gilsoul, J.; Libertiaux, V.; Collette, F. Cognitive fatigue in young, middle-aged, and older: Breaks as a way to recover. Appl. Psychol. 2022, 71, 1565–1597. [Google Scholar] [CrossRef]
  4. Higgins, J.S.; Michael, J.; Austin, R.; Åkerstedt, T.; Van Dongen, H.; Watson, N.; Czeisler, C.; Pack, A.I.; Rosekind, M.R. Asleep at the wheel—The road to addressing drowsy driving. Sleep 2017, 40. [Google Scholar] [CrossRef] [Green Version]
  5. Dinges, D.F. An overview of sleepiness and accidents. J. Sleep Res. 1995, 4, 4–14. [Google Scholar] [CrossRef]
  6. McCormick, F.; Kadzielski, J.; Landrigan, C.P.; Evans, B.; Herndon, J.H.; Rubash, H.E. Surgeon fatigue: A prospective analysis of the incidence, risk, and intervals of predicted fatigue-related impairment in residents. Arch. Surg. 2012, 147, 430–435. [Google Scholar] [CrossRef] [Green Version]
  7. Aoki, K.; Nishikawa, H.; Makihara, Y.; Muramatsu, D.; Takemura, N.; Yagi, Y. Physical Fatigue Detection From Gait Cycles via a Multi-Task Recurrent Neural Network. IEEE Access 2021, 9, 127565–127575. [Google Scholar] [CrossRef]
  8. Helbostad, J.L.; Leirfall, S.; Moe-Nilssen, R.; Sletvold, O. Physical fatigue affects gait characteristics in older persons. J. Gerontol. Ser. Biol. Sci. Med Sci. 2007, 62, 1010–1015. [Google Scholar] [CrossRef]
  9. Socie, M.J.; Sosnoff, J.J. Gait variability and multiple sclerosis. Mult. Scler. Int. 2013, 2013, 645197. [Google Scholar] [CrossRef] [Green Version]
  10. Sengupta, A.; Tiwari, A.; Routray, A. Analysis of cognitive fatigue using EEG parameters. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, Jeju, Korea, 11–15 July 2017; pp. 2554–2557. [Google Scholar]
  11. Zadeh, M.Z.; Babu, A.R.; Lim, J.B.; Kyrarini, M.; Wylie, G.; Makedon, F. Towards cognitive fatigue detection from functional magnetic resonance imaging data. In Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 30 June–3 July 2020; pp. 1–2. [Google Scholar]
  12. Sikander, G.; Anwar, S. Driver fatigue detection systems: A review. IEEE Trans. Intell. Transp. Syst. 2018, 20, 2339–2352. [Google Scholar] [CrossRef]
  13. Stöckel, T.; Jacksteit, R.; Behrens, M.; Skripitz, R.; Bader, R.; Mau-Moeller, A. The mental representation of the human gait in young and older adults. Front. Psychol. 2015, 6, 943. [Google Scholar] [CrossRef] [Green Version]
  14. Patel, M.; Lal, S.K.; Kavanagh, D.; Rossiter, P. Applying neural network analysis on heart rate variability data to assess driver fatigue. Expert Syst. Appl. 2011, 38, 7235–7242. [Google Scholar] [CrossRef]
  15. Hjortskov, N.; Rissén, D.; Blangsted, A.K.; Fallentin, N.; Lundberg, U.; Søgaard, K. The effect of mental stress on heart rate variability and blood pressure during computer work. Eur. J. Appl. Physiol. 2004, 92, 84–89. [Google Scholar] [CrossRef]
  16. Li, P.; Meziane, R.; Otis, M.J.D.; Ezzaidi, H.; Cardou, P. A Smart Safety Helmet using IMU and EEG sensors for worker fatigue detection. In Proceedings of the 2014 IEEE International Symposium on Robotic and Sensors Environments (ROSE) Proceedings, Timisoara, Romania, 16–18 October 2014; pp. 55–60. [Google Scholar] [CrossRef]
  17. Marotta, L.; Buurke, J.H.; van Beijnum, B.J.F.; Reenalda, J. Towards machine learning-based detection of running-induced fatigue in real-world scenarios: Evaluation of IMU sensor configurations to reduce intrusiveness. Sensors 2021, 21, 3451. [Google Scholar] [CrossRef]
  18. Sheng, W.; Li, X. Multi-task learning for gait-based identity recognition and emotion recognition using attention enhanced temporal graph convolutional network. Pattern Recognit. 2021, 114, 107868. [Google Scholar] [CrossRef]
  19. Bhattacharya, U.; Mittal, T.; Chandra, R.; Randhavane, T.; Bera, A.; Manocha, D. Step: Spatial temporal graph convolutional networks for emotion perception from gaits. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 1342–1350. [Google Scholar]
  20. Gribble, P.A.; Hertel, J. Effect of lower-extremity muscle fatigue on postural control. Arch. Phys. Med. Rehabil. 2004, 85, 589–592. [Google Scholar] [CrossRef]
  21. Kavanagh, J.J.; Morrison, S.; Barrett, R.S. Lumbar and cervical erector spinae fatigue elicit compensatory postural responses to assist in maintaining head stability during walking. J. Appl. Physiol. 2006, 101, 1118–1126. [Google Scholar] [CrossRef]
  22. Barbieri, F.A.; dos Santos, P.C.R.; Vitório, R.; van Dieën, J.H.; Gobbi, L.T.B. Effect of muscle fatigue and physical activity level in motor control of the gait of young adults. Gait Posture 2013, 38, 702–707. [Google Scholar] [CrossRef] [Green Version]
  23. Grobe, S.; Kakar, R.S.; Smith, M.L.; Mehta, R.; Baghurst, T.; Boolani, A. Impact of cognitive fatigue on gait and sway among older adults: A literature review. Prev. Med. Rep. 2017, 6, 88–93. [Google Scholar] [CrossRef]
  24. Cao, Z.; Hidalgo Martinez, G.; Simon, T.; Wei, S.; Sheikh, Y.A. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 172–186. [Google Scholar] [CrossRef] [Green Version]
  25. Huang, J.; Zhu, Z.; Huang, G. Multi-stage HRNet: Multiple stage high-resolution network for human pose estimation. arXiv 2019, arXiv:1910.05901. [Google Scholar]
  26. Jun, K.; Lee, S.; Lee, D.W.; Kim, M.S. Deep Learning-Based Multimodal Abnormal Gait Classification Using a 3D Skeleton and Plantar Foot Pressure. IEEE Access 2021, 9, 161576–161589. [Google Scholar] [CrossRef]
  27. Narayanan, V.; Manoghar, B.M.; Sashank Dorbala, V.; Manocha, D.; Bera, A. ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 8200–8207. [Google Scholar] [CrossRef]
  28. Teepe, T.; Khan, A.; Gilg, J.; Herzog, F.; Hörmann, S.; Rigoll, G. Gaitgraph: Graph Convolutional Network for Skeleton-Based Gait Recognition. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 2314–2318. [Google Scholar] [CrossRef]
  29. Alotaibi, M.; Mahmood, A. Improved gait recognition based on specialized deep convolutional neural network. Comput. Vis. Image Underst. 2017, 164, 103–110. [Google Scholar] [CrossRef]
  30. Han, J.; Bhanu, B. Individual recognition using gait energy image. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 28, 316–322. [Google Scholar] [CrossRef]
  31. Gao, J.; Gu, P.; Ren, Q.; Zhang, J.; Song, X. Abnormal gait recognition algorithm based on LSTM-CNN fusion network. IEEE Access 2019, 7, 163180–163190. [Google Scholar] [CrossRef]
  32. Ke, Q.; Bennamoun, M.; An, S.; Sohel, F.; Boussaid, F. A new representation of skeleton sequences for 3d action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3288–3297. [Google Scholar]
  33. Bailey, A.; Channon, S.; Beaumont, J. The relationship between subjective fatigue and cognitive fatigue in advanced multiple sclerosis. Mult. Scler. J. 2007, 13, 73–80. [Google Scholar] [CrossRef]
  34. Iskander, J.; Hossny, M.; Nahavandi, S. A Review on Ocular Biomechanic Models for Assessing Visual Fatigue in Virtual Reality. IEEE Access 2018, 6, 19345–19361. [Google Scholar] [CrossRef]
  35. Wewers, M.E.; Lowe, N.K. A critical review of visual analogue scales in the measurement of clinical phenomena. Res. Nurs. Health 1990, 13, 227–236. [Google Scholar] [CrossRef]
  36. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: tensorflow.org (accessed on 15 October 2022).
  37. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
Figure 1. The steps of a human gait cycle [13].
Figure 2. Flow of the data collection steps.
Figure 3. A figure showing the experimental setup.
Figure 4. A figure showing a participant playing Beat Saber, the VR game used in the data collection step to induce cognitive fatigue.
Figure 5. Visualization of a stride using the extracted body keypoints from a video using OpenPose.
Figure 6. Overview of the proposed system.
Figure 7. Architecture of the 1D-CNN block used. The same architecture is used in both blocks 1 and 2 shown in Figure 6.
Table 1. Results comparison.

Method                            Overall Accuracy
Multi-Layer Perceptrons           54.81%
Long Short-Term Memory (LSTM)     58.24%
Recurrent Neural Network (RNN)    63.1%
1D-CNN                            67.5%
Proposed Method                   81.64%