Intelligent Sensors for Human Motion Analysis

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 March 2022) | Viewed by 69646

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Guest Editor
Faculty of Electrical and Computer Engineering, Rzeszow University of Technology, al. Powstańców Warszawy 12, 35-959 Rzeszow, Poland
Interests: human motion tracking; human body pose estimation; particle swarm optimization; parallel and distributed computing; gait recognition

Guest Editor
Department of Computer Graphics, Vision and Digital Systems, Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, 44-100 Gliwice, Poland
Interests: processing and classification of motion capture data; time series analysis; machine learning; computer vision

Guest Editor
Institute of Computer Science, University of Rzeszow, 1 Pigonia Str., 35-310 Rzeszow, Poland
Interests: fall detection; human motion tracking; action recognition

Special Issue Information

Dear Colleagues,

Currently, visual analysis of human motion is one of the most active research topics in computer vision. This great interest stems from the wide spectrum of promising applications in areas such as surveillance systems, medicine, athletic performance analysis, human–computer interaction, and virtual reality. Human motion analysis concerns the detection, tracking, and recognition of people and their activities based on data recorded by various types of sensors, most commonly RGB and depth cameras. Moreover, studies aimed at developing methods for gait and action recognition often use motion capture systems based on active or passive markers, as well as IMU sensors. Such systems are very challenging to develop and, at the same time, hold great promise for addressing research problems, especially when only visual data are used. We therefore welcome the submission of high-quality publications from researchers working on human pose estimation and tracking, as well as related topics such as activity recognition, gait recognition, and human–computer interaction, to name but a few. More precisely, the relevant topics for this Special Issue include (but are not limited to):

  • Human pose estimation
  • Articulated pose tracking
  • Multi-person 3D pose estimation
  • Action recognition
  • Gait recognition
  • Gesture recognition
  • Human fall detection
  • Pose/shape modeling and rendering
  • Future 3D pose prediction
  • Human–computer interaction
  • Synthetic data and data annotation for 3D human pose
  • Application of human motion analysis methods (e.g., robotics, surveillance, medicine).

Dr. Tomasz Krzeszowski
Dr. Adam Świtoński
Dr. Michal Kepski
Prof. Dr. Carlos Tavares Calafate
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, submissions can be made through the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human pose estimation
  • articulated pose tracking
  • human motion tracking
  • action recognition
  • gait recognition
  • gesture recognition
  • human fall detection
  • human–computer interaction
  • markerless motion capture
  • marker-based motion capture

Published Papers (18 papers)


Editorial


4 pages, 177 KiB  
Editorial
Intelligent Sensors for Human Motion Analysis
by Tomasz Krzeszowski, Adam Switonski, Michal Kepski and Carlos T. Calafate
Sensors 2022, 22(13), 4952; https://doi.org/10.3390/s22134952 - 30 Jun 2022
Viewed by 1467
Abstract
Currently, the analysis of human motion is one of the most interesting and active research topics in computer science, especially in computer vision [...] Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)

Research


18 pages, 3209 KiB  
Article
Top-Down System for Multi-Person 3D Absolute Pose Estimation from Monocular Videos
by Amal El Kaid, Denis Brazey, Vincent Barra and Karim Baïna
Sensors 2022, 22(11), 4109; https://doi.org/10.3390/s22114109 - 28 May 2022
Cited by 8 | Viewed by 2829
Abstract
Two-dimensional (2D) multi-person pose estimation and three-dimensional (3D) root-relative pose estimation from a monocular RGB camera have made significant progress recently. Yet, real-world applications require depth estimates and the ability to determine the distances between people in a scene. Therefore, it is necessary to recover the 3D absolute poses of several people. However, this is still a challenge when using cameras from single points of view. Furthermore, the previously proposed systems typically required a significant amount of resources and memory. To overcome these restrictions, we herein propose a real-time framework for multi-person 3D absolute pose estimation from a monocular camera, which integrates a human detector, a 2D pose estimator, a 3D root-relative pose reconstructor, and a root depth estimator in a top-down manner. The proposed system, called Root-GAST-Net, is based on modified versions of the GAST-Net and RootNet networks. The efficiency of the proposed Root-GAST-Net system is demonstrated through quantitative and qualitative evaluations on two benchmark datasets, Human3.6M and MuPoTS-3D. On all evaluated metrics, our experimental results on the MuPoTS-3D dataset outperform the current state of the art by a significant margin, and the system runs in real time at 15 fps on an Nvidia GeForce GTX 1080. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
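The top-down composition this abstract describes (detector → 2D pose → root-relative 3D lifting → root depth) can be sketched in miniature. Every stage below is a hypothetical stand-in for illustration only, not the actual Root-GAST-Net, GAST-Net, or RootNet components:

```python
# Toy sketch of a top-down multi-person 3D absolute pose pipeline.
# All stage implementations are hypothetical stand-ins.

def detect_people(frame):
    # Hypothetical detector: one (x, y, w, h) bounding box per person.
    return [(10, 10, 50, 120), (80, 15, 45, 110)]

def estimate_2d_pose(frame, box):
    # Hypothetical 2D pose stage: a single (x, y) joint at the box center.
    x, y, w, h = box
    return [(x + w / 2.0, y + h / 2.0)]

def lift_root_relative_3d(pose_2d):
    # Hypothetical lifting stage: joints relative to the pelvis (root).
    return [(0.0, 0.0, 0.0)]  # the root joint itself

def estimate_root_depth(frame, box):
    # Hypothetical RootNet-style stage: absolute root depth (arbitrary units);
    # people appearing larger in the image are assumed closer.
    x, y, w, h = box
    return 1000.0 / h

def absolute_poses(frame):
    """Compose the stages top-down: each person's root-relative joints
    are shifted by that person's estimated root depth."""
    results = []
    for box in detect_people(frame):
        pose_2d = estimate_2d_pose(frame, box)
        rel_3d = lift_root_relative_3d(pose_2d)
        z_root = estimate_root_depth(frame, box)
        results.append([(x, y, z + z_root) for (x, y, z) in rel_3d])
    return results

poses = absolute_poses(frame=None)
```

The key point is the final composition step: root-relative joints become absolute once the estimated root depth is added per person.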

29 pages, 3292 KiB  
Article
Detection and Classification of Artifact Distortions in Optical Motion Capture Sequences
by Przemysław Skurowski and Magdalena Pawlyta
Sensors 2022, 22(11), 4076; https://doi.org/10.3390/s22114076 - 27 May 2022
Cited by 3 | Viewed by 1746
Abstract
Optical motion capture systems are prone to errors connected to marker recognition (e.g., occlusion, leaving the scene, or mislabeling). These errors are then corrected in software, but the process is not perfect, resulting in artifact distortions. In this article, we examine four existing types of artifacts and propose a method for the detection and classification of the distortions. The algorithm is based on derivative analysis, low-pass filtering, mathematical morphology, and a loose predictor. The tests involved multiple simulations using synthetically distorted sequences, performance comparisons to human operators (concerning real-life data), and an applicability analysis for distortion removal. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
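The abstract names derivative analysis as one building block of the detector; a minimal sketch of that idea for a single marker coordinate follows. The first-difference rule and the threshold are illustrative assumptions, far simpler than the article's full combination of filtering, morphology, and prediction:

```python
# Crude derivative-based spike detector for a 1-D marker trajectory:
# flag samples whose first difference jumps beyond a threshold.

def detect_spikes(signal, threshold):
    """Return indices of samples reached by a suspiciously large jump."""
    diffs = [b - a for a, b in zip(signal, signal[1:])]
    return [i + 1 for i, d in enumerate(diffs) if abs(d) > threshold]
```

A sudden out-and-back excursion, typical of a mislabeled marker, produces two flagged samples: the jump in and the jump back out.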

21 pages, 16119 KiB  
Article
Facial Motion Analysis beyond Emotional Expressions
by Manuel Porta-Lorenzo, Manuel Vázquez-Enríquez, Ania Pérez-Pérez, José Luis Alba-Castro and Laura Docío-Fernández
Sensors 2022, 22(10), 3839; https://doi.org/10.3390/s22103839 - 19 May 2022
Cited by 4 | Viewed by 2015
Abstract
Facial motion analysis is a research field with many practical applications, and it has developed considerably in recent years. However, most efforts have focused on recognizing basic facial expressions of emotion, neglecting the analysis of facial motions related to non-verbal communication signals. This paper focuses on the classification of facial expressions that are of the utmost importance in sign languages (Grammatical Facial Expressions) but also present in expressive spoken language. We have collected a dataset of Spanish Sign Language sentences and extracted the intervals for three types of Grammatical Facial Expressions: negation, closed queries, and open queries. A study of several deep learning models using different input features on the collected dataset (LSE_GFE) and an external dataset (BUHMAP) shows that GFEs can be learned reliably with Graph Convolutional Networks simply fed with face landmarks. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)

25 pages, 7448 KiB  
Article
Evaluating Automatic Body Orientation Detection for Indoor Location from Skeleton Tracking Data to Detect Socially Occupied Spaces Using the Kinect v2, Azure Kinect and Zed 2i
by Violeta Ana Luz Sosa-León and Angela Schwering
Sensors 2022, 22(10), 3798; https://doi.org/10.3390/s22103798 - 17 May 2022
Cited by 7 | Viewed by 3435
Abstract
Analysing the dynamics in social interactions in indoor spaces entails evaluating spatial–temporal variables from the event, such as location and time. Additionally, social interactions include invisible spaces that we unconsciously acknowledge due to social constraints, e.g., space between people having a conversation with each other. Nevertheless, current sensor arrays focus on detecting the physically occupied spaces from social interactions, i.e., areas inhabited by physically measurable objects. Our goal is to detect the socially occupied spaces, i.e., spaces not physically occupied by subjects and objects but inhabited by the interaction they sustain. We evaluate the social representation of the space structure between two or more active participants, so-called F-Formation for small gatherings. We propose calculating body orientation and location from skeleton joint data sets by integrating depth cameras. The body orientation is derived by integrating the shoulders and spine joint data with head/face rotation data and spatial–temporal information from trajectories. From the physically occupied measurements, we can detect socially occupied spaces. In our user study implementing the system, we compared the capabilities and skeleton tracking datasets from three depth camera sensors, the Kinect v2, Azure Kinect, and Zed 2i. We collected 32 walking patterns for individual and dyad configurations and evaluated the system’s accuracy regarding the intended and socially accepted orientations. Experimental results show accuracy above 90% for the Kinect v2, 96% for the Azure Kinect, and 89% for the Zed 2i for assessing socially relevant body orientation. Our algorithm contributes to the anonymous and automated assessment of socially occupied spaces. The depth sensor system is promising in detecting more complex social structures. These findings impact research areas that study group interactions within complex indoor settings. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
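The geometric core of estimating body orientation from shoulder joints can be sketched as follows. This covers only the shoulder-line part of the idea, assuming a 2D floor-plane projection; it is not the authors' full fusion with head rotation and trajectory data:

```python
# Facing direction taken perpendicular to the shoulder line
# in the x-z floor plane.

import math

def body_orientation_deg(left_shoulder, right_shoulder):
    """Orientation (degrees, in [0, 360)) of the facing direction,
    perpendicular to the left->right shoulder vector."""
    lx, lz = left_shoulder
    rx, rz = right_shoulder
    sx, sz = rx - lx, rz - lz          # shoulder line
    fx, fz = -sz, sx                   # rotate 90 degrees to face forward
    return math.degrees(math.atan2(fz, fx)) % 360.0
```

Swapping left and right shoulders flips the result by 180 degrees, which is why correct skeleton joint labeling matters for this kind of measure.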

22 pages, 13087 KiB  
Article
Pattern Recognition of EMG Signals by Machine Learning for the Control of a Manipulator Robot
by Francisco Pérez-Reynoso, Neín Farrera-Vazquez, César Capetillo, Nestor Méndez-Lozano, Carlos González-Gutiérrez and Emmanuel López-Neri
Sensors 2022, 22(9), 3424; https://doi.org/10.3390/s22093424 - 30 Apr 2022
Cited by 9 | Viewed by 4648
Abstract
Human–Machine Interface (HMI) principles underpin the development of interfaces for assistance or support systems in physiotherapy and rehabilitation. One of the main problems is the degree of customization required when applying a rehabilitation therapy or when adapting an assistance system to the individual characteristics of the users. To address this problem, we propose building a single-channel surface Electromyography (sEMG) database from healthy individuals for neural-network-based recognition of contraction patterns in the biceps brachii muscle region. Each movement is labeled using the One-Hot Encoding technique, which activates a state machine to control the position of an anthropomorphic manipulator robot and validate the response time of the designed HMI. Preliminary results show that the learning curve decreases when the interface is customized. The developed system uses muscle contraction to direct the position of the end effector of a virtual robot. The classification of Electromyography (EMG) signals is used to generate trajectories in real time on a test platform designed in LabVIEW. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
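The labeling-and-control idea in the abstract (one-hot encoded contraction classes driving a state machine that positions the robot) can be sketched as follows; the class names and position increments are hypothetical, not the authors' actual configuration:

```python
# One-hot encoding of contraction classes feeding a simple state machine
# that updates a manipulator's target position. Classes and deltas are
# illustrative assumptions.

CLASSES = ["rest", "weak_contraction", "strong_contraction"]

def one_hot(label):
    """One-hot encode a class label over the fixed class list."""
    return [1 if c == label else 0 for c in CLASSES]

class ManipulatorStateMachine:
    # Hypothetical mapping from contraction class to position increment.
    DELTAS = {"rest": 0, "weak_contraction": 1, "strong_contraction": 5}

    def __init__(self):
        self.position = 0

    def step(self, encoded):
        label = CLASSES[encoded.index(1)]  # decode the one-hot vector
        self.position += self.DELTAS[label]
        return self.position

sm = ManipulatorStateMachine()
trace = [sm.step(one_hot(lbl)) for lbl in
         ["weak_contraction", "strong_contraction", "rest"]]
```

The classifier's output layer naturally produces a vector of this one-hot shape, so the state machine can consume its predictions directly.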

20 pages, 3279 KiB  
Article
Augmentation of Human Action Datasets with Suboptimal Warping and Representative Data Samples
by Dawid Warchoł and Mariusz Oszust
Sensors 2022, 22(8), 2947; https://doi.org/10.3390/s22082947 - 12 Apr 2022
Cited by 1 | Viewed by 1258
Abstract
The popularity of action recognition (AR) approaches and the need for improvement of their effectiveness require the generation of artificial samples addressing the nonlinearity of the time-space, scarcity of data points, or their variability. Therefore, in this paper, a novel approach to time series augmentation is proposed. The method improves the suboptimal warped time series generator algorithm (SPAWNER), introducing constraints based on identified AR-related problems with generated data points. Specifically, the proposed ARSPAWNER removes potential new time series that do not offer additional knowledge to the examples of a class or are created far from the occupied area. The constraints are based on statistics of time series of AR classes and their representative examples inferred with dynamic time warping barycentric averaging technique (DBA). The extensive experiments performed on eight AR datasets using three popular time series classifiers reveal the superiority of the introduced method over related approaches. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
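The rejection rule described above depends on distances between time series; a standard dynamic time warping (DTW) distance, plus a toy nearest-example filter, is sketched below. The threshold-based rule is an illustrative simplification of ARSPAWNER's class statistics, not the paper's exact criterion:

```python
# Classic DTW distance between two 1-D series, used here to reject
# generated samples lying far from every example of their class.

def dtw(a, b):
    """O(len(a)*len(b)) dynamic time warping distance."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def keep_augmented(sample, class_examples, threshold):
    """Keep a generated series only if it is close to at least one
    existing example of its class (toy version of the filtering idea)."""
    return min(dtw(sample, ex) for ex in class_examples) <= threshold
```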

15 pages, 3136 KiB  
Article
Markerless vs. Marker-Based Gait Analysis: A Proof of Concept Study
by Matteo Moro, Giorgia Marchesi, Filip Hesse, Francesca Odone and Maura Casadio
Sensors 2022, 22(5), 2011; https://doi.org/10.3390/s22052011 - 04 Mar 2022
Cited by 31 | Viewed by 6471
Abstract
The analysis of human gait is an important tool in medicine and rehabilitation to evaluate the effects and the progression of neurological diseases resulting in neuromotor disorders. In these fields, the gold-standard techniques adopted to perform gait analysis rely on motion capture systems and markers. However, these systems present drawbacks: they are expensive, time consuming, and they can affect the naturalness of the motion. For these reasons, in the last few years, considerable effort has been spent on studying and implementing markerless systems based on videography for gait analysis. Unfortunately, only a few studies quantitatively compare the differences between markerless and marker-based systems in 3D settings. This work presents a new RGB video-based markerless system leveraging computer vision and deep learning to perform 3D gait analysis, and compares its results with those obtained by a marker-based motion capture system. To this end, we simultaneously acquired, with the two systems, a multimodal dataset of 16 people repeatedly walking in an indoor environment. With the two methods we obtained similar spatio-temporal parameters. The joint angles were comparable, except for a slight underestimation of the maximum flexion for ankle and knee angles. Taken together, these results highlight the possibility of adopting markerless techniques for gait analysis. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
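The spatio-temporal parameters compared in this study include measures such as stride time and cadence; a minimal computation from heel-strike timestamps of one foot might look like the sketch below (detecting the heel strikes themselves is the hard part and is omitted here):

```python
# Stride time and cadence from heel-strike timestamps of a single foot.
# One stride (heel strike to next heel strike of the same foot) spans
# two steps, hence the factor of 120 in the cadence formula.

def gait_parameters(heel_strikes_s):
    """Return (mean stride time in seconds, cadence in steps/minute)."""
    strides = [b - a for a, b in zip(heel_strikes_s, heel_strikes_s[1:])]
    mean_stride = sum(strides) / len(strides)
    cadence = 120.0 / mean_stride
    return mean_stride, cadence
```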

18 pages, 448 KiB  
Article
Application of Fuzzy and Rough Logic to Posture Recognition in Fall Detection System
by Barbara Pȩkala, Teresa Mroczek, Dorota Gil and Michal Kepski
Sensors 2022, 22(4), 1602; https://doi.org/10.3390/s22041602 - 18 Feb 2022
Cited by 10 | Viewed by 1688
Abstract
Considering that the population is aging rapidly, the demand for aging-at-home technology that can provide reliable, unobtrusive monitoring of human activity is expected to expand. This research focuses on improving the solution of the posture detection problem, which is part of a fall detection system. Fall detection, using depth maps obtained by the Microsoft Kinect sensor, is a two-stage method. We concentrate on the first stage of the system, that is, pose recognition from a depth map. For lying-pose detection, a new hybrid FRSystem is proposed. In the system, two rule sets are investigated: the first created based on domain knowledge and the second induced based on rough set theory. Additionally, two inference aggregation approaches are considered, with and without the knowledge measure. The results indicate that the new axiomatic definition of knowledge measure we propose has a positive impact on the effectiveness of inference, and that the rule induction method reduces the number of rules in a set while maintaining that effectiveness. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)

21 pages, 4731 KiB  
Article
Automatic and Efficient Fall Risk Assessment Based on Machine Learning
by Nadav Eichler, Shmuel Raz, Adi Toledano-Shubi, Daphna Livne, Ilan Shimshoni and Hagit Hel-Or
Sensors 2022, 22(4), 1557; https://doi.org/10.3390/s22041557 - 17 Feb 2022
Cited by 10 | Viewed by 3294
Abstract
Automating fall risk assessment, in an efficient, non-invasive manner, specifically in the elderly population, serves as an efficient means for implementing wide screening of individuals for fall risk and determining their need for participation in fall prevention programs. We present an automated and efficient system for fall risk assessment based on a multi-depth camera human motion tracking system, which captures patients performing the well-known and validated Berg Balance Scale (BBS). Trained machine learning classifiers predict the patient’s 14 scores of the BBS by extracting spatio-temporal features from the captured human motion records. Additionally, we used machine learning tools to develop fall risk predictors that enable reducing the number of BBS tasks required to assess fall risk, from 14 to 4–6 tasks, without compromising the quality and accuracy of the BBS assessment. The reduced battery, termed Efficient-BBS (E-BBS), can be performed by physiotherapists in a traditional setting or deployed using our automated system, allowing an efficient and effective BBS evaluation. We report on a pilot study, run in a major hospital, including accuracy and statistical evaluations. We show the accuracy and confidence levels of the E-BBS, as well as the average number of BBS tasks required to reach the accuracy thresholds. The trained E-BBS system was shown to reduce the number of tasks in the BBS test by approximately 50% while maintaining 97% accuracy. The presented approach enables a wide screening of individuals for fall risk in a manner that does not require significant time or resources from the medical community. Furthermore, the technology and machine learning algorithms can be implemented on other batteries of medical tests and evaluations. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)

24 pages, 8401 KiB  
Article
Human Action Recognition: A Paradigm of Best Deep Learning Features Selection and Serial Based Extended Fusion
by Seemab Khan, Muhammad Attique Khan, Majed Alhaisoni, Usman Tariq, Hwan-Seung Yong, Ammar Armghan and Fayadh Alenezi
Sensors 2021, 21(23), 7941; https://doi.org/10.3390/s21237941 - 28 Nov 2021
Cited by 45 | Viewed by 3962
Abstract
Human action recognition (HAR) has gained significant attention recently, as it can be adopted for smart surveillance systems in multimedia. However, HAR is a challenging task because of the variety of human actions in daily life. Various solutions based on computer vision (CV) have been proposed in the literature, but they did not prove successful due to the large video sequences that need to be processed in surveillance systems. The problem is exacerbated in the presence of multi-view cameras. Recently, the development of deep learning (DL)-based systems has shown significant success for HAR, even for multi-view camera systems. In this research work, a DL-based design is proposed for HAR. The proposed design consists of multiple steps, including feature mapping, feature fusion, and feature selection. For the initial feature mapping step, two pre-trained models are considered, DenseNet201 and InceptionV3. Next, the extracted deep features are fused using the Serial based Extended (SbE) approach. Then, the best features are selected using Kurtosis-controlled Weighted KNN. The selected features are classified using several supervised learning algorithms. To show the efficacy of the proposed design, we used several datasets: KTH, IXMAS, WVU, and Hollywood. Experimental results showed that the proposed design achieved accuracies of 99.3%, 97.4%, 99.8%, and 99.9%, respectively, on these datasets. Furthermore, the feature selection step performed better in terms of computational time compared with the state-of-the-art. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)

26 pages, 2094 KiB  
Article
Gap Reconstruction in Optical Motion Capture Sequences Using Neural Networks
by Przemysław Skurowski and Magdalena Pawlyta
Sensors 2021, 21(18), 6115; https://doi.org/10.3390/s21186115 - 12 Sep 2021
Cited by 4 | Viewed by 4938
Abstract
Optical motion capture is a mature contemporary technique for the acquisition of motion data; alas, it is not error-free. Due to technical limitations and occlusions of markers, gaps might occur in such recordings. The article reviews various neural network architectures applied to the gap-filling problem in motion capture sequences within the FBM framework, which provides a representation of the body's kinematic structure. The results are compared with interpolation and matrix completion methods. We found that, for longer sequences, simple linear feedforward neural networks can outperform the other, more sophisticated architectures, but these outcomes might be affected by the small amount of data available for training. We were also able to identify that the acceleration and monotonicity of the input sequence are the parameters that have a notable impact on the obtained results. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
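One of the interpolation baselines mentioned above, linear interpolation across a gap, can be sketched for a single marker coordinate (gaps are marked with None; the constant-boundary handling is an assumption of this sketch, not the article's exact baseline):

```python
# Linear-interpolation gap filler for one marker coordinate over time.

def fill_gaps_linear(series):
    """Fill runs of None by linearly interpolating between the nearest
    known samples on each side; leading/trailing gaps are held constant."""
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    if not known:
        return out
    # Hold boundary values across leading/trailing gaps.
    for i in range(known[0]):
        out[i] = out[known[0]]
    for i in range(known[-1] + 1, len(out)):
        out[i] = out[known[-1]]
    # Interpolate inside each gap between consecutive known samples.
    for lo, hi in zip(known, known[1:]):
        for i in range(lo + 1, hi):
            t = (i - lo) / (hi - lo)
            out[i] = out[lo] + t * (out[hi] - out[lo])
    return out
```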

18 pages, 13093 KiB  
Article
Attention-Based 3D Human Pose Sequence Refinement Network
by Do-Yeop Kim and Ju-Yong Chang
Sensors 2021, 21(13), 4572; https://doi.org/10.3390/s21134572 - 03 Jul 2021
Cited by 6 | Viewed by 3104
Abstract
Three-dimensional human mesh reconstruction from a single video has made much progress in recent years due to the advances in deep learning. However, previous methods still often reconstruct temporally noisy pose and mesh sequences given in-the-wild video data. To address this problem, we propose a human pose refinement network (HPR-Net) based on a non-local attention mechanism. The pipeline of the proposed framework consists of a weight-regression module, a weighted-averaging module, and a skinned multi-person linear (SMPL) module. First, the weight-regression module creates pose affinity weights from a 3D human pose sequence represented in a unit quaternion form. Next, the weighted-averaging module generates a refined 3D pose sequence by performing temporal weighted averaging using the generated affinity weights. Finally, the refined pose sequence is converted into a human mesh sequence using the SMPL module. HPR-Net is a simple but effective post-processing network that can substantially improve the accuracy and temporal smoothness of 3D human mesh sequences obtained from an input video by existing human mesh reconstruction methods. Our experiments show that the noisy results of the existing methods are consistently improved using the proposed method on various real datasets. Notably, our proposed method reduces the pose and acceleration errors of VIBE, the existing state-of-the-art human mesh reconstruction method, by 1.4% and 66.5%, respectively, on the 3DPW dataset. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
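At its core, the refinement step described above is temporal weighted averaging. The sketch below uses a fixed Gaussian affinity in time as an illustrative stand-in for the weights HPR-Net regresses, and treats the pose as a 1-D signal rather than a quaternion sequence:

```python
# Temporal weighted averaging of a noisy 1-D pose signal: each frame is
# replaced by an affinity-weighted mean over all frames, with weights
# decaying in temporal distance (a hand-crafted stand-in for learned
# affinity weights).

import math

def refine_sequence(poses, sigma=1.0):
    """Smooth a sequence by Gaussian-weighted averaging over frames."""
    refined = []
    for t in range(len(poses)):
        weights = [math.exp(-((t - s) ** 2) / (2 * sigma ** 2))
                   for s in range(len(poses))]
        total = sum(weights)
        refined.append(sum(w * p for w, p in zip(weights, poses)) / total)
    return refined
```

Because the weights are normalized, a constant sequence is left unchanged, while an isolated outlier frame is pulled toward its neighbors, which is the temporal-smoothness effect the paper measures as reduced acceleration error.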

16 pages, 6671 KiB  
Article
Design of a Plantar Pressure Insole Measuring System Based on Modular Photoelectric Pressure Sensor Unit
by Bin Ren and Jianwei Liu
Sensors 2021, 21(11), 3780; https://doi.org/10.3390/s21113780 - 29 May 2021
Cited by 10 | Viewed by 4451
Abstract
Accurately perceiving and predicting the parameters related to human walking is very important for man–machine coupled cooperative control systems such as exoskeletons and power prostheses. Plantar pressure data is rich in human gait and posture information and is an essential source of reference information as the input of the exoskeleton control system. Therefore, the proper design of the pressure sensing insole and validation is a big challenge considering the requirements such as convenience, reliability, no interference and so on. In this research, we developed a low-cost modular sensing unit based on the principle of photoelectric sensing and designed a plantar pressure sensing insole to achieve the purpose of sensing human walking gait and posture information. On the one hand, the sensor unit is made of economy-friendly commercial flexible circuits and elastic silicone, and the mechanical and electrical characteristics of the modular sensor unit are evaluated by a self-developed pressure-related calibration system. The calibration results show that the modular sensor based on the photoelectric sensing principle has fast response and negligible hysteresis. On the other hand, we analyzed the area where the plantar pressure is densely distributed. One benefit of the modular sensing unit design is that it is rather convenient to fabricate different insole solutions, so we fabricated and compared several pressure-sensitive insole solutions in this preliminary study. During the dynamic locomotion experiments of wearing the pressure-sensing insole, the time series signal of each sensor unit was collected and analyzed. The results show that the pressure sensing insole based on the photoelectric effect can sense the distribution of the plantar pressure by capturing the deformation of the insole caused by the foot contact during locomotion, and provide reliable gait information for wearable applications. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
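As a hypothetical illustration of how such insole time-series signals can yield gait information, the sketch below segments a summed pressure signal into stance phases with a simple contact threshold. The threshold value, the synthetic signal, and the function name are illustrative assumptions, not the authors' method:

```python
import numpy as np

def detect_stance_phases(pressure, threshold=0.2):
    """Segment a normalized plantar-pressure signal into stance phases.

    pressure:  1-D array of summed insole sensor readings, normalized to [0, 1].
    threshold: fraction of peak pressure above which the foot is in contact.
    Returns a list of (start, end) sample indices, one pair per stance phase.
    """
    contact = pressure > threshold            # boolean foot-contact mask
    edges = np.diff(contact.astype(int))      # +1 = heel strike, -1 = toe off
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if contact[0]:                            # signal begins mid-stance
        starts = np.insert(starts, 0, 0)
    if contact[-1]:                           # signal ends mid-stance
        ends = np.append(ends, len(pressure))
    return list(zip(starts, ends))

# Synthetic gait-like signal: two pressure bursts separated by swing phases.
t = np.linspace(0, 2, 200)
signal = np.clip(np.sin(2 * np.pi * 1.0 * t), 0, None)
phases = detect_stance_phases(signal, threshold=0.2)
```

From the detected stance intervals, quantities such as cadence, stance/swing ratio, and stride time follow directly.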
30 pages, 2076 KiB  
Article
A Baseline for Cross-Database 3D Human Pose Estimation
by Michał Rapczyński, Philipp Werner, Sebastian Handrich and Ayoub Al-Hamadi
Sensors 2021, 21(11), 3769; https://doi.org/10.3390/s21113769 - 28 May 2021
Cited by 15 | Viewed by 3884
Abstract
Vision-based 3D human pose estimation approaches are typically evaluated on datasets that are limited in diversity regarding many factors, e.g., subjects, poses, cameras, and lighting. However, for real-life applications, it would be desirable to create systems that work under arbitrary conditions (“in-the-wild”). To advance towards this goal, we investigated the commonly used datasets HumanEva-I, Human3.6M, and Panoptic Studio, discussed their biases (that is, their limitations in diversity), and illustrated them in cross-database experiments (which we used as a surrogate for roughly estimating in-the-wild performance). For this purpose, we first harmonized the differing skeleton joint definitions of the datasets, reducing the biases and systematic test errors in cross-database experiments. We further proposed a scale normalization method that significantly improved generalization across camera viewpoints, subjects, and datasets. In additional experiments, we investigated the effects of using more or fewer cameras, training with multiple datasets, applying a proposed anatomy-based pose validation step, and using OpenPose as the basis for the 3D pose estimation. The experimental results showed the usefulness of joint harmonization, scale normalization, and virtual camera augmentation in significantly improving cross-database and in-database generalization. At the same time, the experiments showed that there were dataset biases that could not be compensated for, which calls for new datasets covering more diversity. We discussed our results and promising directions for future work.
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
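Scale normalization of this general kind can be sketched as rescaling each skeleton by a reference segment so that pose coordinates become invariant to subject size and camera distance. The joint indices and the choice of a pelvis–neck reference segment below are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def scale_normalize(pose, ref_joints=(0, 1)):
    """Rescale a 3D pose so a reference segment has unit length.

    pose:       (J, 3) array of joint coordinates.
    ref_joints: indices of the two joints defining the reference segment
                (e.g., pelvis and neck).
    Returns the pose centered on the segment midpoint and divided by the
    segment length.
    """
    a, b = pose[ref_joints[0]], pose[ref_joints[1]]
    length = np.linalg.norm(a - b)
    center = (a + b) / 2.0
    return (pose - center) / length

pose = np.array([[0.0, 0.0, 0.0],   # pelvis
                 [0.0, 2.0, 0.0],   # neck
                 [1.0, 2.5, 0.0]])  # head
norm = scale_normalize(pose)
```

Because the same pose at a different scale maps to identical normalized coordinates, a regressor trained on such data need not learn subject- or dataset-specific body sizes.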

26 pages, 4352 KiB  
Article
Non-Contact Monitoring and Classification of Breathing Pattern for the Supervision of People Infected by COVID-19
by Ariana Tulus Purnomo, Ding-Bing Lin, Tjahjo Adiprabowo and Willy Fitra Hendria
Sensors 2021, 21(9), 3172; https://doi.org/10.3390/s21093172 - 03 May 2021
Cited by 36 | Viewed by 5789
Abstract
During the pandemic of coronavirus disease 2019 (COVID-19), medical practitioners need non-contact devices to reduce the risk of spreading the virus. People with COVID-19 usually experience fever and have difficulty breathing. Unsupervised care of patients with respiratory problems can be a major contributor to a rising death rate. Frequency-modulated continuous-wave (FMCW) radar, which transmits a periodic, linearly increasing frequency chirp, is a radar technology offering low-power operation and high-resolution detection that can sense even tiny movements. In this study, we used FMCW radar to develop a non-contact medical device that monitors and classifies breathing patterns in real time. Patients with a breathing disorder have unusual breathing characteristics that cannot be represented by the breathing rate alone. Thus, we created an eXtreme Gradient Boosting (XGBoost) classification model and adopted Mel-frequency cepstral coefficient (MFCC) feature extraction to classify breathing-pattern behavior. XGBoost is an ensemble machine-learning technique with a fast execution time and good scalability for predictions. In this study, MFCC feature extraction assists machine learning in extracting the features of the breathing signal. Based on the results, the system achieved acceptable accuracy. Thus, our proposed system could potentially be used to detect and monitor respiratory problems in patients with COVID-19, asthma, and other conditions.
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
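The MFCC feature-extraction step can be sketched in NumPy roughly as follows: power spectrum, triangular mel filterbank, log, then a DCT-II. The filter counts, frame, and synthetic breathing signal are illustrative assumptions, and the XGBoost classification stage is omitted:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, fs, n_filters=12, n_coeffs=5):
    """Compute MFCC-style coefficients for one windowed signal frame."""
    power = np.abs(np.fft.rfft(frame)) ** 2               # power spectrum
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    # Triangular filterbank with centers spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2), n_filters + 2)
    hz_pts = mel_to_hz(mel_pts)
    energies = np.empty(n_filters)
    for i in range(n_filters):
        lo, mid, hi = hz_pts[i], hz_pts[i + 1], hz_pts[i + 2]
        up = np.clip((freqs - lo) / (mid - lo), 0, 1)     # rising edge
        down = np.clip((hi - freqs) / (hi - mid), 0, 1)   # falling edge
        energies[i] = np.sum(power * np.minimum(up, down))
    log_e = np.log(energies + 1e-10)
    # DCT-II of the log filterbank energies yields the cepstral coefficients.
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), n + 0.5) / n_filters)
    return basis @ log_e

fs = 50.0                                 # radar-derived signal sampling rate
t = np.arange(0, 10, 1.0 / fs)
breath = np.sin(2 * np.pi * 0.3 * t)      # ~18 breaths/min sinusoid
coeffs = mfcc(breath * np.hanning(len(breath)), fs)
```

The resulting coefficient vectors, computed per frame, would then serve as the feature matrix fed to the XGBoost classifier.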

14 pages, 454 KiB  
Article
Combined Regularized Discriminant Analysis and Swarm Intelligence Techniques for Gait Recognition
by Tomasz Krzeszowski and Krzysztof Wiktorowicz
Sensors 2020, 20(23), 6794; https://doi.org/10.3390/s20236794 - 27 Nov 2020
Cited by 2 | Viewed by 2453
Abstract
In the gait recognition problem, most studies are devoted to developing gait descriptors rather than introducing new classification methods. This paper proposes hybrid methods that combine regularized discriminant analysis (RDA) and swarm intelligence techniques for gait recognition. The purpose of this study is to develop strategies that achieve better gait recognition results than classical classification methods. In our approach, particle swarm optimization (PSO), grey wolf optimization (GWO), and the whale optimization algorithm (WOA) are used. These techniques tune the observation weights and hyperparameters of the RDA method to minimize the objective function. The experiments conducted on the GPJATK dataset confirmed the validity of the proposed concept.
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
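A minimal sketch of the idea, assuming a standard global-best PSO and a quadratic stand-in for the cross-validated RDA error over its regularization hyperparameters (a real implementation would train and score RDA inside the objective):

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_error(params):
    """Stand-in for cross-validated RDA classification error as a function
    of its two regularization hyperparameters (lambda, gamma)."""
    lam, gam = params
    return (lam - 0.3) ** 2 + (gam - 0.7) ** 2

def pso(objective, bounds, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    """Global-best particle swarm optimization over a box-bounded domain."""
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()         # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best, err = pso(cv_error, bounds=[(0, 1), (0, 1)])
```

GWO and WOA slot into the same pattern: only the position-update rule inside the loop changes, while the objective (cross-validated RDA error) stays the same.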

Review

Jump to: Editorial, Research

20 pages, 2800 KiB  
Review
Applications of Pose Estimation in Human Health and Performance across the Lifespan
by Jan Stenum, Kendra M. Cherry-Allen, Connor O. Pyles, Rachel D. Reetzke, Michael F. Vignos and Ryan T. Roemmich
Sensors 2021, 21(21), 7315; https://doi.org/10.3390/s21217315 - 03 Nov 2021
Cited by 43 | Viewed by 9364
Abstract
The emergence of pose estimation algorithms represents a potential paradigm shift in the study and assessment of human movement. Human pose estimation algorithms leverage advances in computer vision to track human movement automatically from simple videos recorded using common household devices with relatively low-cost cameras (e.g., smartphones, tablets, laptop computers). In our view, these technologies offer clear and exciting potential to make measurement of human movement substantially more accessible; for example, a clinician could perform a quantitative motor assessment directly in a patient’s home, a researcher without access to expensive motion capture equipment could analyze movement kinematics using a smartphone video, and a coach could evaluate player performance with video recordings directly from the field. In this review, we combine expertise and perspectives from physical therapy, speech-language pathology, movement science, and engineering to provide insight into applications of pose estimation in human health and performance. We focus specifically on applications in areas of human development, performance optimization, injury prevention, and motor assessment of persons with neurologic damage or disease. We review relevant literature, share interdisciplinary viewpoints on future applications of these technologies to improve human health and performance, and discuss perceived limitations.
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
