
Special Issue "Inertial Sensors for Activity Recognition and Classification"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 December 2019).

Special Issue Editors

Dr. Angelo Maria Sabatini
Guest Editor
Scuola Superiore Sant'Anna di Studi Universitari e di Perfezionamento, Pisa, Italy
Interests: wearable sensor systems for human motion capture; magneto-inertial measurement units; computational methods for wearable sensor systems; multisensor fusion
Dr. Andrea Mannini
Guest Editor
The BioRobotics Institute, Scuola Superiore Sant’Anna, Piazza Martiri della Libertà 33, 56124 Pisa, Italy
Interests: wearable sensors; machine learning; activity recognition; inertial sensors; movement analysis; gait parameters estimation; automatic early detection of gait alterations; sports bioengineering; mobile health

Special Issue Information

Dear Colleagues,

Inertial sensors and inertial measurement units are attracting increasing interest among practitioners in several related technical fields that require the capability to analyze human movement patterns: biomechanics, clinical biomechanics, sport, physical medicine and rehabilitation, and telehealth, to name just a few.

This interest is also due to the widespread availability of personal devices with embedded sensors (e.g., smartphones, smartwatches) or, more generally, of wearable sensor systems, both of which offer unique opportunities for the acquisition, logging, processing, and transmission of movement data.

A feature common to several applications in the mentioned fields is the capability of performing automatic recognition (i.e., understanding which activity is being performed) and classification (i.e., understanding how the recognized activity is performed), especially in ecological conditions. Such conditions recur when the activities of interest are carried out without any undue constraint on their execution, by persons who may be either healthy or affected by movement disorders.

Although significant improvements have been made, especially in the past few years, a number of issues remain to be solved, either technologically or, perhaps more urgently, methodologically.

This Special Issue aims to highlight advances in the development and validation of computational methods that specifically address the problem of automatic recognition and classification of human movement patterns using inertial sensors, without restriction on the prospective applications. Topics include, but are not limited to:

  • Machine learning methods for automatic recognition and classification;
  • Novel methodologies for personalized/adaptive classifier training and validation in ecological conditions;
  • Sensor fusion and multi-sensor integration methods for the extraction of relevant signal features;
  • Detection of specific events or conditions, such as gestures, falls, gait disturbances, etc.;
  • Signal processing for biofeedback and actuation.

Prof. Angelo Maria Sabatini
Dr. Andrea Mannini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • inertial sensors
  • accelerometers and gyroscopes
  • motion capture
  • machine learning
  • sensor fusion

Published Papers (14 papers)


Research

Open Access Article
An Open-Source 7-DOF Wireless Human Arm Motion-Tracking System for Use in Robotics Research
Sensors 2020, 20(11), 3082; https://doi.org/10.3390/s20113082 - 29 May 2020
Abstract
To extend the choice of inertial motion-tracking systems freely available to researchers and educators, this paper presents an alternative open-source design of a wearable 7-DOF wireless human arm motion-tracking system. Unlike traditional inertial motion-capture systems, the presented system employs a hybrid combination of two inertial measurement units and one potentiometer for tracking a single arm. The sequence of three design phases described in the paper demonstrates how the general concept of a portable human arm motion-tracking system was transformed into an actual prototype, by employing a modular approach with independent wireless data transmission to a control PC for signal processing and visualization. Experimental results, together with an application case study on real-time robot-manipulator teleoperation, confirm the applicability of the developed arm motion-tracking system for facilitating robotics research. The presented arm-tracking system also has potential to be employed in mechatronic system design education and related research activities. The system CAD design models and program codes are publicly available online and can be used by robotics researchers and educators as a design platform to build their own arm-tracking solutions for research and educational purposes. Full article
(This article belongs to the Special Issue Inertial Sensors for Activity Recognition and Classification)

Open Access Article
Wearable Sensor-Based Gait Analysis for Age and Gender Estimation
Sensors 2020, 20(8), 2424; https://doi.org/10.3390/s20082424 - 24 Apr 2020
Abstract
Wearable sensor-based systems and devices have been expanded in different application domains, especially in the healthcare arena. Automatic age and gender estimation has several important applications. Gait has been demonstrated as a profound motion cue for various applications. A gait-based age and gender estimation challenge was launched in the 12th IAPR International Conference on Biometrics (ICB), 2019. In this competition, 18 teams initially registered from 14 countries. The goal of this challenge was to find smart approaches to deal with age and gender estimation from sensor-based gait data. For this purpose, we employed a large wearable sensor-based gait dataset, which has 745 subjects (357 females and 388 males), from 2 to 78 years old, in the training dataset and 58 subjects (19 females and 39 males) in the test dataset. It covers several walking patterns. The gait data sequences were collected from three IMUZ sensors, which were placed on a waist belt or at the top of a backpack. Ten teams submitted 67 solutions for age and gender estimation. This paper extensively analyzes the methods and achieved results of the various approaches. Based on this analysis, we found that deep learning-based solutions led the competition compared with conventional handcrafted methods. The best result achieved a 24.23% prediction error for gender estimation and a 5.39 mean absolute error for age estimation, by employing an angle-embedded gait dynamic image and a temporal convolution network. Full article

Open Access Article
Using Domain Knowledge for Interpretable and Competitive Multi-Class Human Activity Recognition
Sensors 2020, 20(4), 1208; https://doi.org/10.3390/s20041208 - 22 Feb 2020
Abstract
Human activity recognition (HAR) has become an increasingly popular application of machine learning across a range of domains. Typically the HAR task that a machine learning algorithm is trained for requires separating multiple activities such as walking, running, sitting, and falling from each other. Despite a large body of work on multi-class HAR, and the well-known fact that the performance on a multi-class problem can be significantly affected by how it is decomposed into a set of binary problems, there has been little research into how the choice of multi-class decomposition method affects the performance of HAR systems. This paper presents the first empirical comparison of multi-class decomposition methods in a HAR context by estimating the performance of five machine learning algorithms when used in their multi-class formulation, with four popular multi-class decomposition methods, five expert hierarchies—nested dichotomies constructed from domain knowledge—or an ensemble of expert hierarchies on a 17-class HAR data-set which consists of features extracted from tri-axial accelerometer and gyroscope signals. We further compare performance on two binary classification problems, each based on the topmost dichotomy of an expert hierarchy. The results show that expert hierarchies can indeed compete with one-vs-all, both on the original multi-class problem and on a more general binary classification problem, such as that induced by an expert hierarchy’s topmost dichotomy. Finally, we show that an ensemble of expert hierarchies performs better than one-vs-all and comparably to one-vs-one, despite being of lower time and space complexity, on the multi-class problem, and outperforms all other multi-class decomposition methods on the two dichotomous problems. Full article
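The expert-hierarchy (nested-dichotomy) idea described in this abstract can be sketched in a few lines: a tree of binary splits, built from domain knowledge, with a binary classifier at each node. The sketch below uses a hypothetical one-dimensional feature (acceleration variance), invented class values, and a trivial nearest-centroid rule at each node; none of this is the paper's data or implementation.

```python
# Toy training data: acceleration-variance feature per activity
# (values invented for illustration).
train = {"walk": [1.8, 2.1, 2.0], "run": [4.9, 5.2, 5.0],
         "sit": [0.20, 0.22, 0.18], "lie": [0.05, 0.06, 0.04]}

def centroid(labels):
    """Mean feature value over all samples of the given classes."""
    vals = [v for label in labels for v in train[label]]
    return sum(vals) / len(vals)

# Expert hierarchy built from domain knowledge: first separate
# ambulatory from sedentary activities, then refine each branch.
tree = (("walk", "run"), ("sit", "lie"))

def classify(x):
    """Descend the nested dichotomy with a nearest-centroid rule."""
    left, right = tree
    branch = left if abs(x - centroid(left)) < abs(x - centroid(right)) else right
    a, b = branch
    return a if abs(x - centroid([a])) < abs(x - centroid([b])) else b

print(classify(5.1))   # -> run
print(classify(0.07))  # -> lie
```

Any binary learner could replace the centroid rule at each node; the point is that the decomposition structure, not the node classifier, is what encodes the domain knowledge.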

Open Access Article
Zero-Shot Human Activity Recognition Using Non-Visual Sensors
Sensors 2020, 20(3), 825; https://doi.org/10.3390/s20030825 - 04 Feb 2020
Cited by 1
Abstract
Due to significant advances in sensor technology, studies towards activity recognition have gained interest and maturity in the last few years. Existing machine learning algorithms have demonstrated promising results by classifying activities whose instances have already been seen during training. Activity recognition methods based on real-life settings should cover a growing number of activities in various domains, whereby a significant part of instances will not be present in the training data set. However, covering all possible activities in advance is a complex and expensive task. Concretely, we need a method that can extend the learning model to detect unseen activities without prior knowledge regarding sensor readings for those previously unseen activities. In this paper, we introduce an approach that leverages sensor data to discover new unseen activities which were not present in the training set. We show that sensor readings can lead to promising results for zero-shot learning, whereby the necessary knowledge can be transferred from seen to unseen activities by using semantic similarity. The evaluation conducted on two data sets extracted from the well-known CASAS datasets shows that the proposed zero-shot learning approach achieves high performance in recognizing unseen (i.e., not present in the training dataset) new activities. Full article
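The transfer from seen to unseen activities via semantic similarity can be illustrated with a small sketch: score an unseen label by combining the seen-class classifier confidences with label-to-label similarity in a semantic space. The attribute vectors, activity names, and weighting scheme below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical semantic attribute vectors per activity
# (e.g., involves-water, involves-motion, in-kitchen, at-night).
semantic = {
    "cook":        np.array([1.0, 0.4, 1.0, 0.1]),
    "sleep":       np.array([0.0, 0.0, 0.0, 1.0]),
    "wash_dishes": np.array([1.0, 0.3, 1.0, 0.0]),  # unseen during training
}
seen = ["cook", "sleep"]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_predict(seen_scores, candidates):
    """Rank candidate labels by classifier confidence on the seen
    classes, weighted by semantic similarity to each candidate."""
    return max(candidates,
               key=lambda c: sum(seen_scores[s] * cosine(semantic[s], semantic[c])
                                 for s in seen))

# A sensor window that the seen-class model scores as "cook-like"
# is mapped to the semantically closest unseen label:
scores = {"cook": 0.9, "sleep": 0.1}
print(zero_shot_predict(scores, ["wash_dishes", "sleep"]))  # -> wash_dishes
```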

Open Access Article
Real-time Smartphone Activity Classification Using Inertial Sensors—Recognition of Scrolling, Typing, and Watching Videos While Sitting or Walking
Sensors 2020, 20(3), 655; https://doi.org/10.3390/s20030655 - 24 Jan 2020
Cited by 1
Abstract
By developing awareness of the activities that a user is performing on their smartphone, such as scrolling feeds, typing, and watching videos, we can develop application features that are beneficial to users, such as personalization. It is currently not possible to access real-time smartphone activities directly, due to standard smartphone privileges, and if internal movement sensors can detect them, there may be implications for access policies. Our research seeks to understand whether the sensor data from existing smartphone inertial measurement unit (IMU) sensors (triaxial accelerometers, gyroscopes, and magnetometers) can be used to classify typical human smartphone activities. We designed and conducted a study with human participants which used an Android app to collect motion data during scrolling, typing, and watching videos, while walking or seated, together with a baseline of smartphone non-use while sitting and walking. We then trained a machine learning (ML) model to perform real-time activity recognition of those eight states. We investigated various algorithms and parameters for the best accuracy. Our optimal solution achieved an accuracy of 78.6% with the Extremely Randomized Trees algorithm, data sampled at 50 Hz, and 5-s windows. We conclude by discussing the viability of using IMU sensors to recognize common smartphone activities. Full article
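The paper's sampling and windowing choices (50 Hz, 5-s windows) translate directly into a feature-extraction step like the one below. The feature set shown (mean, standard deviation, min, max per window) is a common generic choice, not necessarily the exact set used in the study; the resulting feature matrix would then feed a classifier such as scikit-learn's ExtraTreesClassifier.

```python
import numpy as np

FS = 50        # sampling rate in Hz, as in the paper
WIN = 5 * FS   # 5-second windows -> 250 samples per window

def window_features(signal):
    """Split a 1-D sensor stream into non-overlapping 5 s windows and
    compute simple per-window features (mean, std, min, max)."""
    n = len(signal) // WIN
    windows = signal[: n * WIN].reshape(n, WIN)
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1),
                            windows.min(axis=1), windows.max(axis=1)])

rng = np.random.default_rng(0)
accel = rng.normal(0.0, 1.0, 20 * FS)  # 20 s of synthetic accelerometer data
X = window_features(accel)
print(X.shape)  # (4, 4): four windows, four features each
```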

Open Access Feature Paper Article
Double-Tap Interaction as an Actuation Mechanism for On-Demand Cueing in Parkinson’s Disease
Sensors 2019, 19(23), 5167; https://doi.org/10.3390/s19235167 - 26 Nov 2019
Abstract
Freezing of Gait (FoG) is one of the most debilitating symptoms of Parkinson’s disease and is an important contributor to falls. When the management of freezing episodes cannot be achieved through medication or surgery, non-pharmacological methods, such as cueing, have emerged as effective techniques that ameliorate FoG. The use of On-Demand cueing systems (systems that only provide cueing stimuli during a FoG episode) has received attention in recent years. For such systems, the most common method of triggering the onset of cueing stimuli utilizes autonomous real-time FoG detection algorithms. In this article, we assessed the potential of a simple double-tap gesture interaction to trigger the onset of cueing stimuli. The intended purpose of our study was to validate the use of double-tap gesture interaction to facilitate self-activated On-Demand cueing. We present analyses that assess whether people with Parkinson’s (PwP) can perform a double-tap gesture, whether the gesture can be detected using an accelerometer’s embedded gestural interaction recognition function, and whether the action of performing the gesture aggravates FoG episodes. Our results demonstrate that a double-tap gesture may provide an effective actuation method for triggering On-Demand cueing. This opens up the potential future development of self-activated cueing devices as a method of On-Demand cueing for PwP and others. Full article

Open Access Article
Human Activities and Postures Recognition: From Inertial Measurements to Quaternion-Based Approaches
Sensors 2019, 19(19), 4058; https://doi.org/10.3390/s19194058 - 20 Sep 2019
Cited by 1
Abstract
This paper presents two approaches to assess the effect of the number of inertial sensors and their placement on the recognition of human postures and activities. Inertial and Magnetic Measurement Units (IMMUs)—which consist of a triad of three-axis accelerometer, three-axis gyroscope, and three-axis magnetometer sensors—are used in this work. Five IMMUs are initially used and attached to different body segments. Placements of up to three IMMUs are then considered: back, left foot, and left thigh. The subspace k-nearest neighbors (KNN) classifier is used to achieve the supervised learning process and the recognition task. In the first approach, we feed raw data from the three-axis accelerometer and three-axis gyroscope into the classifier without any filtering or pre-processing, unlike what is usually reported in the state of the art, where statistical features are computed instead. Results show the efficiency of this method for the recognition of the studied activities and postures. With the proposed algorithm, more than 80% of the activities and postures are correctly classified using one IMMU, placed on the lower back, left thigh, or left foot, and more than 90% when combining all three placements. In the second approach, we extract attitude, in terms of quaternions, from the IMMUs in order to achieve the recognition process more precisely. The obtained accuracy results are compared to those obtained when only raw data are exploited. Results show that the use of attitude significantly improves the performance of the classifier, especially for certain specific activities. In that case, it was further shown that using a smaller number of features, with quaternions, in the recognition process leads to a lower computation time and better accuracy. Full article
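Attitude expressed as a quaternion is typically obtained by integrating the gyroscope's angular rate; a minimal sketch of that integration step is shown below. The actual fusion filter used in the study is not specified here, and magnetometer/accelerometer corrections are omitted, so this is only the core update rule.

```python
import math

def quat_mul(q, r):
    """Hamilton product of two (w, x, y, z) quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def integrate_gyro(q, omega, dt):
    """Advance attitude quaternion q by one gyroscope sample
    omega = (wx, wy, wz) in rad/s over dt seconds."""
    wx, wy, wz = omega
    norm = math.sqrt(wx*wx + wy*wy + wz*wz)
    if norm < 1e-12:
        return q
    half = 0.5 * norm * dt
    s = math.sin(half) / norm
    return quat_mul(q, (math.cos(half), wx*s, wy*s, wz*s))

# Rotating about z at pi/2 rad/s for 1 s yields a 90-degree yaw:
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(100):
    q = integrate_gyro(q, (0.0, 0.0, math.pi / 2), 0.01)
print(q)  # ~ (0.7071, 0.0, 0.0, 0.7071)
```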

Open Access Article
Validation of Wearable Sensors during Team Sport-Specific Movements in Indoor Environments
Sensors 2019, 19(16), 3458; https://doi.org/10.3390/s19163458 - 07 Aug 2019
Cited by 2
Abstract
The aim of this study was to determine possible influences, including data processing and sport-specific demands, on the validity of acceleration measures by an inertial measurement unit (IMU) in indoor environments. IMU outputs were compared to a three-dimensional (3D) motion analysis (MA) system and processed with two sensor fusion algorithms (Kalman filter, KF; Complementary filter, CF) at temporal resolutions of 100, 10, and 5 Hz. Athletes performed six team sport-specific movements whilst wearing a single IMU. Mean and peak acceleration magnitudes were analyzed. Over all trials (n = 1093), KF data overestimated MA resultant acceleration by 0.42 ± 0.31 m∙s−2 for mean and 4.18 ± 3.68 m∙s−2 for peak values, while CF processing showed errors of up to 0.57 ± 0.41 m∙s−2 and −2.31 ± 2.25 m∙s−2, respectively. Resampling to 5 Hz decreased the absolute error by about 14% for mean and 56% for peak values. Still, higher acceleration magnitudes led to a large increase in error. These results indicate that IMUs can be used for assessing accelerations in indoor team sports with acceptable means. Application of a CF and resampling to 5 Hz is recommended. High-acceleration magnitudes impair validity to a large degree and should be interpreted with caution. Full article
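A complementary filter of the kind compared in this study fuses the drift-free but noisy accelerometer tilt with the smooth but drifting gyroscope integral. The sketch below shows the standard one-axis form; the weight ALPHA and the bias value are illustrative assumptions, not parameters from the paper.

```python
DT = 0.01      # 100 Hz sample period
ALPHA = 0.98   # filter weight (assumed for illustration)

def complementary_filter(gyro_rates, accel_angles, theta0=0.0):
    """Blend integrated gyro rate (rad/s) with accelerometer-derived
    tilt (rad): the gyro dominates short-term, while the accelerometer
    slowly pulls the estimate back and cancels drift."""
    theta, out = theta0, []
    for w, a in zip(gyro_rates, accel_angles):
        theta = ALPHA * (theta + w * DT) + (1 - ALPHA) * a
        out.append(theta)
    return out

# True angle 0.5 rad, stationary, but the gyro has a 0.05 rad/s bias:
n = 2000
est = complementary_filter([0.05] * n, [0.5] * n)
print(est[-1])  # settles near 0.52: a small bias-induced offset
                # instead of the unbounded drift of pure integration
```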

Open Access Feature Paper Article
Using Recurrent Neural Networks to Compare Movement Patterns in ADHD and Normally Developing Children Based on Acceleration Signals from the Wrist and Ankle
Sensors 2019, 19(13), 2935; https://doi.org/10.3390/s19132935 - 03 Jul 2019
Cited by 2
Abstract
Attention deficit and hyperactivity disorder (ADHD) is a neurodevelopmental condition that affects, among other things, the movement patterns of children suffering from it. Inattention, hyperactivity, and impulsive behaviors, the major symptoms characterizing ADHD, result not only in differences in activity levels but also in the activity patterns themselves. This paper proposes and trains a Recurrent Neural Network (RNN) to characterize the movement patterns of normally developing children and uses the trained RNN to assess differences in the movement patterns of children with ADHD. Each child is monitored for 24 consecutive hours, on a normal school day, wearing four tri-axial accelerometers (one at each wrist and ankle). Results for both medicated and non-medicated children with ADHD, and for different activity levels, are presented. While the movement patterns of non-medicated participants diagnosed with ADHD showed larger differences from those of normally developing participants, those differences were only statistically significant for medium-intensity movements. On the other hand, the medicated participants with ADHD showed statistically different behavior for low-intensity movements. Full article

Open Access Article
Real Time Estimation of the Pose of a Lower Limb Prosthesis from a Single Shank Mounted IMU
Sensors 2019, 19(13), 2865; https://doi.org/10.3390/s19132865 - 27 Jun 2019
Cited by 2
Abstract
The command of a microprocessor-controlled lower limb prosthesis classically relies on the gait mode recognition. Real time computation of the pose of the prosthesis (i.e., attitude and trajectory) is useful for the correct identification of these modes. In this paper, we present and evaluate an algorithm for the computation of the pose of a lower limb prosthesis, under the constraints of real time applications and limited computing resources. This algorithm uses a nonlinear complementary filter with a variable gain to estimate the attitude of the shank. The trajectory is then computed from the double integration of the accelerometer data corrected from the kinematics of a model of inverted pendulum rolling on a curved arc foot. The results of the proposed algorithm are evaluated against the optoelectronic measurements of walking trials of three people with transfemoral amputation. The root mean square error (RMSE) of the estimated attitude is around 3°, close to the Kalman-based algorithm results reported in similar conditions. The real time correction of the integration of the inertial measurement unit (IMU) acceleration decreases the trajectory error by a factor of 2.5 compared to its direct integration which will result in an improvement of the gait mode recognition. Full article
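The abstract's point about correcting the double integration can be seen from how quickly uncorrected integration drifts: even a small constant accelerometer bias produces a position error growing as 0.5·b·t². A minimal sketch (rectangle-rule integration, with an invented bias value):

```python
def double_integrate(acc, dt):
    """Integrate acceleration twice (rectangle rule) to get velocity
    and position; a constant bias b makes position error grow ~0.5*b*t**2."""
    v = p = 0.0
    vel, pos = [], []
    for a in acc:
        v += a * dt
        p += v * dt
        vel.append(v)
        pos.append(p)
    return vel, pos

dt = 0.01
bias = 0.1                                   # m/s^2 accelerometer bias
_, pos = double_integrate([bias] * 100, dt)  # 1 s of standstill
print(pos[-1])  # ~0.05 m of spurious displacement after just 1 s
```

This quadratic drift is why the authors anchor the integration to a kinematic model (an inverted pendulum rolling on a curved arc foot) rather than integrating the IMU acceleration directly.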

Open Access Article
Real-Time Drink Trigger Detection in Free-living Conditions Using Inertial Sensors
Sensors 2019, 19(9), 2145; https://doi.org/10.3390/s19092145 - 09 May 2019
Cited by 4
Abstract
Despite the importance of maintaining an adequate hydration status, water intake is frequently neglected due to the fast pace of people’s lives. For the elderly, poor water intake can be even more concerning, not only due to the damaging impact of dehydration, but also since seniors’ hydration regulation mechanisms tend to be less efficient. This work focuses on the recognition of the pre-drinking hand-to-mouth movement (a drink trigger) with two main objectives: predict the occurrence of drinking events in real-time and free-living conditions, and assess the potential of using this method to trigger an external component for estimating the amount of fluid intake. This shall contribute towards the efficiency of more robust multimodal approaches addressing the problem of water intake monitoring. The system, based on a single inertial measurement unit placed on the forearm, is unobtrusive, user-independent, and lightweight enough for real-time mobile processing. Drinking events outside meal periods were detected with an F-score of 97% in an offline validation with data from 12 users, and 85% in a real-time free-living validation with five other subjects, using a random forest classifier. Our results also reveal that the algorithm first detects the hand-to-mouth movement 0.70 s before the occurrence of the actual sip of the drink, proving that this approach can have further applications and enable more robust and complete fluid intake monitoring solutions. Full article

Open Access Article
On Placement, Location and Orientation of Wrist-Worn Tri-Axial Accelerometers during Free-Living Measurements
Sensors 2019, 19(9), 2095; https://doi.org/10.3390/s19092095 - 06 May 2019
Cited by 1
Abstract
Wearable accelerometers have recently become a standalone tool for the objective assessment of physical activity (PA). In free-living studies, accelerometers are placed by protocol on a pre-defined body location (e.g., non-dominant wrist). However, the protocol is not always followed, e.g., the sensor can be moved between wrists or reattached in a different orientation. Such protocol violations often result in PA miscalculation. We propose an approach, PLOE (“Placement, Location and Orientation Evaluation method”), to determine the sensor position using statistical features from the raw accelerometer measurements. We compare the estimated position with the study protocol and identify discrepancies. We apply PLOE to the measurements collected from 45 older adults who wore ActiGraph GT3X+ accelerometers on the left and right wrist for seven days. We found that 15.6% of participants who wore accelerometers violated the protocol for one or more days. The sensors were worn on the wrong hand during 6.9% of the days of simultaneous wearing of devices. During the periods of discrepancies, the daily PA was miscalculated by more than 20%. Our findings show that correct placement of the device has a significant effect on the PA estimates. These results demonstrate a need for the evaluation of sensor position. Full article

Open Access Article
Hidden Markov Model-Based Smart Annotation for Benchmark Cyclic Activity Recognition Database Using Wearables
Sensors 2019, 19(8), 1820; https://doi.org/10.3390/s19081820 - 16 Apr 2019
Cited by 4
Abstract
Activity monitoring using wearables is becoming ubiquitous, although accurate cycle level analysis, such as step-counting and gait analysis, are limited by a lack of realistic and labeled datasets. The effort required to obtain and annotate such datasets is massive, therefore we propose a smart annotation pipeline which reduces the number of events needing manual adjustment to 14%. For scenarios dominated by walking, this annotation effort is as low as 8%. The pipeline consists of three smart annotation approaches, namely edge detection of the pressure data, local cyclicity estimation, and iteratively trained hierarchical hidden Markov models. Using this pipeline, we have collected and labeled a dataset with over 150,000 labeled cycles, each with 2 phases, from 80 subjects, which we have made publicly available. The dataset consists of 12 different task-driven activities, 10 of which are cyclic. These activities include not only straight and steady-state motions, but also transitions, different ranges of bouts, and changing directions. Each participant wore 5 synchronized inertial measurement units (IMUs) on the wrists, shoes, and in a pocket, as well as pressure insoles and video. We believe that this dataset and smart annotation pipeline are a good basis for creating a benchmark dataset for validation of other semi- and unsupervised algorithms. Full article
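Of the three smart annotation approaches, edge detection on the insole pressure signal is the simplest to sketch: label candidate cycle boundaries as rising and falling threshold crossings. The signal values and threshold below are invented for illustration and do not come from the dataset.

```python
def detect_edges(pressure, thresh):
    """Return (index, event) pairs for rising ("strike") and falling
    ("off") crossings of a pressure threshold."""
    events = []
    for i in range(1, len(pressure)):
        prev, cur = pressure[i - 1], pressure[i]
        if prev < thresh <= cur:
            events.append((i, "strike"))
        elif prev >= thresh > cur:
            events.append((i, "off"))
    return events

sig = [0, 0, 5, 9, 8, 2, 0, 0, 6, 9, 1, 0]  # toy insole pressure trace
print(detect_edges(sig, 4))
# -> [(2, 'strike'), (5, 'off'), (8, 'strike'), (10, 'off')]
```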

Open Access Article
Recognition and Repetition Counting for Complex Physical Exercises with Deep Learning
Sensors 2019, 19(3), 714; https://doi.org/10.3390/s19030714 - 10 Feb 2019
Cited by 3
Abstract
Activity recognition using off-the-shelf smartwatches is an important problem in human activity recognition. In this paper, we present an end-to-end deep learning approach, able to provide probability distributions over activities from raw sensor data. We apply our methods to 10 complex full-body exercises typical in CrossFit, and achieve a classification accuracy of 99.96%. We additionally show that the same neural network used for exercise recognition can also be used in repetition counting. To the best of our knowledge, our approach to repetition counting is novel and performs well, counting correctly within an error of ±1 repetitions in 91% of the performed sets. Full article
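Repetition counting from a periodic exercise signal can be reduced, in its simplest form, to counting upward threshold crossings of a smoothed activation or intensity trace. The sketch below is a generic stand-in for that final counting step, not the paper's neural-network method; the trace and threshold are invented.

```python
def count_reps(signal, thresh):
    """Count repetitions as upward crossings of a threshold."""
    return sum(1 for a, b in zip(signal, signal[1:]) if a < thresh <= b)

# Synthetic "exercise intensity" trace with three bursts:
trace = [0, 1, 3, 1, 0, 2, 4, 2, 0, 1, 3, 2, 0]
print(count_reps(trace, 2.5))  # -> 3
```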
