Special Issue "Smartphone-Based Sensors for Posture, Movement Analysis and Human Activity Recognition"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 November 2018)

Special Issue Editors

Guest Editor
Dr. Anthony Fleury

University of Lille, F-59000 Lille and IMT Lille Douai, URIA, F-59508 Douai, France
Interests: signal processing; classification; activity recognition; human behavior analysis; smart homes; smartphones
Guest Editor
Prof. George Roussos

Department of Computer Science and Information Systems, Birkbeck College, University of London, United Kingdom
Interests: social and pervasive computing; human dynamics; infrastructure services for the Internet of Things

Special Issue Information

Dear Colleagues,

Over the past decade, smartphones have become the focus of intense research interest due to three critical features: (1) billions of people worldwide regularly carry a smartphone, since prices are affordable; (2) smartphones contain multiple sensors offering precision similar to that of specialized devices, and they are almost continuously connected to the Internet; and (3) developers have low-cost and seamless access to smartphone platforms, supporting the efficient development and distribution of applications.

As a result, the smartphone has emerged as an efficient and effective measuring device that is widely used as a key research instrument. For example, the embedded sensors make it possible to actively measure specific movements that an individual is asked to perform, or to infer patterns of daily life from passively obtained measurements. These tasks require methods and techniques that interpret the sensed data as movements, postures or activities over time.

This Special Issue solicits original research on the use of smartphone sensors for the estimation of posture, activities and movements. Authors are encouraged to submit original research articles, works in progress or surveys on the following topics of interest (other relevant topics will also be considered):

  • Measurement algorithms and their application for the interpretation of higher level context
  • Movement analysis of specific body parts
  • Posture analysis and related issues
  • Serious games based on movement and sensor readings
  • Objective interpretation of data obtained from smartphone sensor analysis
  • Healthcare applications including injury prevention, rehabilitation and medication compliance

Dr. Anthony Fleury
Prof. George Roussos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts are available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Smartphone sensors
  • Human behavior analysis
  • Activity and action analysis

Published Papers (9 papers)


Research

Open Access Article
A Semi-Automatic Annotation Approach for Human Activity Recognition
Sensors 2019, 19(3), 501; https://doi.org/10.3390/s19030501
Received: 30 November 2018 / Revised: 11 January 2019 / Accepted: 22 January 2019 / Published: 25 January 2019
Abstract
Modern smartphones and wearables often contain multiple embedded sensors which generate significant amounts of data. This information can be used for body monitoring-based areas such as healthcare, indoor location, user-adaptive recommendations and transportation. The development of Human Activity Recognition (HAR) algorithms involves the collection of a large amount of labelled data, which should be annotated by an expert. However, annotating large datasets is expensive, time consuming and difficult. The development of a HAR approach that requires low annotation effort while maintaining adequate performance is therefore a relevant challenge. We introduce a Semi-Supervised Active Learning (SSAL) approach for Human Activity Recognition based on Self-Training (ST) that partially automates the annotation process, reducing the annotation effort and the volume of annotated data required to obtain a high-performance classifier. Our approach uses a criterion to select the most relevant samples for annotation by the expert and propagates their labels to the most confident samples. We present a comprehensive study comparing supervised and unsupervised methods with our approach on two datasets composed of daily living activities. The results showed that it is possible to reduce the required annotated data by more than 89% while still maintaining accurate model performance. Full article
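The paper's selection criterion and classifiers are not reproduced here, but the core self-training step, propagating labels from an expert-annotated seed set to the unlabelled samples the current model is most confident about, can be sketched with a minimal, hypothetical nearest-centroid classifier (all names and the margin threshold below are illustrative assumptions, not the authors' implementation):

```python
import math

def centroid(points):
    # Mean of a list of equal-length feature vectors.
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def self_train(labelled, unlabelled, threshold=0.2, rounds=5):
    """Propagate labels to unlabelled samples whose distance margin
    between the two nearest class centroids exceeds `threshold`.
    labelled: dict mapping sample tuple -> label."""
    labelled = dict(labelled)
    pool = list(unlabelled)
    for _ in range(rounds):
        by_class = {}
        for x, y in labelled.items():
            by_class.setdefault(y, []).append(list(x))
        cents = {y: centroid(pts) for y, pts in by_class.items()}
        confident, rest = [], []
        for x in pool:
            d = sorted((dist(x, c), y) for y, c in cents.items())
            margin = d[1][0] - d[0][0]  # gap between best and runner-up class
            (confident if margin > threshold else rest).append((x, d[0][1]))
        if not confident:
            break  # nothing left above the confidence threshold
        for x, y in confident:
            labelled[tuple(x)] = y  # propagate the predicted label
        pool = [x for x, _ in rest]
    return labelled, pool
```

Ambiguous samples (those near a decision boundary) stay in the returned pool; in an SSAL setting these are exactly the ones that would be routed to the expert for manual annotation.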

Open Access Article
Comparison of Standard Clinical and Instrumented Physical Performance Tests in Discriminating Functional Status of High-Functioning People Aged 61–70 Years Old
Sensors 2019, 19(3), 449; https://doi.org/10.3390/s19030449
Received: 30 November 2018 / Revised: 18 January 2019 / Accepted: 19 January 2019 / Published: 22 January 2019
Abstract
Assessment of physical performance by standard clinical tests such as the 30-s Chair Stand (30CST) and the Timed Up and Go (TUG) may allow early detection of functional decline, even in high-functioning populations, and facilitate preventive interventions. Inertial sensors are emerging to obtain instrumented measures that can provide subtle details regarding the quality of the movement while performing such tests. We compared standard clinical with instrumented measures of physical performance in their ability to distinguish between high and very high functional status, stratified by the Late-Life Function and Disability Instrument (LLFDI). We assessed 160 participants from the PreventIT study (66.3 ± 2.4 years, 87 females, median LLFDI 72.31, range: 44.33–100) performing the 30CST and TUG while a smartphone was attached to their lower back. The number of 30CST repetitions and the stopwatch-based TUG duration were recorded. Instrumented features were computed from the smartphone embedded inertial sensors. Four logistic regression models were fitted and the Areas Under the Receiver Operating Curve (AUC) were calculated and compared using the DeLong test. Standard clinical and instrumented measures of 30CST both showed equal moderate discriminative ability of 0.68 (95%CI 0.60–0.76), p = 0.97. Similarly, for TUG: AUC was 0.68 (95%CI 0.60–0.77) and 0.65 (95%CI 0.56–0.73), respectively, p = 0.26. In conclusion, both clinical and instrumented measures, recorded through a smartphone, can discriminate early functional decline in healthy adults aged 61–70 years. Full article
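The discriminative ability reported above is the area under the ROC curve (AUC). As a generic reference point (the study itself fitted logistic regression models and compared AUCs with the DeLong test, which is not reproduced here), the AUC can be computed directly as a Mann-Whitney-style rank statistic:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties count
    one half); equivalent to Mann-Whitney U / (n_pos * n_neg)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.68, as reported for both test variants, means the model ranks a randomly chosen declining participant above a randomly chosen non-declining one about 68% of the time.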

Open Access Article
Navigating Virtual Environments Using Leg Poses and Smartphone Sensors
Sensors 2019, 19(2), 299; https://doi.org/10.3390/s19020299
Received: 1 November 2018 / Revised: 3 January 2019 / Accepted: 10 January 2019 / Published: 13 January 2019
Abstract
Realization of navigation in virtual environments remains a challenge as it involves complex operating conditions. Decomposition of such complexity is attainable by fusion of sensors and machine learning techniques. Identifying the right combination of sensory information and the appropriate machine learning technique is a vital ingredient for translating physical actions to virtual movements. The contributions of our work include: (i) synchronization of actions and movements using suitable multiple sensor units, and (ii) selection of the significant features and an appropriate algorithm to process them. This work proposes an innovative approach that allows users to move in virtual environments by simply moving their legs towards the desired direction. The necessary hardware includes only a smartphone that is strapped to the subject's lower leg. Data from the gyroscope, accelerometer and compass sensors of the mobile device are transmitted to a PC, where the movement is accurately identified using a combination of machine learning techniques. Once the desired movement is identified, the movement of the virtual avatar in the virtual environment is realized. After pre-processing the sensor data using the box-plot outliers approach, it is observed that Artificial Neural Networks provided the highest movement identification accuracy of 84.2% on the training dataset and 84.1% on the testing dataset. Full article
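The box-plot outlier pre-processing mentioned above is the standard Tukey fence rule; a minimal stand-alone version is sketched below (linear-interpolation quartiles; illustrative only, not the authors' exact code):

```python
def iqr_bounds(xs):
    """Tukey box-plot fences: values outside
    [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are treated as outliers."""
    s = sorted(xs)

    def quantile(q):
        # Linear interpolation between the two nearest order statistics.
        pos = q * (len(s) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

def remove_outliers(xs):
    lo, hi = iqr_bounds(xs)
    return [x for x in xs if lo <= x <= hi]
```

Applied per sensor channel before feature extraction, this removes spikes (for example, from the phone being knocked) without requiring any labelled calibration data.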

Open Access Article
A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition
Sensors 2018, 18(11), 3726; https://doi.org/10.3390/s18113726
Received: 30 September 2018 / Revised: 18 October 2018 / Accepted: 23 October 2018 / Published: 1 November 2018
Abstract
Recently, modern smartphones equipped with a variety of embedded sensors, such as accelerometers and gyroscopes, have been used as an alternative platform for human activity recognition (HAR), since they are cost-effective, unobtrusive and facilitate real-time applications. However, the majority of related works have proposed position-dependent HAR, i.e., the target subject has to fix the smartphone in a pre-defined position. Few studies have tackled the problem of position-independent HAR, either by using handcrafted features that are less influenced by the position of the smartphone or by building a position-aware HAR system. The performance of these approaches still needs improvement to produce a reliable smartphone-based HAR. Thus, in this paper, we propose a deep convolutional neural network model that provides a robust position-independent HAR system. We build and evaluate the proposed model using the RealWorld HAR public dataset. We find that our proposed deep learning model increases the overall performance for position-independent HAR from 84% to 88% compared to the state-of-the-art traditional machine learning method. In addition, the position detection performance of our model improves markedly, from 89% to 98%. Finally, the recognition time of the proposed model is evaluated in order to validate its applicability to real-time applications. Full article
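The network itself is not reproduced here, but the basic operation such a model applies to a raw inertial channel, sliding a learned filter along the signal and applying a nonlinearity, can be sketched in pure Python (single channel, single filter; an illustration of the building block, not the authors' architecture):

```python
def conv1d(signal, kernel, stride=1):
    """Valid 1-D convolution (cross-correlation, as in deep learning
    frameworks) of a raw sensor channel with a learned filter."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

def relu(xs):
    # Standard rectified-linear nonlinearity applied element-wise.
    return [max(0.0, x) for x in xs]
```

A full HAR network stacks many such filters over the accelerometer and gyroscope channels, followed by pooling and fully connected layers; the filters are learned rather than handcrafted, which is what makes the approach less sensitive to smartphone position.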

Open Access Article
Smartphone-Based Traveled Distance Estimation Using Individual Walking Patterns for Indoor Localization
Sensors 2018, 18(9), 3149; https://doi.org/10.3390/s18093149
Received: 13 August 2018 / Revised: 14 September 2018 / Accepted: 16 September 2018 / Published: 18 September 2018
Abstract
We introduce a novel method for indoor localization with the user's own smartphone by learning personalized walking patterns outdoors. Most smartphone and pedestrian dead reckoning (PDR)-based indoor localization studies have combined step count with stride length to estimate the distance traveled, via generalized formulas based on manually designed features of the measured sensory signal. In contrast, we learn the velocity of the pedestrian from segmented signal frames with our proposed hybrid multiscale convolutional and recurrent neural network model, and we estimate the distance traveled from the velocity and the elapsed time. We measured the inertial sensors and the global positioning system (GPS) position at synchronized times while walking outdoors with a reliable GPS fix, and we assigned each signal frame a velocity label obtained from the displacement between the current position and a prior position. Our proposed real-time and automatic dataset construction method dramatically reduces the cost and significantly increases the efficiency of constructing a dataset. Moreover, our proposed deep learning model can be naturally applied to all kinds of time-series sensory signal processing. The performance was evaluated on an Android application (app) that exported the trained model and parameters. Our proposed method achieved distance errors between 1.5% and 2.4% in indoor experiments. Full article
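The displacement-to-velocity labelling described above can be sketched as follows: the great-circle distance between consecutive GPS fixes, divided by the elapsed time, yields one velocity label per inertial frame (function names are illustrative, not the authors' code):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two GPS fixes.
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def velocity_labels(fixes):
    """fixes: time-sorted list of (t_seconds, lat, lon).
    Returns one velocity label (m/s) per consecutive pair of fixes,
    to be assigned to the inertial frame covering that interval."""
    labels = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        d = haversine_m(la0, lo0, la1, lo1)
        labels.append(d / (t1 - t0))
    return labels
```

Because the labels come for free from the GPS track recorded outdoors, no manual annotation is needed, which is the cost saving the abstract refers to.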

Open Access Article
Smartwatch User Interface Implementation Using CNN-Based Gesture Pattern Recognition
Sensors 2018, 18(9), 2997; https://doi.org/10.3390/s18092997
Received: 25 July 2018 / Revised: 3 September 2018 / Accepted: 5 September 2018 / Published: 7 September 2018
Abstract
In recent years, with an increase in the use of smartwatches among wearable devices, various applications for the device have been developed. However, the realization of a user interface is limited by the size and volume of the smartwatch. This study aims to propose a method to classify the user’s gestures without the need of an additional input device to improve the user interface. The smartwatch is equipped with an accelerometer, which collects the data and learns and classifies the gesture pattern using a machine learning algorithm. By incorporating the convolution neural network (CNN) model, the proposed pattern recognition system has become more accurate than the existing model. The performance analysis results show that the proposed pattern recognition system can classify 10 gesture patterns at an accuracy rate of 97.3%. Full article

Open Access Article
Research on Construction Workers’ Activity Recognition Based on Smartphone
Sensors 2018, 18(8), 2667; https://doi.org/10.3390/s18082667
Received: 11 July 2018 / Revised: 9 August 2018 / Accepted: 9 August 2018 / Published: 14 August 2018
Abstract
This research on the identification and classification of construction workers' activity contributes to the monitoring and management of individuals. Since a single sensor cannot meet the management requirements of a complex construction environment, and integrated multiple sensors usually lack systemic flexibility and stability, this paper proposes an approach to construction-activity recognition based on smartphones. The accelerometers and gyroscopes embedded in smartphones were utilized to collect three-axis acceleration and angle data for eight main activities with relatively high frequency in simulated floor-reinforcing steel work. Data acquisition from multiple body parts enhanced the dimensionality of activity features to better distinguish between different activities. The CART decision-tree algorithm was adopted to build a classification training model, whose effectiveness was evaluated and verified through cross-validation. The results showed that the classification accuracy for the overall samples was up to 89.85% and the prediction accuracy was 94.91%. The feasibility of using smartphones as data-acquisition tools in construction management was verified. Moreover, it was shown that the combination of a decision-tree algorithm with smartphones could achieve complex activity classification and identification. Full article
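The elementary CART step used above, choosing for one feature the threshold that minimises the weighted Gini impurity of the two resulting partitions, can be sketched as follows (a single split on a single feature; the study builds full trees over multi-sensor features):

```python
def gini(labels):
    # Gini impurity of a label multiset: 1 - sum of squared class shares.
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """One CART step: the threshold on a single feature minimising the
    weighted Gini impurity of the two partitions it induces.
    Returns (weighted impurity, threshold)."""
    order = sorted(zip(values, labels))
    n = len(order)
    best = (float("inf"), None)
    for i in range(1, n):
        if order[i][0] == order[i - 1][0]:
            continue  # no valid threshold between equal feature values
        thr = (order[i][0] + order[i - 1][0]) / 2
        left = [y for _, y in order[:i]]
        right = [y for _, y in order[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[0]:
            best = (score, thr)
    return best
```

CART applies this search recursively over all features (here, acceleration and angle statistics from multiple body-worn phones) until the leaves are sufficiently pure.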

Open Access Article
Heading Estimation for Pedestrian Dead Reckoning Based on Robust Adaptive Kalman Filtering
Sensors 2018, 18(6), 1970; https://doi.org/10.3390/s18061970
Received: 17 May 2018 / Revised: 12 June 2018 / Accepted: 15 June 2018 / Published: 19 June 2018
Abstract
Pedestrian dead reckoning (PDR) using smartphone-embedded micro-electro-mechanical system (MEMS) sensors plays a key role in ubiquitous localization indoors and outdoors. However, as a relative localization method, it suffers from error accumulation, which prevents long-term independent running. Heading estimation error is one of the main sources of location error; therefore, to improve the location-tracking performance of the PDR method in complex environments, an approach based on robust adaptive Kalman filtering (RAKF) for estimating accurate headings is proposed. In our approach, outputs from gyroscope, accelerometer, and magnetometer sensors are fused using a Kalman filter (KF) in which heading measurements derived from acceleration and magnetic field data correct the states integrated from angular rates. To identify and control measurement outliers, a maximum likelihood-type estimator (M-estimator)-based model is used. Moreover, an adaptive factor is applied to resist the negative effects of state-model disturbances. Extensive experiments under static and dynamic conditions were conducted in indoor environments. The experimental results demonstrate that the proposed approach provides more accurate heading estimates and supports more robust and dynamically adaptive location tracking, compared with methods based on conventional KF. Full article
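The paper's RAKF adds M-estimator outlier control and an adaptive factor on top of this cycle; the underlying predict/correct loop it builds on, integrating the gyro rate and then correcting with the magnetic heading, can be sketched as a plain scalar Kalman filter (the noise variances q and r are illustrative assumptions, and heading wrap-around at 360° is ignored):

```python
def kf_heading(gyro_rates, mag_headings, dt=0.02, q=0.01, r=4.0):
    """Scalar Kalman filter: predict the heading by integrating the
    gyroscope rate (deg/s), then correct it with the heading derived
    from accelerometer/magnetometer data (deg).
    q: process noise variance, r: measurement noise variance."""
    x, p = mag_headings[0], r  # initialise from the first magnetic heading
    out = []
    for w, z in zip(gyro_rates, mag_headings):
        # Predict: integrate the angular rate over one sample period.
        x, p = x + w * dt, p + q
        # Correct: blend in the magnetic heading via the Kalman gain.
        k = p / (p + r)
        x, p = x + k * (z - x), (1 - k) * p
        out.append(x)
    return out
```

With a small q and large r, the filter trusts the smooth gyro integration in the short term while the magnetic heading slowly pulls out the accumulated drift; the robust and adaptive terms of the paper modify k and q online when the measurements are disturbed.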

Open Access Article
Impact of Sliding Window Length in Indoor Human Motion Modes and Pose Pattern Recognition Based on Smartphone Sensors
Sensors 2018, 18(6), 1965; https://doi.org/10.3390/s18061965
Received: 7 May 2018 / Revised: 14 June 2018 / Accepted: 15 June 2018 / Published: 18 June 2018
Abstract
Human activity recognition (HAR) is essential for understanding people's habits and behaviors, providing an important data source for precise marketing and for research in psychology and sociology. Different approaches have been proposed and applied to HAR. Data segmentation using a sliding window is a basic step of the HAR procedure, wherein the window length directly affects recognition performance. However, the window length is generally selected arbitrarily, without systematic study. In this study, we examined the impact of window length on smartphone sensor-based human motion and pose pattern recognition. With data collected from smartphone sensors, we tested a range of window lengths on five popular machine-learning methods: decision tree, support vector machine, K-nearest neighbor, Gaussian naïve Bayes, and adaptive boosting. From the results, we provide recommendations for choosing the appropriate window length. The results corroborate that the influence of window length on the recognition of motion modes is significant, whereas its influence on pose pattern recognition is limited. For motion mode recognition, a window length between 2.5 and 3.5 s provides an optimal tradeoff between recognition performance and speed, and adaptive boosting outperformed the other methods. For pose pattern recognition, 0.5 s was enough to obtain a satisfactory result, and all of the tested methods performed well. Full article
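As a concrete reference for the segmentation step whose length the study varies, a minimal sliding-window segmenter is sketched below (the 50% overlap default is an assumption for illustration, not the paper's setting):

```python
def sliding_windows(signal, fs, win_s, overlap=0.5):
    """Segment a 1-D sample stream into fixed-length windows.
    fs: sampling rate (Hz), win_s: window length (s),
    overlap: fraction of each window shared with the next."""
    size = int(win_s * fs)
    step = max(1, int(size * (1.0 - overlap)))
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, step)]
```

Each window is then reduced to a feature vector (means, variances, spectral features, and so on) and fed to the classifier, so `win_s` directly controls both recognition latency and the amount of context each decision sees.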

Sensors EISSN 1424-8220 is published by MDPI AG, Basel, Switzerland.