
Special Issue "Wearable Sensor for Activity Analysis and Context Recognition"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: 31 May 2021.

Special Issue Editors

Prof. Dr. Samer Mohammed
Guest Editor
Laboratory of Image Signal and Intelligent Systems, Department of Network & Telecom, University of Paris-Est Créteil, 94000 Créteil, France
Interests: wearable robots; wearable sensors; human activity recognition and classification; movement analysis, modelling, and characterization; robot control
Prof. Dr. Jian Huang
Guest Editor
Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
Interests: robot control; rehabilitation robots; wearable robots; underactuated robotics; mobile robots; robotic manipulation
Prof. Dr. Ravi Vaidyanathan
Guest Editor
Imperial College London, Department of Mechanical Engineering, City and Guilds Building, Rm 717, South Kensington Campus, London, SW7 1AL, UK
Interests: biomechatronics; wearable systems; instrumentation; assistive technology; signal fusion; biorobotics

Special Issue Information

Dear Colleagues,

Active participation of the dependent population in society has become an important challenge from both societal and economic viewpoints, as this population is constantly increasing. Assisting with daily activities would enhance personal safety, well-being, and autonomy while reducing health care costs. The dependent population requires continuous monitoring to detect abnormal situations or prevent unpredictable events such as falls. Thus, the problem of human activity recognition is central to understanding and predicting human behavior, in particular to provide assistive services to humans, such as health monitoring, well-being, and security. The last decade has shown increasing interest in the development of wearable technologies for physical and cognitive assistance and rehabilitation purposes. For instance, the rapid development of microsystems technology has contributed to the development of small, lightweight, and inexpensive wearable sensors. This has provided users with a means to improve early-stage detection of pathologies while reducing overall costs compared with more intrusive standard diagnostic methods. Recent advances in the fields of machine learning and deep learning have opened new and exciting research paradigms for constructing end-to-end learning models from complex data in the health care domain. These new learning techniques can also be used to translate wearable biomedical data into improved human health.

Despite this vast potential, the majority of wearables today remain limited to simple metrics (e.g., step counts, heart rate, calories); detailed health and/or physiological instrumentation for machine interfacing has yet to be implemented. A staggering one-third of users are reported to abandon commercial devices after regular use, which points to problems of transience and sustainability. Advances in sensor development, embedded systems, and cloud connectivity enable an evolution from a device perspective to a systems perspective, which demands recognition that sensors, learning algorithms, and the devices linking wearables to humans (e.g., robotic assist) are fundamentally coupled and hence should be treated as an integrated whole.

This Special Issue seeks to publish original investigations aimed at closing this gap. We invite papers presenting significant advances with respect to the state of the art in the following topics, which include, but are not limited to:

- Daily living activity recognition using wearable sensors;

- Learning techniques for health care applications using wearable devices;

- Human assistance using smart spaces;

- Design and control of wearable robots for health care applications;

- Human-in-the-loop optimization algorithms for assistive purposes using wearable devices;

- Neuro-robotics paradigm for wearable assistive technologies;

- Recent development and trends in wearable clinical rehabilitation techniques;

- Motion control and fall detection using wearable devices;

- Ethical, legal, and social issues of wearable devices.

Prof. Dr. Samer Mohammed
Prof. Dr. Jian Huang
Prof. Dr. Ravi Vaidyanathan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)


Research

Open Access Article
Powered Two-Wheeler Riding Profile Clustering for an In-Depth Study of Bend-Taking Practices
Sensors 2020, 20(22), 6696; https://doi.org/10.3390/s20226696 - 23 Nov 2020
Abstract
The understanding of rider/vehicle interaction modalities remains an issue, specifically in the case of bend-taking. This difficulty results both from the lack of adequate instrumentation to conduct this type of study and from the variety of practices of this population of road users. Riders report numerous strategies for controlling their motorcycles when taking bends. The objective of this paper is to develop a data-driven methodology to identify typical riding behaviors in bends by using clustering methods. The real dataset used for the experiments was collected within the VIROLO++ collaborative project to improve knowledge of actual powered two-wheeler (PTW) riding practices, especially during bend taking, by collecting real data on this riding situation, including data on PTW dynamics (velocity, normal acceleration, and jerk), position on the road (road curvature), and handlebar actions (handlebar steering angle). A detailed analysis of the results is provided for both the Anderson–Darling test and clustering steps. Moreover, the clustering results are compared with the subjects' subjective data to highlight and contextualize typical riding tendencies. Finally, we perform an in-depth analysis of the bend-taking practices of one subject to highlight the differences between methods of controlling the motorcycle (steering the handlebar vs. leaning) using rider action measurements made by pressure sensors.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
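
As a rough illustration of the kind of pipeline this abstract describes, the following Python sketch runs an Anderson–Darling normality check on synthetic per-bend features (stand-ins for the velocity, normal acceleration, jerk, curvature, and steering-angle measures named above) and then clusters the standardized features. All data, feature choices, and the number of clusters here are hypothetical; this is not the authors' implementation.

```python
# Sketch: normality check + clustering of synthetic riding features.
import numpy as np
from scipy.stats import anderson
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical per-bend feature matrix: velocity, normal acceleration,
# jerk, road curvature, handlebar steering angle (one row per bend).
X = rng.normal(size=(200, 5))

# Anderson-Darling test on each feature (dist='norm' is the default).
for j in range(X.shape[1]):
    result = anderson(X[:, j])
    print(f"feature {j}: A^2 = {result.statistic:.3f}")

# Standardize, then group bends into a small number of riding profiles.
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
print("cluster sizes:", np.bincount(labels))
```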

Open Access Article
A Multimodal Intention Detection Sensor Suite for Shared Autonomy of Upper-Limb Robotic Prostheses
Sensors 2020, 20(21), 6097; https://doi.org/10.3390/s20216097 - 27 Oct 2020
Cited by 2
Abstract
Neurorobotic augmentation (e.g., robotic assist) is now in regular use to support individuals suffering from impaired motor functions. A major unresolved challenge, however, is the excessive cognitive load imposed by the human–machine interface (HMI). Grasp control remains one of the most challenging HMI tasks, demanding simultaneous, agile, and precise control of multiple degrees of freedom (DoFs) while following a specific timing pattern in the joint and human–robot task spaces. Most commercially available systems use either an indirect mode-switching configuration or a limited sequential control strategy, limiting activation to one DoF at a time. To address this challenge, we introduce a shared autonomy framework centred around a low-cost multimodal sensor suite fusing: (a) mechanomyography (MMG) to estimate the intended muscle activation, (b) camera-based visual information for integrated autonomous object recognition, and (c) inertial measurement to enhance intention prediction based on the grasping trajectory. The complete system predicts user intent for grasp based on measured dynamical features during natural motions. A total of 84 motion features were extracted from the sensor suite, and tests were conducted on 10 able-bodied participants and 1 amputee grasping common household objects with a robotic hand. Real-time grasp classification accuracy using visual and motion features reached 100%, 82.5%, and 88.9% across all participants for detecting and executing grasping actions for a bottle, a lid, and a box, respectively. The proposed multimodal sensor suite is a novel approach for predicting different grasp strategies and automating task performance using a commercial upper-limb prosthetic device. The system also shows potential to improve the usability of modern neurorobotic systems thanks to its intuitive control design.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
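
For readers who want a concrete picture of the fusion step, here is a minimal Python sketch that concatenates hypothetical MMG, inertial, and vision-derived features into one vector per motion and scores a generic classifier. The feature counts, object classes, and choice of model are illustrative assumptions, not the published system.

```python
# Sketch: late fusion of multimodal features for grasp classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 120
mmg = rng.normal(size=(n_trials, 40))            # muscle-activation features
imu = rng.normal(size=(n_trials, 36))            # grasping-trajectory features
vision = rng.integers(0, 3, size=(n_trials, 1))  # recognized object class

X = np.hstack([mmg, imu, vision])       # one fused feature vector per motion
y = rng.integers(0, 3, size=n_trials)   # grasp classes: bottle, lid, box

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```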

Open Access Article
Reinforcement Learning Based Fast Self-Recalibrating Decoder for Intracortical Brain–Machine Interface
Sensors 2020, 20(19), 5528; https://doi.org/10.3390/s20195528 - 27 Sep 2020
Abstract
Background: Owing to the nonstationarity of neural recordings in intracortical brain–machine interfaces, daily retraining in a supervised manner is always required to maintain decoder performance. This problem can be alleviated by using a reinforcement learning (RL) based self-recalibrating decoder. However, quickly exploring new knowledge while maintaining good performance remains a challenge in RL-based decoders. Methods: To solve this problem, we proposed an attention-gated RL-based algorithm combining transfer learning, mini-batches, and a weight updating scheme to accelerate weight updating and avoid over-fitting. The proposed algorithm was tested on intracortical neural data recorded from two monkeys to decode their reaching positions and grasping gestures. Results: The decoding results showed that our proposed algorithm achieved an approximately 20% increase in classification accuracy compared to that obtained by the non-retrained classifier and even achieved better classification accuracy than the daily retrained classifier. Moreover, compared with a conventional RL method, our algorithm improved the accuracy by approximately 10% and the online weight updating speed by approximately 70 times. Conclusions: This paper proposed a self-recalibrating decoder that achieves good, robust decoding performance with fast weight updating, which may facilitate its application in wearable devices and clinical practice.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
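
The reward-driven recalibration idea can be sketched in a few lines. Below is a simplified, reward-modulated mini-batch update for a linear softmax decoder, meant only to convey the principle of adapting weights from a task-success signal rather than from daily labelled sessions; it is not the attention-gated algorithm from the paper, and all data are synthetic.

```python
# Sketch: reward-modulated online recalibration of a linear decoder.
import numpy as np

rng = np.random.default_rng(2)
n_units, n_classes = 64, 4
means = rng.normal(size=(n_classes, n_units))         # synthetic class "tuning"
W = rng.normal(scale=0.1, size=(n_classes, n_units))  # decoder weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr, batch_size = 0.05, 16
for step in range(500):
    y = rng.integers(0, n_classes, size=batch_size)
    X = means[y] + rng.normal(scale=0.5, size=(batch_size, n_units))
    for x, target in zip(X, y):
        p = softmax(W @ x)
        a = int(np.argmax(p))
        reward = 1.0 if a == target else -1.0  # binary task-success signal
        # Reward-modulated update on the chosen action's weights only:
        # the gradient of log p(a) w.r.t. W[a] is (1 - p[a]) * x.
        W[a] += lr * reward * (1 - p[a]) * x

# Quick check on a fresh synthetic batch.
y_test = rng.integers(0, n_classes, size=200)
X_test = means[y_test] + rng.normal(scale=0.5, size=(200, n_units))
acc = np.mean([np.argmax(W @ x) == t for x, t in zip(X_test, y_test)])
print(f"decoded accuracy on synthetic data: {acc:.2f}")
```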

Open Access Article
An Ultra-Sensitive Modular Hybrid EMG–FMG Sensor with Floating Electrodes
Sensors 2020, 20(17), 4775; https://doi.org/10.3390/s20174775 - 24 Aug 2020
Abstract
To improve the reliability and safety of myoelectric prosthetic control, many researchers tend to use multi-modal signals. The combination of electromyography (EMG) and forcemyography (FMG) has been proven to be a practical choice. However, an integrative and compact design for this hybrid sensor has been lacking. This paper presents a novel modular EMG–FMG sensor; the sensing module has a novel design consisting of floating electrodes, which act as the sensing probe for both the EMG and the FMG. This design improves the integration of the sensor. The whole system contains one data acquisition unit and eight identical sensor modules. Experiments were conducted to evaluate the performance of the sensor system. The results show that the EMG and FMG signals have good consistency under standard conditions; the FMG signal shows better and more robust performance than the EMG. The average accuracy is 99.07% when using both the EMG and FMG signals for recognition of six hand gestures under standard conditions. Even with two layers of gauze isolating the sensor from the skin, the average accuracy reaches 90.9% when using only the EMG signal; if we use both the EMG and FMG signals for classification, the average accuracy is 99.42%.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
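
As a hedged illustration of how fused EMG–FMG features might feed a gesture classifier, the sketch below computes classic windowed EMG features (root mean square and mean absolute value) from synthetic channels, appends per-channel FMG values, and scores a linear discriminant classifier. Every shape, channel count, and gesture label here is assumed for the example.

```python
# Sketch: fused EMG + FMG features for hand-gesture classification.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_windows, n_channels, n_samples = 300, 8, 200

emg = rng.normal(size=(n_windows, n_channels, n_samples))
fmg = rng.normal(size=(n_windows, n_channels))  # quasi-static force per channel

# Classic EMG time-domain features per channel and window.
rms = np.sqrt((emg ** 2).mean(axis=2))   # root mean square
mav = np.abs(emg).mean(axis=2)           # mean absolute value

X = np.hstack([rms, mav, fmg])           # fused feature vector per window
y = rng.integers(0, 6, size=n_windows)   # six hand gestures

print("cross-validated accuracy:",
      cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
```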

Open Access Article
Hardware/Software Co-Design of Fractal Features Based Fall Detection System
Sensors 2020, 20(8), 2322; https://doi.org/10.3390/s20082322 - 18 Apr 2020
Abstract
Falls are a leading cause of death in older adults and result in high levels of mortality, morbidity, and immobility. Fall Detection Systems (FDS) are imperative for timely medical aid and have been known to reduce the death rate by 80%. We propose a novel wearable-sensor FDS that exploits the fractal dynamics of fall accelerometer signals. Fractal dynamics can be used as an irregularity measure of signals, and our work shows that it is a key discriminant for separating falls from other activities of daily living. We design, implement, and evaluate a hardware feature accelerator that computes fractal features through a multi-level wavelet transform on a reconfigurable embedded system-on-chip (Zynq) device for evaluating wearable accelerometer sensors. The proposed FDS utilises a hardware/software co-design approach, with a hardware accelerator for the fractal features and a software implementation of Linear Discriminant Analysis on an embedded ARM core, for high accuracy and energy efficiency. The proposed system achieves 99.38% fall detection accuracy, a 7.3× speed-up, and a 6.53× improvement in power consumption compared with software-only execution, with an overall performance-per-Watt advantage of 47.6×, while consuming only 28.67% of the reconfigurable resources.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
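
The software half of such a co-design can be prototyped quickly. The following sketch derives wavelet-domain log-energy features (a fractal-like irregularity signature) from hypothetical accelerometer windows using a multi-level wavelet transform, then classifies them with Linear Discriminant Analysis. In the paper this feature computation is offloaded to FPGA hardware, so this is only a software-side approximation with synthetic data.

```python
# Sketch: wavelet-based irregularity features for fall detection.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)

def fractal_features(signal, wavelet="db4", level=4):
    # Multi-level wavelet decomposition; the detail-coefficient
    # log-energies across scales act as an irregularity signature.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    details = coeffs[1:]  # skip the approximation coefficients
    return np.array([np.log(np.mean(d ** 2) + 1e-12) for d in details])

# Hypothetical dataset: accelerometer-magnitude windows, fall vs. non-fall.
X = np.array([fractal_features(rng.normal(size=512)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

clf = LinearDiscriminantAnalysis().fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```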