
Application for Assistive Technologies and Wearable Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Wearables".

Deadline for manuscript submissions: closed (30 April 2022) | Viewed by 17064

Special Issue Editors


Prof. Dr. Matthew Joordens
Guest Editor
School of Engineering, Deakin University, Melbourne VIC 3217, Australia
Interests: robotics

Dr. Pubudu N. Pathirana
Guest Editor
School of Engineering and Technology, Deakin University, Locked Bag 20000, Geelong, VIC 3220, Australia
Interests: assistive robotics; biomedical devices; sensor networks; human motion capture; machine learning; wireless communications

Prof. Dr. Jeff Prevost
Guest Editor
Department of Electrical and Computer Engineering, The University of Texas at San Antonio, San Antonio, TX 78249, USA
Interests: cloud computing; programming techniques for cloud computing; analysis and design of control systems

Special Issue Information

Dear Colleagues,

As advances in materials science, microelectronics, and sensing technologies drive the development of more wearable technology, producing various forms of data at increasing rates, the concurrent development of sophisticated and relevant data-processing and machine-learning technologies has become imperative. With a growing population and an aging society, these devices will need to become more prolific, more sophisticated, and friendlier to less technology-savvy users.

At the same time, more assistive technologies for the infirm and aged are emerging as pervasive computing applications. These technologies are becoming part of a large Internet of Things with increased attention to security and privacy. This Special Issue aims to look at modern wearable sensors and technologies and examine their interactions with each other and with humankind.

Prof. Dr. Matthew Joordens
Dr. Pubudu N. Pathirana
Prof. Dr. Jeff Prevost
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • wearable sensors
  • data analysis
  • assistive technology
  • IoT
  • electronic design

Published Papers (5 papers)


Research

22 pages, 20489 KiB  
Article
Adversarial Autoencoder and Multi-Armed Bandit for Dynamic Difficulty Adjustment in Immersive Virtual Reality for Rehabilitation: Application to Hand Movement
by Kenta Kamikokuryo, Takumi Haga, Gentiane Venture and Vincent Hernandez
Sensors 2022, 22(12), 4499; https://doi.org/10.3390/s22124499 - 14 Jun 2022
Cited by 5 | Viewed by 2102
Abstract
Motor rehabilitation is used to improve motor control skills and thereby the patient’s quality of life. Regular adjustments based on the effect of therapy are necessary, but these can be time-consuming for the clinician. This study proposes an efficient tool for high-dimensional data: a deep learning approach for dimensionality reduction of hand movements recorded using the wireless remote control embedded with the Oculus Rift S. The resulting latent space serves as a visualization tool and also feeds a reinforcement learning (RL) algorithm that provides a decision-making framework. The collected data consist of motions drawn with the wireless remote control in an immersive VR environment for six different motions, called “Cube”, “Cylinder”, “Heart”, “Infinity”, “Sphere”, and “Triangle”. From these collected data, different artificial databases were created to simulate variations of the data. A latent space representation is created using an adversarial autoencoder (AAE), considering both unsupervised (UAAE) and semi-supervised (SSAAE) training. Each test point is then represented by a distance metric and used as a reward for two classes of multi-armed bandit (MAB) algorithms, namely Boltzmann and Sibling Kalman filters. The results showed that AAE models can represent high-dimensional data in a two-dimensional latent space and that MAB agents can efficiently and quickly learn the distance evolution in the latent space. Sibling Kalman filter exploration outperformed Boltzmann exploration, with an average cumulative weighted probability error of 7.9 versus 19.9 using the UAAE latent space representation and 8.0 versus 20.0 using SSAAE. In conclusion, this approach provides an effective way to visualize and track current motor control capabilities relative to a target, in order to reflect the patient’s abilities in VR games in the context of dynamic difficulty adjustment (DDA).
(This article belongs to the Special Issue Application for Assistive Technologies and Wearable Sensors)
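
As a rough illustration of the Boltzmann (softmax) exploration strategy that the abstract compares against Sibling Kalman filters, here is a minimal Python sketch in which a bandit learns from a latent-space-distance reward. The arm count, the simulated reward function, and the temperature are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Boltzmann (softmax) multi-armed bandit exploration.
# The reward stands in for the latent-space distance metric the paper uses;
# all parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_arms = 6            # e.g., one arm per drawn motion ("Cube", "Heart", ...)
q = np.zeros(n_arms)  # running estimate of each arm's mean reward
counts = np.zeros(n_arms)
temperature = 0.5     # lower temperature -> greedier selection

def latent_distance_reward(arm: int) -> float:
    """Stand-in for the distance-based reward in the 2-D latent space."""
    true_value = np.linspace(0.2, 0.9, n_arms)  # hypothetical per-arm means
    return float(rng.normal(true_value[arm], 0.1))

for t in range(500):
    # Boltzmann exploration: pick an arm with probability softmax(q / T).
    logits = q / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    arm = rng.choice(n_arms, p=probs)

    # Incremental-mean update of the chosen arm's value estimate.
    reward = latent_distance_reward(arm)
    counts[arm] += 1
    q[arm] += (reward - q[arm]) / counts[arm]

print("estimated arm values:", np.round(q, 2))
```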

15 pages, 2679 KiB  
Article
A Smart Multi-Sensor Device to Detect Distress in Swimmers
by Salman Jalalifar, Afsaneh Kashizadeh, Ishmam Mahmood, Andrew Belford, Nicolle Drake, Amir Razmjou and Mohsen Asadnia
Sensors 2022, 22(3), 1059; https://doi.org/10.3390/s22031059 - 29 Jan 2022
Cited by 12 | Viewed by 5519
Abstract
Drowning is considered to be among the top 10 causes of unintentional death, according to the World Health Organization (WHO). Anti-drowning systems that can save lives by preventing and detecting drowning are therefore much needed. This paper proposes a robust and waterproof sensor-based device to detect distress in swimmers at varying depths and in different types of water environments. The proposed device comprises four main components: heart rate, blood oxygen level, movement, and depth sensors. Although these sensors were designed to work together to boost the system’s capability as an anti-drowning device, each can operate independently. The sensors were able to determine the heart rate to an accuracy of 1 beat per minute (BPM), blood oxygen to 1% SpO2, acceleration with adjustable sensitivities of ±2 g, ±4 g, ±8 g, and ±16 g, and depth up to 12.8 m. The data obtained from the sensors were sent to a microcontroller that compared the input data with adjustable threshold values to detect dangerous situations. Remaining in a hazardous situation for more than a specified time activated the alarm system. Based on this comparison and the measured submersion time, a message indicating drowning or safety was sent via Wi-Fi to an IP address reachable by a lifeguard’s mobile phone or laptop, allowing the swimmer’s condition to be monitored continuously. The sensor outputs can also be monitored continuously on the device’s display or on the connected mobile phone or laptop. The threshold values can be adjusted based on the swimming environment (swimming pool, beach, depth, etc.) and the swimmer’s health and condition. The functionality of the proposed device was thoroughly tested over a wide range of parameters and under different conditions, both in air and underwater. It was demonstrated that the device can detect a range of potentially hazardous aquatic situations. This work paves the way for developing an effective drowning-detection system that could save tens of thousands of lives across the globe every year.
(This article belongs to the Special Issue Application for Assistive Technologies and Wearable Sensors)
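
To make the decision logic concrete, here is a hedged Python sketch of the threshold-plus-persistence scheme the abstract describes: readings are compared with adjustable thresholds, and the alarm fires only once the hazard persists. The threshold values, field names, and the 10-second persistence window are assumptions for illustration, not the device's actual firmware.

```python
# Sketch of threshold-based distress detection with a persistence timer.
# Threshold values and the persistence window are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Reading:
    heart_rate_bpm: float
    spo2_pct: float
    accel_g: float   # overall acceleration magnitude
    depth_m: float

THRESHOLDS = {"hr_low": 50.0, "spo2_low": 90.0, "accel_low": 0.2, "depth_max": 2.0}
HAZARD_SECONDS = 10.0  # how long a hazard must persist before alarming

def is_hazard(r: Reading) -> bool:
    """Flag low heart rate, low SpO2, or deep submersion with little movement."""
    return (r.heart_rate_bpm < THRESHOLDS["hr_low"]
            or r.spo2_pct < THRESHOLDS["spo2_low"]
            or (r.depth_m > THRESHOLDS["depth_max"]
                and r.accel_g < THRESHOLDS["accel_low"]))

def monitor(stream, sample_hz: float = 1.0):
    """Yield 'SAFE' or 'DROWNING' for each reading in the stream."""
    hazard_time = 0.0
    for r in stream:
        hazard_time = hazard_time + 1.0 / sample_hz if is_hazard(r) else 0.0
        yield "DROWNING" if hazard_time >= HAZARD_SECONDS else "SAFE"
```

In the real device the microcontroller would additionally push each status message over Wi-Fi to the lifeguard's phone or laptop, as the abstract describes.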

16 pages, 1705 KiB  
Article
Predicting Knee Joint Kinematics from Wearable Sensor Data in People with Knee Osteoarthritis and Clinical Considerations for Future Machine Learning Models
by Jay-Shian Tan, Sawitchaya Tippaya, Tara Binnie, Paul Davey, Kathryn Napier, J. P. Caneiro, Peter Kent, Anne Smith, Peter O’Sullivan and Amity Campbell
Sensors 2022, 22(2), 446; https://doi.org/10.3390/s22020446 - 07 Jan 2022
Cited by 22 | Viewed by 4271
Abstract
Deep learning models developed to predict knee joint kinematics are usually trained on inertial measurement unit (IMU) data from healthy people and only for the activity of walking. Yet people with knee osteoarthritis have difficulties with other activities, and there is a lack of studies using IMU training data from this population. Our objective was to conduct a proof-of-concept study to determine the feasibility of using IMU training data from people with knee osteoarthritis performing multiple clinically important activities to predict knee joint sagittal plane kinematics using a deep learning approach. We trained a bidirectional long short-term memory model on IMU data from 17 participants with knee osteoarthritis to estimate knee joint flexion kinematics for phases of walking, transitioning to and from a chair, and negotiating stairs. We tested two models: a double-leg model (four IMUs) and a single-leg model (two IMUs). The single-leg model demonstrated lower prediction error than the double-leg model. Across the different activity phases, RMSE (SD) ranged from 7.04° (2.6) to 11.78° (6.04), MAE (SD) from 5.99° (2.34) to 10.37° (5.44), and Pearson’s R from 0.85 to 0.99 using leave-one-subject-out cross-validation. This study demonstrates the feasibility of using IMU training data from people who have knee osteoarthritis for the prediction of kinematics for multiple clinically relevant activities.
(This article belongs to the Special Issue Application for Assistive Technologies and Wearable Sensors)
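
For orientation, the following is a minimal Keras sketch of a bidirectional LSTM regressor with a leave-one-subject-out loop, the model family and validation scheme named in the abstract. The window length, channel count, layer sizes, and per-window (rather than per-timestep) output are simplifying assumptions, not the published architecture.

```python
# Sketch: bidirectional LSTM mapping IMU windows to a knee flexion angle,
# evaluated with leave-one-subject-out cross-validation. Shapes and layer
# sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, CHANNELS = 128, 12  # assumed: 128-sample windows, 2 IMUs x 6 channels

def build_model() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(WINDOW, CHANNELS)),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(1),  # knee flexion angle (one value per window here)
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

def leave_one_subject_out(data_by_subject: dict):
    """data_by_subject maps subject id -> (x, y) arrays for that subject."""
    for held_out, (x_test, y_test) in data_by_subject.items():
        x_train = np.concatenate(
            [x for s, (x, _) in data_by_subject.items() if s != held_out])
        y_train = np.concatenate(
            [y for s, (_, y) in data_by_subject.items() if s != held_out])
        model = build_model()
        model.fit(x_train, y_train, epochs=10, batch_size=32, verbose=0)
        mse, mae = model.evaluate(x_test, y_test, verbose=0)
        print(f"subject {held_out}: RMSE {np.sqrt(mse):.2f} deg, MAE {mae:.2f} deg")
```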

16 pages, 3247 KiB  
Article
Human Behavior Recognition Model Based on Feature and Classifier Selection
by Ge Gao, Zhixin Li, Zhan Huan, Ying Chen, Jiuzhen Liang, Bangwen Zhou and Chenhui Dong
Sensors 2021, 21(23), 7791; https://doi.org/10.3390/s21237791 - 23 Nov 2021
Cited by 20 | Viewed by 2747
Abstract
With the rapid development of the computing and sensor fields, inertial sensor data have been widely used in human activity recognition. Most relevant studies divide human activities into basic actions and transitional actions, in which basic actions are classified by unified features, while transitional actions usually rely on context information to determine the category. Because no single existing method recognizes both kinds of activity well, this paper proposes a human activity classification and recognition model based on smartphone inertial sensor data. The model fully considers the feature differences between actions of different natures: it uses a fixed sliding window to segment the inertial sensor data for activities with different attributes and then extracts features and recognizes the activities with different classifiers. The experimental results show that dynamic and transitional actions obtained the best recognition performance with support vector machines, while static actions were classified more effectively by ensemble classifiers. As for feature selection, frequency-domain features used for dynamic actions achieved a recognition rate of up to 99.35%, while time-domain features for static and transitional actions yielded recognition rates of 98.40% and 91.98%, respectively.
(This article belongs to the Special Issue Application for Assistive Technologies and Wearable Sensors)
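
As a sketch of the pipeline described above, fixed sliding windows, separate feature sets, and different classifiers per action type, the following Python fragment shows one plausible arrangement. The window size, overlap, specific features, and classifier hyperparameters are assumptions for illustration, not the authors' configuration.

```python
# Sketch: fixed sliding windows over inertial data, time- vs frequency-domain
# features, and per-action-type classifiers (SVM for dynamic/transitional,
# ensemble for static), loosely following the abstract. Parameters are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def sliding_windows(signal: np.ndarray, size: int = 128, step: int = 64):
    """Segment an (n_samples, n_channels) signal into fixed-size windows."""
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

def time_features(window: np.ndarray) -> np.ndarray:
    """Simple time-domain features: per-channel mean, std, mean abs diff."""
    return np.concatenate([window.mean(0), window.std(0),
                           np.abs(np.diff(window, axis=0)).mean(0)])

def freq_features(window: np.ndarray) -> np.ndarray:
    """Simple frequency-domain features: mean FFT magnitude and peak bin."""
    mag = np.abs(np.fft.rfft(window, axis=0))
    return np.concatenate([mag.mean(0), mag.argmax(0).astype(float)])

# Assumed routing: frequency features + SVM for dynamic actions,
# time features + random-forest ensemble for static actions.
dynamic_classifier = SVC(kernel="rbf")
static_classifier = RandomForestClassifier(n_estimators=100)
```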

14 pages, 2030 KiB  
Article
Identification of Brain Electrical Activity Related to Head Yaw Rotations
by Enrico Zero, Chiara Bersani and Roberto Sacile
Sensors 2021, 21(10), 3345; https://doi.org/10.3390/s21103345 - 11 May 2021
Cited by 4 | Viewed by 1611
Abstract
Automating the identification of human brain stimuli during head movements could be a significant step forward for human-computer interaction (HCI), with important applications for severely impaired people and for robotics. In this paper, a neural network-based identification technique is presented to recognize, from EEG signals, a participant’s head yaw rotations when they are subjected to a visual stimulus. The goal is to identify an input-output function between the brain’s electrical activity and the head movement triggered by switching a light on/off on the participant’s left/right-hand side. This identification process is based on the Levenberg-Marquardt backpropagation algorithm. The results obtained on ten participants, spanning more than two hours of experiments, show the ability of the proposed approach to identify the brain electrical stimulus associated with head turning. A first analysis is performed on the EEG signals associated with each experiment for each participant. The accuracy of prediction is demonstrated by a significant correlation between training and test trials of the same recording, which, in the best case, reaches r = 0.98 with MSE = 0.02. In a second analysis, the input-output function trained on the EEG signals of one participant is tested on the EEG signals of other participants. In this case, the low correlation coefficient values show that classifier performance decreases when it is trained and tested on different subjects.
(This article belongs to the Special Issue Application for Assistive Technologies and Wearable Sensors)
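
Levenberg-Marquardt training is more common in MATLAB toolboxes than in Python, but the idea can be sketched with SciPy's least-squares wrapper: fit a small feedforward network by minimizing per-sample residuals. The network size and the synthetic stand-in data below are assumptions; the authors' EEG preprocessing is not reproduced here.

```python
# Sketch: fitting a tiny one-hidden-layer network with Levenberg-Marquardt
# via SciPy's MINPACK wrapper. Data are synthetic stand-ins for EEG features.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                      # stand-in EEG features
y = np.tanh(X @ np.array([0.5, -1.0, 0.3, 0.8]))   # stand-in head-yaw target

H = 5  # hidden units

def unpack(p):
    W1 = p[:4 * H].reshape(4, H)   # input-to-hidden weights
    b1 = p[4 * H:5 * H]            # hidden biases
    w2 = p[5 * H:6 * H]            # hidden-to-output weights
    b2 = p[6 * H]                  # output bias
    return W1, b1, w2, b2

def residuals(p):
    W1, b1, w2, b2 = unpack(p)
    pred = np.tanh(X @ W1 + b1) @ w2 + b2
    return pred - y

p0 = rng.normal(scale=0.1, size=6 * H + 1)
fit = least_squares(residuals, p0, method="lm")    # Levenberg-Marquardt
pred = y + residuals(fit.x)
print(f"training correlation r = {np.corrcoef(pred, y)[0, 1]:.3f}")
```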
