Open Access Article

Navigating Virtual Environments Using Leg Poses and Smartphone Sensors

1 Information Technology Department, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Computer Science and Electrical Engineering, Singidunum University, 11000 Belgrade, Serbia
* Author to whom correspondence should be addressed.
Sensors 2019, 19(2), 299; https://doi.org/10.3390/s19020299
Received: 1 November 2018 / Revised: 3 January 2019 / Accepted: 10 January 2019 / Published: 13 January 2019
Realization of navigation in virtual environments remains a challenge, as it involves complex operating conditions. Decomposition of such complexity is attainable by fusing sensors and machine learning techniques. Identifying the right combination of sensory information and the appropriate machine learning technique is a vital ingredient for translating physical actions into virtual movements. The contributions of our work include: (i) synchronization of actions and movements using suitable multiple sensor units, and (ii) selection of significant features and an appropriate algorithm to process them. This work proposes an innovative approach that allows users to move in virtual environments by simply moving their legs towards the desired direction. The necessary hardware includes only a smartphone that is strapped to the subject’s lower leg. Data from the gyroscope, accelerometer and compass sensors of the mobile device are transmitted to a PC, where the movement is accurately identified using a combination of machine learning techniques. Once the desired movement is identified, the movement of the virtual avatar in the virtual environment is realized. After pre-processing the sensor data using the box plot outliers approach, it is observed that Artificial Neural Networks provided the highest movement identification accuracy of 84.2% on the training dataset and 84.1% on the testing dataset.
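The box plot outliers approach mentioned in the abstract can be illustrated with a minimal sketch. It assumes the standard box-plot rule, where a sample is an outlier if it falls outside the whiskers at Q1 − k·IQR and Q3 + k·IQR (k = 1.5 by convention); the function name and per-channel filtering strategy are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def remove_boxplot_outliers(samples, k=1.5):
    """Drop rows containing any value outside the box-plot whiskers.

    A value is treated as an outlier if it lies outside
    [Q1 - k*IQR, Q3 + k*IQR], computed per sensor channel (column).
    `samples` is an (n_samples, n_channels) array, e.g. gyroscope,
    accelerometer and compass readings stacked column-wise.
    """
    samples = np.asarray(samples, dtype=float)
    q1 = np.percentile(samples, 25, axis=0)   # lower quartile per channel
    q3 = np.percentile(samples, 75, axis=0)   # upper quartile per channel
    iqr = q3 - q1                             # interquartile range
    lower, upper = q1 - k * iqr, q3 + k * iqr
    # keep a row only if every channel is inside the whiskers
    mask = np.all((samples >= lower) & (samples <= upper), axis=1)
    return samples[mask]
```

For example, a reading of 100.0 in a channel whose typical values lie between 1 and 3 would be discarded before the data reaches the classifier.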
Keywords: virtual reality; mobile sensors; machine learning; feature selection; movement identification
Citation:
Tsaramirsis, G.; Buhari, S.M.; Basheri, M.; Stojmenovic, M. Navigating Virtual Environments Using Leg Poses and Smartphone Sensors. Sensors 2019, 19, 299.

