Open Access Article

VI-Net—View-Invariant Quality of Human Movement Assessment

Department of Computer Science, University of Bristol, Bristol BS8 1UB, UK
Université de Toulon, Aix Marseille Univ, CNRS, LIS, Marseille, France
Author to whom correspondence should be addressed.
Sensors 2020, 20(18), 5258;
Received: 11 August 2020 / Revised: 5 September 2020 / Accepted: 9 September 2020 / Published: 15 September 2020
(This article belongs to the Special Issue Sensor-Based Systems for Kinematics and Kinetics)
We propose a view-invariant method for assessing the quality of human movements that does not rely on skeleton data. Our end-to-end convolutional neural network consists of two stages: first, a view-invariant trajectory descriptor for each body joint is generated from RGB images; then, the collection of trajectories for all joints is processed by an adapted, pre-trained 2D convolutional neural network (CNN) (e.g., VGG-19 or ResNeXt-50) to learn the relationships among the different body parts and deliver a score for movement quality. We release the only publicly available, multi-view, non-skeleton, non-mocap rehabilitation movement dataset (QMAR), and provide results for both cross-subject and cross-view scenarios on this dataset. We show that VI-Net achieves an average rank correlation of 0.66 on cross-subject evaluation and 0.65 on unseen views when trained on only two views. We also evaluate the proposed method on the single-view rehabilitation dataset KIMORE and obtain a rank correlation of 0.66 against a baseline of 0.62.
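The reported scores are rank correlations between the network's predicted quality scores and the ground-truth annotations. As a minimal illustrative sketch (not the authors' evaluation code), Spearman's rank correlation can be computed by ranking both score lists and taking the Pearson correlation of the ranks; the function name and the simple tie-free ranking below are our own assumptions:

```python
import numpy as np

def spearman_rho(pred, truth):
    """Spearman rank correlation between predicted and ground-truth scores.

    Note: double argsort yields ordinal ranks and does not average
    tied ranks, which is sufficient for this illustration.
    """
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    # Convert scores to ranks (0 = lowest score).
    rp = pred.argsort().argsort().astype(float)
    rt = truth.argsort().argsort().astype(float)
    # Pearson correlation of the centered ranks.
    rp -= rp.mean()
    rt -= rt.mean()
    return float((rp @ rt) / np.sqrt((rp @ rp) * (rt @ rt)))
```

A perfectly monotone relationship between predictions and ground truth gives a coefficient of 1.0, a perfectly reversed ordering gives -1.0, and values such as the paper's 0.66 indicate a strong but imperfect agreement in ranking.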
Keywords: movement analysis; view-invariant convolutional neural network (CNN); health monitoring
MDPI and ACS Style

Sardari, F.; Paiement, A.; Hannuna, S.; Mirmehdi, M. VI-Net—View-Invariant Quality of Human Movement Assessment. Sensors 2020, 20, 5258.

