Special Issue "Human Activity Recognition"

A special issue of Applied Sciences (ISSN 2076-3417).

Deadline for manuscript submissions: closed (20 February 2017).

Special Issue Editors

Prof. Dr. Plamen Angelov
Guest Editor
Computing and Communications Department, Lancaster University, UK
Interests: intelligent systems; computational intelligence; fuzzy systems; machine learning
Dr. Jose Antonio Iglesias Martinez
Guest Editor
Computer Science and Engineering Department, Universidad Carlos III de Madrid, Spain
Interests: artificial intelligence; RoboCup and soccer robots; software agents; machine learning and robotics

Special Issue Information

Dear Colleagues,

To make good decisions in a social context, humans often need to recognize the plan underlying the behaviour of others and make predictions based on this recognition. Recognizing the behaviour of others enables many different tasks, such as predicting their future behaviour, coordinating with them, or assisting them. If this recognition of high-level activities, which are normally composed of multiple simple actions of persons, can be performed automatically, it can be very useful in many applications.
Human activity recognition is a challenging and active research area with applications in many different domains, such as surveillance systems, elderly monitoring, assistive technology, motion analysis, social sciences, virtual reality, and systems that mediate interaction between people and electronic devices.
This Special Issue aims to review and present the latest research progress in the field of human activity recognition. We encourage submissions of conceptual, empirical, and literature review papers focusing on this field. Papers addressing any type of, or approach to, human activity recognition are welcome.

Dr. José Antonio Iglesias Martínez
Prof. Dr. Plamen Angelov
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Activity Recognition
  • User Behaviour Modelling
  • Pattern Behaviour Recognition
  • Elderly Monitoring
  • Human Activity Recognition Based on Computer Vision

Published Papers (9 papers)


Research


Open Access Feature Paper Article
Real-Time Recognition of Calling Pattern and Behaviour of Mobile Phone Users through Anomaly Detection and Dynamically-Evolving Clustering
Appl. Sci. 2017, 7(8), 798; https://doi.org/10.3390/app7080798 - 05 Aug 2017
Cited by 6
Abstract
In the competitive telecommunications market, the information that mobile telecom operators can obtain by regularly analysing their massive stored call logs is of great interest. Although the data that can be extracted from mobile phones nowadays have been enriched with much information, the call logs alone can provide vital information about the customers. This information is usually related to the calling behaviour of the customers and can be used to manage them. However, the analysis of these data is normally very complex because of the vast data stream to be analysed. Thus, efficient data mining techniques are needed for this purpose. In this paper, a novel approach to analysing call detail records (CDRs) is proposed, with the main goal of extracting and clustering different calling patterns or behaviours and detecting outliers. The main novelty of this approach is that it works in real time using an evolving and recursive framework.
(This article belongs to the Special Issue Human Activity Recognition)
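The recursive, evolving flavour of the approach can be illustrated with the simplest such update: a running mean that absorbs each new call-feature sample without storing past data. This is a toy sketch of the recursive principle only; the paper's clustering framework is far richer.

```python
def recursive_mean(stream):
    """Update a statistic recursively as call-feature values arrive.

    Uses the recursion mean_k = mean_{k-1} + (x_k - mean_{k-1}) / k, the kind
    of update that lets evolving clustering run in real time over a data
    stream without revisiting past samples.
    """
    mean, k = None, 0
    for x in stream:
        k += 1
        # Each sample nudges the running mean; nothing else is stored.
        mean = x if mean is None else mean + (x - mean) / k
    return mean

# Toy stream of call durations (invented values for illustration)
avg = recursive_mean([2.0, 4.0, 6.0])
```

The same recursive pattern extends to variances and cluster densities, which is what makes such frameworks suitable for unbounded CDR streams.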

Open Access Article
A New Framework of Human Interaction Recognition Based on Multiple Stage Probability Fusion
Appl. Sci. 2017, 7(6), 567; https://doi.org/10.3390/app7060567 - 01 Jun 2017
Cited by 7
Abstract
Vision-based human interactive behaviour recognition is a challenging research topic in computer vision. Current interaction recognition algorithms suffer from some important problems, such as very complex feature representation and inaccurate feature extraction induced by wrong human body segmentation. To solve these problems, a novel human interaction recognition method based on multiple-stage probability fusion is proposed in this paper. Taking the moment of bodily contact in the interaction as a cut-off point, the process of the interaction can be divided into three stages: a start stage, an execution stage, and an end stage. The two persons' motions are extracted and recognized separately in the start and end stages, when there is no contact between the persons, and extracted and recognized as a whole in the execution stage. In the recognition process, the final result is obtained by weighted fusion of the probabilities from the different stages. The proposed method not only simplifies the extraction and representation of features but also avoids the wrong feature extraction caused by occlusion. Experimental results on the UT-Interaction dataset demonstrate that the proposed method performs better than other recent interaction recognition methods.
(This article belongs to the Special Issue Human Activity Recognition)
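The stage-wise fusion idea can be sketched as a weighted sum of per-stage class-probability vectors. The weights and probabilities below are invented for illustration; the paper's own fusion weights and classifiers are not reproduced here.

```python
import numpy as np

def fuse_stage_probabilities(stage_probs, weights):
    """Combine per-stage class-probability vectors by a weighted sum.

    stage_probs: one 1-D probability array per stage (start, execution, end),
                 each produced by that stage's classifier.
    weights: per-stage importance weights (hypothetical values here).
    """
    fused = sum(w * p for w, p in zip(weights, stage_probs))
    return fused / fused.sum()  # renormalise so the result is a distribution

# Example: three stages, four interaction classes (toy numbers)
start = np.array([0.6, 0.2, 0.1, 0.1])
execution = np.array([0.3, 0.5, 0.1, 0.1])
end = np.array([0.5, 0.3, 0.1, 0.1])
fused = fuse_stage_probabilities([start, execution, end], [0.25, 0.5, 0.25])
label = int(np.argmax(fused))
```

Weighting the execution stage more heavily, as in this toy example, reflects the intuition that the contact phase carries the most discriminative motion information.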

Open Access Article
Multiple Sensors Based Hand Motion Recognition Using Adaptive Directed Acyclic Graph
Appl. Sci. 2017, 7(4), 358; https://doi.org/10.3390/app7040358 - 05 Apr 2017
Cited by 15
Abstract
The use of human hand motions as an effective way to interact with computers/robots, for robot manipulation learning, and for prosthetic hand control is being researched in depth. This paper proposes a novel and effective multiple-sensor-based hand motion capture and recognition system. Ten common predefined object grasp and manipulation tasks, demonstrated by different subjects, are recorded from both the human hand and object points of view. Three types of sensors, including electromyography (EMG), a data glove, and FingerTPS, are applied to simultaneously capture the EMG signals, the finger angle trajectories, and the contact force. Recognizing different grasp and manipulation tasks from the combined signals is investigated using an adaptive directed acyclic graph algorithm, and comparative experiments show that the proposed system achieves a higher recognition rate than individual sensing technologies, as well as other algorithms. The proposed framework captures abundant information from multimodal human hand motions through the multiple sensor techniques, and it is potentially applicable to prosthetic hand control and artificial systems performing autonomous dexterous manipulation.
(This article belongs to the Special Issue Human Activity Recognition)
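The directed-acyclic-graph style of multi-class decision underlying this kind of algorithm can be sketched as a chain of pairwise eliminations: each node runs one binary classifier and discards the losing class until a single label remains. The class names and the pairwise oracle below are hypothetical, and the paper's adaptive weighting is omitted.

```python
def dag_decide(classes, pairwise_predict):
    """Multi-class decision via a decision directed acyclic graph (DDAG).

    classes: candidate class labels.
    pairwise_predict(a, b): returns the winner of the binary classifier
    trained on classes a vs b (here a stand-in for per-pair SVMs).
    """
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        winner = pairwise_predict(a, b)
        # Each DAG node eliminates exactly one candidate class.
        if winner == a:
            remaining.pop()       # b is eliminated
        else:
            remaining.pop(0)      # a is eliminated
    return remaining[0]

# Toy oracle: the lexicographically larger label always wins its pairwise vote
label = dag_decide(["grasp", "lift", "rotate"], lambda a, b: max(a, b))
```

With k classes this evaluates only k - 1 of the k(k - 1)/2 pairwise classifiers per sample, which is the efficiency argument for DAG-based multi-class SVMs.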

Open Access Article
Tangible User Interface and Mu Rhythm Suppression: The Effect of User Interface on the Brain Activity in Its Operator and Observer
Appl. Sci. 2017, 7(4), 347; https://doi.org/10.3390/app7040347 - 31 Mar 2017
Cited by 2
Abstract
The intuitiveness of a tangible user interface (TUI) is not only for its operator: it is quite possible that this type of user interface (UI) also affects the experience and learning of observers who are merely watching the operator use it. To understand this possible effect of TUI, the present study focused on mu rhythm suppression in the sensorimotor area, which reflects the execution and observation of action, and investigated the brain activity of both the operator and an observer. In the observer experiment, the effect of TUI on its observers was demonstrated through brain activity. Although the effect of the grasping action itself was uncertain, the unpredictability of the result of the action seemed to have some effect on the mirror neuron system (MNS)-related brain activity. In the operator experiment, despite the same grasping action, brain activity in the sensorimotor area increased when UI functions were included (TUI). Such activation was not found with a graphical user interface (GUI), which has UI functions without a grasping action. These results suggest that MNS-related brain activity is involved in the effect of TUI, indicating the possibility of UI evaluation based on brain activity.
(This article belongs to the Special Issue Human Activity Recognition)

Open Access Article
Fall Detection for Elderly from Partially Observed Depth-Map Video Sequences Based on View-Invariant Human Activity Representation
Appl. Sci. 2017, 7(4), 316; https://doi.org/10.3390/app7040316 - 24 Mar 2017
Cited by 20
Abstract
This paper presents a new approach for fall detection from partially-observed depth-map video sequences. The proposed approach utilizes the 3D skeletal joint positions obtained from the Microsoft Kinect sensor to build a view-invariant descriptor for human activity representation, called the motion-pose geometric descriptor (MPGD). Furthermore, we have developed a histogram-based representation (HBR) based on the MPGD to construct a length-independent representation of the observed video subsequences. Using the constructed HBR, we formulate the fall detection problem as a posterior-maximization problem in which the posterior probability for each observed video subsequence is estimated using a multi-class SVM (support vector machine) classifier. We then combine the computed posterior probabilities from all of the observed subsequences to obtain an overall class posterior probability for the entire partially-observed depth-map video sequence. To evaluate the performance of the proposed approach, we used the Kinect sensor to record a dataset of depth-map video sequences that simulates four fall-related activities of elderly people: walking, sitting, falling from standing, and falling from sitting. Using the collected dataset, we developed three evaluation scenarios based on the number of unobserved video subsequences in the testing videos: a fully-observed video sequence scenario, a scenario with a single unobserved video subsequence of random length, and a scenario with two unobserved video subsequences of random lengths. Experimental results show that the proposed approach achieved average recognition accuracies of 93.6%, 77.6%, and 65.1% in the first, second, and third evaluation scenarios, respectively. These results demonstrate the feasibility of the proposed approach for detecting falls from partially-observed videos.
(This article belongs to the Special Issue Human Activity Recognition)
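The final step, combining per-subsequence class posteriors into one sequence-level posterior, can be sketched as follows. Averaging is one simple combination rule; the paper's exact rule may differ, and the class names and numbers below are invented for illustration.

```python
import numpy as np

def overall_posterior(subsequence_posteriors):
    """Combine per-subsequence class posteriors into a sequence-level posterior.

    subsequence_posteriors: array-like of shape (n_subsequences, n_classes),
    e.g. outputs of a multi-class SVM with probability estimates.
    """
    probs = np.asarray(subsequence_posteriors, dtype=float)
    combined = probs.mean(axis=0)       # pool evidence across subsequences
    return combined / combined.sum()    # renormalise to a distribution

# Four toy classes: walking, sitting, falling from standing, falling from sitting
posteriors = [[0.1, 0.1, 0.7, 0.1],
              [0.2, 0.1, 0.6, 0.1]]
p = overall_posterior(posteriors)
predicted = int(np.argmax(p))  # index 2 -> "falling from standing"
```

Pooling across subsequences is what lets the method tolerate unobserved gaps: a confident posterior from one observed subsequence can carry the decision for the whole sequence.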

Open Access Article
DeepGait: A Learning Deep Convolutional Representation for View-Invariant Gait Recognition Using Joint Bayesian
Appl. Sci. 2017, 7(3), 210; https://doi.org/10.3390/app7030210 - 23 Feb 2017
Cited by 36
Abstract
Human gait, as a soft biometric, helps to recognize people by their walking. To further improve recognition performance, we propose a novel video sensor-based gait representation, DeepGait, using deep convolutional features, and introduce Joint Bayesian to model view variance. DeepGait is generated using a pre-trained "very deep" network, "D-Net" (VGG-D), without any fine-tuning. In the non-view setting, DeepGait outperforms hand-crafted representations (e.g., Gait Energy Image, Frequency-Domain Feature, and Gait Flow Image). Furthermore, in the cross-view setting, 256-dimensional DeepGait after PCA significantly outperforms the state-of-the-art methods on the OU-ISIR large population (OULP) dataset. The OULP dataset, which includes 4007 subjects, makes our results statistically reliable.
(This article belongs to the Special Issue Human Activity Recognition)
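The PCA step, reducing high-dimensional deep convolutional features to a compact gait representation before Joint Bayesian modelling, can be sketched with a plain SVD-based projection. The synthetic features and the small component count below are stand-ins; the paper reduces real VGG-D features to 256 dimensions.

```python
import numpy as np

def pca_project(features, n_components):
    """Project feature vectors onto their top principal components.

    features: (n_samples, n_features) matrix of deep gait features.
    Returns the (n_samples, n_components) reduced representation.
    """
    X = features - features.mean(axis=0)          # centre the data
    # SVD of the centred matrix: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 64))   # 10 gait samples, 64-D stand-in features
reduced = pca_project(feats, n_components=4)
```

After such a reduction, a metric-learning model like Joint Bayesian (not reproduced here) is fitted on the compact vectors to discount view variance.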

Open Access Article
Device-Free Indoor Activity Recognition System
Appl. Sci. 2016, 6(11), 329; https://doi.org/10.3390/app6110329 - 01 Nov 2016
Cited by 13
Abstract
In this paper, we explore the properties of the Channel State Information (CSI) of WiFi signals and present a device-free indoor activity recognition system. Our proposed system uses only one ubiquitous router access point and a laptop as a detection point, while the user remains free, needing neither to wear sensors nor to carry devices. The proposed system recognizes six daily activities: walking, crawling, falling, standing, sitting, and lying. We have built a prototype with an effective feature extraction method and a fast classification algorithm. The proposed system has been evaluated in a real and complex environment in both line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios, and the results validate its performance.
(This article belongs to the Special Issue Human Activity Recognition)

Open Access Feature Paper Article
Human Action Recognition from Multiple Views Based on View-Invariant Feature Descriptor Using Support Vector Machines
Appl. Sci. 2016, 6(10), 309; https://doi.org/10.3390/app6100309 - 21 Oct 2016
Cited by 14
Abstract
This paper presents a novel feature descriptor for multiview human action recognition. This descriptor employs region-based features extracted from the human silhouette. To achieve this, the human silhouette is divided into regions in a radial fashion at intervals of a certain angle, and region-based geometrical and Hu-moment features are then obtained from each radial bin to form the feature descriptor. A multiclass support vector machine classifier is used for action classification. The proposed approach is quite simple and achieves state-of-the-art results without compromising the efficiency of the recognition process. Our contribution is two-fold. Firstly, our approach achieves high recognition accuracy with a simple silhouette-based representation. Secondly, the average testing speed of our approach is 34 frames per second, which is much higher than that of existing methods and shows its suitability for real-time applications. Extensive experiments on the well-known multiview IXMAS (INRIA Xmas Motion Acquisition Sequences) dataset confirmed the superior performance of our method compared to similar state-of-the-art methods.
(This article belongs to the Special Issue Human Activity Recognition)
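The radial partitioning of the silhouette can be sketched by assigning each foreground pixel to an angular bin around the centroid. This toy version keeps only a pixel-count histogram per bin as a stand-in feature; the paper extracts geometrical and Hu-moment features from each bin.

```python
import numpy as np

def radial_bin_histogram(silhouette, n_bins=12):
    """Split silhouette pixels into angular bins around the centroid.

    silhouette: 2-D binary mask of the person.
    Returns a normalised n_bins-vector of per-bin pixel fractions.
    """
    ys, xs = np.nonzero(silhouette)
    cy, cx = ys.mean(), xs.mean()                  # silhouette centroid
    angles = np.arctan2(ys - cy, xs - cx)          # pixel angle in [-pi, pi]
    # Map each angle to one of n_bins equal angular sectors
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, minlength=n_bins).astype(float)
    return hist / hist.sum()                       # normalise to a distribution

mask = np.zeros((32, 32), dtype=np.uint8)
mask[8:24, 12:20] = 1                              # toy rectangular "silhouette"
descriptor = radial_bin_histogram(mask, n_bins=12)
```

Because the bins are defined relative to the centroid, the descriptor is insensitive to translation, which is part of what makes such region-based silhouette features cheap yet discriminative.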

Review


Open Access Review
A Comprehensive Review on Handcrafted and Learning-Based Action Representation Approaches for Human Activity Recognition
Appl. Sci. 2017, 7(1), 110; https://doi.org/10.3390/app7010110 - 23 Jan 2017
Cited by 47
Abstract
Human activity recognition (HAR) is an important research area in the fields of human perception and computer vision due to its wide range of applications, including intelligent video surveillance, ambient assisted living, human-computer interaction, human-robot interaction, entertainment, and intelligent driving. Recently, with the emergence and successful deployment of deep learning techniques for image classification, researchers have migrated from traditional handcrafted representations to deep learning techniques for HAR. Handcrafted representation-based approaches are still widely used because of bottlenecks such as the computational complexity of deep learning techniques for activity recognition; at the same time, handcrafted approaches cannot handle complex scenarios because of their limitations, so resorting to deep learning-based techniques is a natural option. This review paper presents a comprehensive survey of both handcrafted and learning-based action representations, offering comparison, analysis, and discussion of these approaches. In addition, the well-known public datasets available for experimentation and important applications of HAR are presented to provide further insight into the field. This is the first review article to present all these aspects of HAR in a single paper with comprehensive coverage of each part. Finally, the paper concludes with important discussions and research directions in the domain of HAR.
(This article belongs to the Special Issue Human Activity Recognition)
