
Special Issue "Computer Vision Techniques Applied to Human Behaviour Analysis in the Real-World"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 25 October 2022

Special Issue Editors

Dr. Oya Celiktutan
Guest Editor
Centre for Robotics Research, Department of Engineering, King’s College London, London WC2R 2LS, UK
Interests: computer vision; machine learning; human behaviour analysis and synthesis; social signal processing; human–robot interaction
Prof. Dr. Albert Ali Salah
Guest Editor
1. Department of Information and Computing Sciences, Utrecht University, 3584CC Utrecht, The Netherlands
2. Department of Computer Engineering, Bogazici University, 34342 Bebek, Istanbul, Turkey
Interests: machine learning; pattern recognition; computer vision; multimedia methods; behaviour analysis
Prof. Dr. Dongmei Jiang
Guest Editor
School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, China
Interests: multimodal affective computing; human-computer interaction; mental health assessment and management

Special Issue Information

Dear Colleagues,

We are happy to invite you to submit a paper to the Special Issue “Computer Vision Techniques Applied to Human Behaviour Analysis in the Real-World”. The details can be found below.

Intelligent devices, such as smart wearables, intelligent vehicles, virtual assistants and robots, are progressively becoming widespread in many aspects of our daily lives, where effective interaction is increasingly desirable. In such applications, the more information exchanged between the user and the system through multiple modalities, the more versatile, efficient and natural the interaction becomes. However, modern intelligent devices do not take the user's state sufficiently into consideration and thus suffer from a lack of personalization and low engagement. In particular, interaction logs and verbal data alone are not adequate for genuinely interpreting human behaviours, and there has therefore been a significant effort to analyse human behaviours from video data. Although significant progress has been made so far, there is still much room for improvement in moving from controlled and acted settings to real-world settings. The key aim of this Special Issue is to bring together cutting-edge research and innovative computer vision techniques applied to human behaviour analysis, from the recognition of gestures and activities to the interpretation of these cues at a higher level for predicting cognitive, social and emotional states.

Special issue topics include, but are not limited to:

  • Unsupervised, semi-supervised and supervised learning-based approaches to human behaviour analysis
  • Face, gesture and body analysis
  • Activity recognition and anticipation
  • Affect and emotion recognition
  • Interactive behaviour analysis, including multiparty interaction, human-computer interaction, and human-robot interaction
  • Combining vision with other modalities (e.g., audio, biosignals) for human behaviour analysis
  • Societal and ethical considerations of human behaviour analysis, including explainability, bias, fairness, privacy
  • Real-time systems for human behaviour analysis on devices with limited on-board computational power
  • Databases and open source tools for human behaviour analysis
  • Applications in education, healthcare, smart environments, or any related field

Dr. Oya Celiktutan

Prof. Albert Ali Salah

Prof. Dongmei Jiang

Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human behaviour analysis
  • computer vision
  • machine learning

Published Papers (3 papers)


Research

Article
Gaze Estimation Approach Using Deep Differential Residual Network
Sensors 2022, 22(14), 5462; https://doi.org/10.3390/s22145462 - 21 Jul 2022
Abstract
Gaze estimation, which determines where a person is looking given the person’s full face, is a valuable clue for understanding human intention. As in other domains of computer vision, deep learning (DL) methods have gained recognition in the gaze estimation domain. However, gaze calibration problems persist, preventing existing methods from improving their performance further. An effective solution is to directly predict the difference information of two human eyes, as in the differential network (Diff-NN). However, this solution results in a loss of accuracy when only one inference image is used. We propose a differential residual model (DRNet), combined with a new loss function, that makes use of the difference information of two eye images, treating it as auxiliary information. We assess DRNet mainly on two public datasets, (1) MPIIGaze and (2) EyeDiap. Considering only eye features, DRNet outperforms state-of-the-art gaze estimation methods, with angular errors of 4.57 and 6.14 on the MPIIGaze and EyeDiap datasets, respectively. The experimental results also demonstrate that DRNet is extremely robust to noisy images.
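
As a rough illustration of the differential idea this abstract describes, the sketch below encodes two eye patches with a shared CNN backbone and regresses both an absolute gaze and a gaze difference from the feature residual. It is a minimal sketch in PyTorch: the names (EyeEncoder, DiffGazeNet), layer sizes, input resolution, and the two-head layout are illustrative assumptions, not the authors' DRNet implementation or loss function.

```python
# Minimal sketch of a differential gaze-estimation scheme (illustrative only).
import torch
import torch.nn as nn

class EyeEncoder(nn.Module):
    """Shared CNN backbone mapping a 36x60 grayscale eye patch to a feature vector."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class DiffGazeNet(nn.Module):
    """Predicts gaze (yaw, pitch) for eye `a`, using the feature difference
    to eye `b` as an auxiliary signal, in the spirit of the abstract."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.encoder = EyeEncoder(feat_dim)       # weights shared across both eyes
        self.gaze_head = nn.Linear(feat_dim, 2)   # absolute gaze of eye a
        self.diff_head = nn.Linear(feat_dim, 2)   # auxiliary gaze difference a - b

    def forward(self, eye_a, eye_b):
        fa, fb = self.encoder(eye_a), self.encoder(eye_b)
        return self.gaze_head(fa), self.diff_head(fa - fb)

# Toy usage: a batch of 4 random eye-image pairs.
a, b = torch.randn(4, 1, 36, 60), torch.randn(4, 1, 36, 60)
gaze, diff = DiffGazeNet()(a, b)
print(gaze.shape, diff.shape)  # torch.Size([4, 2]) torch.Size([4, 2])
```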

Article
Micro-Expression Recognition Based on Optical Flow and PCANet+
Sensors 2022, 22(11), 4296; https://doi.org/10.3390/s22114296 - 05 Jun 2022
Abstract
Micro-expressions are rapid and subtle facial movements. Unlike the ordinary facial expressions of daily life, micro-expressions are very difficult to detect and recognize. In recent years, owing to a wide range of potential applications in many domains, micro-expression recognition has attracted extensive attention from the computer vision community. Because available micro-expression datasets are very small, deep neural network models with huge numbers of parameters are prone to over-fitting. In this article, we propose an OF-PCANet+ method for micro-expression recognition, in which we design a spatiotemporal feature learning strategy based on the shallow PCANet+ model and incorporate optical flow sequence stacking with the PCANet+ network to learn discriminative spatiotemporal features. We conduct comprehensive experiments on the publicly available SMIC and CASME2 datasets. The results show that our lightweight model clearly outperforms popular hand-crafted methods and achieves performance comparable to deep learning based methods, such as 3D-FCNN and ELRCN.
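
The optical flow sequence stacking step lends itself to a short illustration. The sketch below uses OpenCV's Farneback flow as a stand-in estimator and stacks the horizontal and vertical flow components of consecutive frame pairs into one multi-channel array, the kind of spatiotemporal input a shallow network could consume. The function name (stack_flow), flow method, and parameter values are assumptions; the PCANet+ stage itself is not reproduced here.

```python
# Minimal sketch of optical-flow stacking for a short clip (illustrative only).
import cv2
import numpy as np

def stack_flow(frames: list) -> np.ndarray:
    """Given a list of N grayscale frames (H, W), return a (H, W, 2*(N-1))
    array of stacked horizontal/vertical flow components."""
    channels = []
    for prev, curr in zip(frames, frames[1:]):
        # Dense flow between consecutive frames; flow has shape (H, W, 2).
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        channels.extend([flow[..., 0], flow[..., 1]])
    return np.stack(channels, axis=-1)

# Toy usage: a 5-frame clip of random 128x128 grayscale images.
clip = [np.random.randint(0, 255, (128, 128), np.uint8) for _ in range(5)]
print(stack_flow(clip).shape)  # (128, 128, 8)
```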

Article
A Spatiotemporal Deep Learning Approach for Automatic Pathological Gait Classification
Sensors 2021, 21(18), 6202; https://doi.org/10.3390/s21186202 - 16 Sep 2021
Abstract
Human motion analysis provides useful information for the diagnosis and recovery assessment of people suffering from pathologies, such as those affecting the way of walking, i.e., gait. With recent developments in deep learning, state-of-the-art performance can now be achieved using a single 2D-RGB-camera-based gait analysis system, offering an objective assessment of gait-related pathologies. Such systems provide a valuable complement or alternative to the current standard practice of subjective assessment. Most 2D-RGB-camera-based gait analysis approaches rely on compact gait representations, such as the gait energy image, which summarize the characteristics of a walking sequence in a single image. However, such compact representations do not fully capture the temporal information and the dependencies between successive gait movements. This limitation is addressed by proposing a spatiotemporal deep learning approach that uses a selection of key frames to represent a gait cycle. Convolutional and recurrent deep neural networks were combined, processing each gait cycle as a collection of silhouette key frames and allowing the system to learn temporal patterns among the spatial features extracted at individual time instants. Trained with gait sequences from the GAIT-IT dataset, the proposed system improves gait pathology classification accuracy, outperforming state-of-the-art solutions and achieving improved generalization in cross-dataset tests.
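
As a rough illustration of the convolutional-plus-recurrent design outlined above, the PyTorch sketch below applies a shared CNN to each silhouette key frame and feeds the per-frame features to an LSTM before classification. The layer sizes, class count, and clip length are placeholders, not the configuration trained on GAIT-IT.

```python
# Minimal sketch of a CNN + LSTM gait classifier over key frames (illustrative only).
import torch
import torch.nn as nn

class GaitClassifier(nn.Module):
    def __init__(self, n_classes: int = 5, feat_dim: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(                  # per-frame spatial features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, clips):                      # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # (B, T, feat_dim)
        _, (h, _) = self.rnn(feats)                # last hidden state summarizes the cycle
        return self.head(h[-1])                    # class logits

# Toy usage: 2 gait cycles, 10 silhouette key frames each, 64x64 pixels.
logits = GaitClassifier()(torch.randn(2, 10, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 5])
```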
