Special Issue "Advanced Sensing and Machine-Learning-Based Analysis of Human Behaviour and Physiology"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 31 December 2021.

Special Issue Editors

Dr. Zhaojie Ju
Guest Editor
School of Computing, University of Portsmouth, UK
Interests: machine learning; pattern recognition; robotics
Dr. Dalin Zhou
Guest Editor
School of Computing, University of Portsmouth, UK
Interests: biosensory data analysis; wearable sensors; haptics
Dr. Jinguo Liu
Guest Editor
Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
Interests: intelligent robotics; machine learning; automation
Dr. Dingguo Zhang
Guest Editor
Department of Electronic and Electrical Engineering, University of Bath, Bath BA2 7AY, UK
Interests: advanced neural interfaces; biologically inspired autonomous systems; biomedical signal processing for both in-vivo and ex-vivo applications; neuronal modeling and computing
Dr. YongAn Huang
Guest Editor
Huazhong University of Science and Technology, Wuhan, China
Interests: flexible electronics sensors and manufacturing

Special Issue Information

Dear Colleagues,

Successful human–machine and human–robot interaction depends on adequate communication and understanding between humans and machines/robots during their contact. Recent developments in sensing and analysis technology have enabled more efficient human–machine/human–robot interaction. In particular, a good understanding of human behaviour and physiology allows machines/robots to interact with users more intuitively and in a human-centred manner, and has therefore attracted growing research interest. In response, advanced sensing technology (wearable sensing, remote sensing, multimodal sensing, and so on), combined with machine-learning-based analysis (feature engineering, classic machine learning models, deep learning approaches, and so on), keeps advancing to meet the needs of human–machine/human–robot systems and their applications.

This Special Issue aims to gather the most recent developments in sensing- and machine-learning-based analysis, with a particular focus on human behaviour and physiology, to push forward the frontier of human–machine/human–robot interaction. The scope of this Special Issue includes, but is not limited to, the following areas:

  • Advanced sensory acquisition
  • Tactile sensor development
  • Wearable sensing devices
  • Remote sensing devices
  • Multimodal sensing
  • Human behaviour sensing
  • Physiology sensing and measurement
  • Sensing for human–machine interaction
  • Sensing for human–robot interaction
  • Machine-learning-based sensory data analysis
  • Deep-learning-based sensory data analysis
  • Neural networks for sensory interpretation
  • Computational intelligence in sensing and analysis

Dr. Zhaojie Ju
Dr. Dalin Zhou
Dr. Jinguo Liu
Dr. Dingguo Zhang
Dr. YongAn Huang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Tactile sensing
  • Wearable sensing
  • Human behaviour sensing
  • Physiology sensing

Published Papers (4 papers)


Research


Communication
Investigation on the Sampling Frequency and Channel Number for Force Myography Based Hand Gesture Recognition
Sensors 2021, 21(11), 3872; https://doi.org/10.3390/s21113872 - 03 Jun 2021
Abstract
Force myography (FMG) is a method that uses pressure sensors to measure muscle contraction indirectly. It is a valuable substitute for the conventional approach of using myoelectric signals for hand gesture recognition. To achieve gesture recognition at minimum cost, it is necessary to determine the minimum sampling frequency and the minimal number of channels. To investigate the effect of sampling frequency and channel number on the accuracy of gesture recognition, a 16-channel hardware system was designed to capture forearm FMG signals at a maximum sampling frequency of 1 kHz. Using this acquisition equipment, a force myography database containing data from 10 subjects was created. In this paper, gesture recognition accuracies under different sampling frequencies and channel numbers are obtained. At a 1 kHz sampling rate with 16 channels, four of the five tested classifiers reach an accuracy of about 99%. Further experimental results indicate that: (1) the sampling frequency of the FMG signal can be as low as 5 Hz for the recognition of static movements; (2) reducing the number of channels has a large impact on accuracy, and the suggested channel number for gesture recognition is eight; and (3) the distribution of the sensors on the forearm affects recognition accuracy, and it is possible to improve accuracy by optimizing sensor positions.
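To make the downsampling step described in the abstract concrete, the sketch below decimates a synthetic 16-channel recording from 1 kHz to 5 Hz by keeping every 200th sample. The data and the decimation-only approach are illustrative assumptions, not the paper's actual pipeline:

```python
# Decimate a simulated 16-channel FMG recording from 1 kHz to 5 Hz
# by keeping every 200th sample (1000 / 5 = 200). Synthetic data only.
import math

FS_IN, FS_OUT, CHANNELS, SECONDS = 1000, 5, 16, 2
step = FS_IN // FS_OUT  # 200

# One sine wave per channel stands in for real pressure-sensor readings.
signal = [
    [math.sin(2 * math.pi * (ch + 1) * t / FS_IN) for ch in range(CHANNELS)]
    for t in range(FS_IN * SECONDS)
]

downsampled = signal[::step]

print(len(signal), len(downsampled))  # 2000 10
print(len(downsampled[0]))            # 16
```

For real sensor data, a low-pass filter before decimation would avoid aliasing; for the quasi-static hand postures studied above, plain decimation is often adequate.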

Article
How to Represent Paintings: A Painting Classification Using Artistic Comments
Sensors 2021, 21(6), 1940; https://doi.org/10.3390/s21061940 - 10 Mar 2021
Cited by 1
Abstract
The goal of large-scale automatic painting analysis is to classify and retrieve images using machine learning techniques. Traditional methods apply computer vision techniques to the paintings themselves to enable computers to represent the art content. In this work, we propose using a graph convolutional network and artistic comments, rather than the paintings' colour information, to classify the type, school, timeframe and author of paintings by applying natural language processing (NLP) techniques. First, we build a single artistic comment graph based on co-occurrence relations and document–word relations, and then train an art graph convolutional network (ArtGCN) on the entire corpus. The nodes, which include the words and documents in the topological graph, are initialized using a one-hot representation; the embeddings are then learned jointly for both words and documents, supervised by the known class labels of the training paintings. Through extensive experiments on different classification tasks using different input sources, we demonstrate that the proposed methods achieve state-of-the-art performance. In addition, ArtGCN can learn word and painting embeddings, and we find that these play a major role in describing the labels and retrieving paintings, respectively.
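The comment graph the abstract describes combines word–word co-occurrence edges with document–word edges. The minimal sketch below builds both edge sets from a toy corpus; the comments and the unweighted counts are invented for illustration and do not reflect the authors' dataset or weighting scheme (which a text GCN would typically refine, e.g. with PMI and TF-IDF):

```python
# Build word co-occurrence edges (words sharing a comment) and
# document-word edges, the two edge types of a text graph like ArtGCN's.
from collections import Counter
from itertools import combinations

comments = [
    "baroque portrait oil",
    "baroque landscape oil",
    "impressionist landscape",
]

cooc = Counter()      # (word_a, word_b) -> number of shared comments
doc_word = Counter()  # (doc_id, word)   -> term frequency
for doc_id, text in enumerate(comments):
    words = text.split()
    for w in words:
        doc_word[(doc_id, w)] += 1
    for a, b in combinations(sorted(set(words)), 2):
        cooc[(a, b)] += 1

print(cooc[("baroque", "oil")])   # 2: the pair appears in two comments
print(doc_word[(0, "portrait")])  # 1
```

In the full model these counts become the weighted adjacency matrix over word and document nodes, with one-hot node features, exactly as the abstract outlines.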

Article
A Bayesian Driver Agent Model for Autonomous Vehicles System Based on Knowledge-Aware and Real-Time Data
Sensors 2021, 21(2), 331; https://doi.org/10.3390/s21020331 - 06 Jan 2021
Abstract
A key research area in autonomous driving is how to model the driver's decision-making behaviour, since this is significant for the traffic safety and efficiency of self-driving vehicles. However, the uncertain characteristics of vehicle and pedestrian trajectories on urban roads pose severe challenges to the cognitive understanding and decision-making of autonomous vehicle systems in terms of accuracy and robustness. To overcome these problems, this paper proposes a Bayesian driver agent (BDA) model, a vision-based autonomous vehicle system with learning and inference methods inspired by the cognitive psychology of human drivers. Unlike end-to-end learning methods and traditional rule-based methods, our approach breaks the driving system up into a scene recognition module and a decision inference module. The perception module, based on a multi-task convolutional neural network (CNN), takes a driver's-view image as its input and predicts the traffic scene's feature values. The decision module, based on a dynamic Bayesian network (DBN), then infers a decision from the traffic scene's feature values. To explore the validity of the BDA model, we performed experiments on a driving simulation platform. The BDA model extracts the scene feature values effectively and accurately predicts the probability distribution of the human driver's decision-making process based on inference. Taking the lane-changing scenario as an example to verify the model, the intraclass correlation coefficient (ICC) between the BDA model and the human driver's decision process reached 0.984. This work suggests a line of research in scene perception and autonomous decision-making that may apply to autonomous vehicle systems.
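The core of the decision module is Bayesian inference over driver decisions given scene features. The toy example below applies a single Bayes-rule update for the lane-changing scenario; the states, observation, and all probabilities are invented for illustration and are not taken from the paper's DBN:

```python
# One-step Bayesian update over a driver's decision given an observed
# scene feature, in the spirit of the BDA decision module. All numbers
# here are hypothetical.
decisions = ["keep_lane", "change_lane"]
prior = {"keep_lane": 0.7, "change_lane": 0.3}

# P(observation | decision): how likely a slow lead vehicle is
# under each decision. Illustrative values only.
likelihood = {
    ("slow_lead", "keep_lane"): 0.2,
    ("slow_lead", "change_lane"): 0.8,
}

obs = "slow_lead"
unnorm = {d: prior[d] * likelihood[(obs, d)] for d in decisions}
z = sum(unnorm.values())
posterior = {d: p / z for d, p in unnorm.items()}

print(round(posterior["change_lane"], 3))  # 0.632
```

A dynamic Bayesian network repeats this kind of update over time, carrying the posterior forward as the next step's prior, which is what lets the model output a probability distribution over decisions rather than a single hard choice.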

Review


Review
Learning for a Robot: Deep Reinforcement Learning, Imitation Learning, Transfer Learning
Sensors 2021, 21(4), 1278; https://doi.org/10.3390/s21041278 - 11 Feb 2021
Cited by 4
Abstract
Dexterous manipulation is an important part of realizing robot intelligence, but manipulators can currently only perform simple tasks such as sorting and packing in structured environments. In view of this problem, this paper presents a state-of-the-art survey on intelligent robots with the capability of autonomous decision-making and learning. The paper first reviews the main achievements and research on robots, which were mainly based on breakthroughs in automatic control and mechanical hardware. With the evolution of artificial intelligence, much research has made further progress in adaptive and robust control. The survey reveals that the latest research in deep learning and reinforcement learning has paved the way for robots to perform highly complex tasks. Furthermore, deep reinforcement learning, imitation learning, and transfer learning in robot control are discussed in detail. Finally, major achievements based on these methods are summarized and analysed thoroughly, and future research challenges are proposed.
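The deep reinforcement learning methods the survey covers all build on the same temporal-difference update. As a hedged illustration (the environment, reward, and hyperparameters below are invented, not from any surveyed work), here is tabular Q-learning on a one-dimensional corridor; deep RL replaces the table with a neural network:

```python
# Minimal tabular Q-learning on a 1-D corridor: start at state 0,
# reward 1 for reaching state 4. Included only to make the RL update
# Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)) concrete.
import random

N_STATES, GOAL = 5, 4        # states 0..4, terminal reward at state 4
ACTIONS = [1, -1]            # step right / left
ALPHA, GAMMA, EPS, EPISODES = 0.5, 0.9, 0.1, 500

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for _ in range(EPISODES):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        if rng.random() < EPS:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # clamp to the corridor
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right from every non-goal state.
policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(GOAL)]
print(policy)
```

Imitation learning would instead fit the policy to expert state–action pairs, and transfer learning would reuse a Q-function (or network) trained on one task as the starting point for another, as the survey discusses.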
