

Special Issue "Multi-Modal Sensors for Human Behavior Monitoring"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 September 2019).

Special Issue Editors

Prof. Giuseppe Boccignone
Guest Editor
Department of Computer Science, University of Milan, Via Celoria 18, 20133 Milan, Italy
Interests: affective and physiological computing; computational vision; intelligent systems; Bayesian modelling; machine learning
Dr. Paolo Napoletano
Guest Editor
DISCo (Department of Informatics, Systems and Communication), University of Milan-Bicocca, Viale Sarca 336, Milan, Italy
Interests: computer vision and image analysis; intelligent systems; deep learning; machine learning; biomedical signal processing; wearable devices
Prof. Raimondo Schettini
Guest Editor
Dipartimento di Chimica e Biologia "A. Zambelli", Università di Salerno, Via Giovanni Paolo II, 132, 84084 Fisciano, SA, Italy
Interests: computer vision and image analysis; intelligent systems; deep learning; machine learning; color imaging

Special Issue Information

Dear Colleagues,

In everyday life, we are surrounded by various sensors, wearable and otherwise, that explicitly or implicitly record information on our behavior, both visible and hidden (e.g., physiological activity).

These sensors come in many forms: accelerometers, gyroscopes, cameras, electrodermal activity sensors, heart rate monitors, breathing rate monitors, and others.

Most importantly, the multimodal nature of these data makes it possible to sense and understand the many facets of daily human behavior, from physical, voluntary activities to social signaling and lifestyle choices influenced by affect, personal traits, age, and social context.

The intelligent sensing community can exploit the data acquired by these sensors to develop machine-learning techniques that improve predictive models of human behavior.

The purpose of this Special Issue is to gather the latest research in the field of human behavior monitoring, at both the sensing and understanding levels, using multimodal data sources.

Applications of interest include, among others, home automation (domotics), healthcare, transport, education, safety aids, entertainment, and sports.

Given the need for data in this field of research, scientific works that present data collections are also welcome.

Therefore, contributions to this Special Issue may include, but are not limited to:

  • Novel sensing techniques for the non-invasive measurement of physiological signals
  • Internet-of-Things-based architectures for multimodal monitoring of human behavior
  • Learning and inference from multimodal sensory data
  • Real-time multimodal activity recognition
  • Semantic interpretation of multimodal sensory data
  • Multimodal sensor fusion techniques
  • Multimodal databases and benchmarks for behavior monitoring and understanding.

Prof. Giuseppe Boccignone
Dr. Paolo Napoletano
Prof. Raimondo Schettini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Activity monitoring
  • Emotion prediction
  • Stress detection
  • Fatigue detection
  • Fall detection
  • Sport-related activity monitoring
  • Health monitoring
  • Pervasive healthcare
  • IoT-based monitoring systems
  • Machine learning
  • Benchmark
  • Physiological sensors
  • Wearable sensors

Published Papers (7 papers)


Research

Open Access Article
A Model-Based System for Real-Time Articulated Hand Tracking Using a Simple Data Glove and a Depth Camera
Sensors 2019, 19(21), 4680; https://doi.org/10.3390/s19214680 - 28 Oct 2019
Abstract
Tracking detailed hand motion is a fundamental research topic in the area of human-computer interaction (HCI) and has been widely studied for decades. Existing solutions with single-model inputs either require tedious calibration, are expensive, or lack sufficient robustness and accuracy due to occlusions. In this study, we present a real-time system that reconstructs exact hand motion by iteratively fitting a triangular mesh model to the absolute measurement of the hand from a depth camera, under the robust restriction of a simple data glove. We redefine and simplify the function of the data glove to mitigate its limitations (tedious calibration, cumbersome equipment, and hampered movement) and keep our system lightweight. For accurate hand tracking, we introduce a new set of degrees of freedom (DoFs), a shape adjustment term for personalizing the triangular mesh model, and an adaptive collision term to prevent self-intersection. For efficiency, we extract a strong pose-space prior from the data glove to narrow the pose search space. We also present a simplified approach for computing tracking correspondences without loss of accuracy, reducing computation cost. Quantitative experiments show comparable or increased accuracy over the state of the art, with about 40% improvement in robustness. Moreover, our system runs independently of a Graphics Processing Unit (GPU) and reaches 40 frames per second (FPS) at about 25% Central Processing Unit (CPU) usage.
(This article belongs to the Special Issue Multi-Modal Sensors for Human Behavior Monitoring)
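Model-based tracking of this kind typically alternates between finding model-to-measurement correspondences and re-aligning the model. As a generic illustration only (not the authors' implementation, and ignoring the articulated DoFs), the rigid inner alignment step of such an iterative loop can be sketched with the Kabsch algorithm:

```python
import numpy as np

def rigid_align(model, target):
    """Least-squares rigid alignment (R, t) of point set `model` onto
    `target` via the Kabsch algorithm; one inner step of an ICP-style loop."""
    mc, tc = model.mean(axis=0), target.mean(axis=0)
    H = (model - mc).T @ (target - tc)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(len(mc))
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ D @ U.T
    t = tc - R @ mc
    return R, t
```

In a full tracker, this solve would be interleaved with correspondence search and extended with the articulation, shape-adjustment, and collision terms described in the abstract.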

Open Access Article
Practical and Durable Flexible Strain Sensors Based on Conductive Carbon Black and Silicone Blends for Large Scale Motion Monitoring Applications
Sensors 2019, 19(20), 4553; https://doi.org/10.3390/s19204553 - 19 Oct 2019
Abstract
Presented is a flexible capacitive strain sensor, based on the low-cost materials silicone (PDMS) and carbon black (CB), fabricated by casting and curing successive silicone layers: a central PDMS dielectric layer bounded by PDMS/CB blend electrodes and packaged by exterior PDMS films. It was characterized for large flexion-angle wearable motion applications, with strain-sensing properties assessed over large strains (50%) and variations in temperature and humidity. Additionally, suitability for monitoring large tissue deformation was established by integration with an in vitro digestive model. The capacitive gauge factor was approximately constant at 0.86 over these conditions for the linear strain range (3 to 47%). Durability was established from consistent relative capacitance changes over 10,000 strain cycles, with varying strain frequency and elongation up to 50%. Wearability and high-flexion-angle human motion detection were demonstrated by integration with an elbow band, with clear detection of motion ranges up to 90°. The device's simple structure and fabrication method, low-cost materials, and robust performance offer promise for expanding the availability of wearable sensor systems.
(This article belongs to the Special Issue Multi-Modal Sensors for Human Behavior Monitoring)
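For reference, the capacitive gauge factor quoted above is the relative capacitance change divided by the applied strain. A minimal sketch of the computation, with hypothetical readings rather than data from the paper:

```python
def capacitive_gauge_factor(c0, c_strained, strain):
    """Gauge factor GF = (dC / C0) / strain, dimensionless.

    c0         : unstrained capacitance (e.g. pF)
    c_strained : capacitance under the applied strain (same units)
    strain     : engineering strain (e.g. 0.10 for 10% elongation)
    """
    if strain == 0:
        raise ValueError("strain must be non-zero")
    return ((c_strained - c0) / c0) / strain

# Hypothetical example: a 10 pF sensor reading 10.86 pF at 10% strain
gf = capacitive_gauge_factor(10.0, 10.86, 0.10)  # close to the 0.86 reported above
```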

Open Access Article
Use of Machine Learning and Wearable Sensors to Predict Energetics and Kinematics of Cutting Maneuvers
Sensors 2019, 19(14), 3094; https://doi.org/10.3390/s19143094 - 12 Jul 2019
Abstract
Changes of direction and cutting maneuvers, including 180-degree turns, are common locomotor actions in team sports, implying high mechanical load. While the mechanics and neurophysiology of turns have been extensively studied in laboratory conditions, modern inertial measurement units allow us to monitor athletes directly on the field. In this study, we applied four supervised machine learning techniques (linear regression, support vector regression/machine, boosted decision trees, and artificial neural networks) to predict turn direction, speed (before/after turn), and the related positive/negative mechanical work. Reference values were computed using an optical motion capture system. We collected data from 13 elite female soccer players performing a shuttle run test, wearing a six-axis inertial sensor at pelvis level. A set of 18 features (predictors) was obtained from accelerometer, gyroscope, and barometer readings. Turn direction classification returned good results (accuracy > 98.4%) with all methods. Support vector regression and neural networks obtained the best performance in the estimation of positive/negative mechanical work (coefficient of determination R2 = 0.42–0.43, mean absolute error = 1.14–1.41 J) and running speed before/after the turns (R2 = 0.66–0.69, mean absolute error = 0.15–0.18 m/s). Although the models can be extended to different angles, we showed that meaningful information on turn kinematics and energetics can be obtained from inertial units with a data-driven approach.
(This article belongs to the Special Issue Multi-Modal Sensors for Human Behavior Monitoring)
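As a hedged sketch of the regression setup described above (synthetic stand-in data and arbitrary hyperparameters, not the study's 18 real features or tuned models), support vector regression on IMU-style feature vectors might look like:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 18))     # stand-in for the 18 accelerometer/gyroscope/barometer features
# Hypothetical target (e.g. positive mechanical work, J): a noisy linear mix
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=200)

model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=10.0))
model.fit(X[:150], y[:150])
r2 = model.score(X[150:], y[150:])  # coefficient of determination on held-out samples
```

In the study itself, reference targets came from optical motion capture rather than a synthetic function, and several learners were compared under the same protocol.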

Open Access Article
A Wireless Visualized Sensing System with Prosthesis Pose Reconstruction for Total Knee Arthroplasty
Sensors 2019, 19(13), 2909; https://doi.org/10.3390/s19132909 - 01 Jul 2019
Abstract
The surgical quality of total knee arthroplasty (TKA) depends on how accurately the knee prosthesis is implanted. The knee prosthesis is composed of the femoral component, the plastic spacer, and the tibial component. The instant and kinetic relative pose of the knee prosthesis is one key aspect of surgical quality evaluation. In this work, a wireless visualized sensing system with instant and kinetic prosthesis pose reconstruction has been proposed and implemented. The system consists of a multimodal sensing device, a wireless data receiver, and a data processing workstation. The sensing device has the identical shape and size as the spacer. During the surgery, the sensing device temporarily replaces the spacer and captures images and the contact force distribution inside the knee joint prosthesis. It is connected wirelessly to the external data receiver through a 432 MHz data link, and the data are then sent to the workstation for processing. A signal processing method to analyze the instant and kinetic prosthesis pose from the image data has been investigated. Experiments on the prototype system show that the absolute reconstruction errors of the flexion-extension rotation angle (the pitch rotation of the femoral component around the horizontal long axis of the spacer), the internal-external rotation (the yaw rotation of the femoral component around the spacer's vertical axis), and the mediolateral translation displacement between the centers of the femoral component and the spacer, based on the image data, are less than 1.73°, 1.08°, and 1.55 mm, respectively. The system provides a force balance measurement with an error of less than ±5 N. The experiments also show that kinetic pose reconstruction can detect surgical defects that cannot be detected by force measurement or instant pose reconstruction.
(This article belongs to the Special Issue Multi-Modal Sensors for Human Behavior Monitoring)

Open Access Article
3D Pose Detection of Closely Interactive Humans Using Multi-View Cameras
Sensors 2019, 19(12), 2831; https://doi.org/10.3390/s19122831 - 25 Jun 2019
Abstract
We propose a method to automatically detect the 3D poses of closely interacting humans from sparse multi-view images at a single time instance. This is a challenging problem due to strong partial occlusion and truncation between humans and the absence of a tracking process to provide a priori pose information. To solve this problem, we first obtain 2D joints in every image using OpenPose and human semantic segmentation results from Mask R-CNN. With the 3D joints triangulated from multi-view 2D joints, a two-stage assembling method is proposed to select the correct 3D pose from thousands of pose seeds combined by joint semantic meaning. We further present a novel approach to minimize the interpenetration between human shapes in close interaction. Finally, we test our method on multi-view human-human interaction (MHHI) datasets. Experimental results demonstrate that our method achieves a high rate of visually correct reconstructions and outperforms the existing method in accuracy and real-time capability.
(This article belongs to the Special Issue Multi-Modal Sensors for Human Behavior Monitoring)
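The triangulation of 3D joints from multi-view 2D detections mentioned above is commonly done with the standard linear (DLT) method; a generic sketch, not the paper's code:

```python
import numpy as np

def triangulate(points_2d, proj_mats):
    """DLT triangulation of one 3D joint from two or more views.

    points_2d : list of (u, v) pixel coordinates, one per camera
    proj_mats : list of 3x4 camera projection matrices
    """
    rows = []
    for (u, v), P in zip(points_2d, proj_mats):
        rows.append(u * P[2] - P[0])   # each view contributes two
        rows.append(v * P[2] - P[1])   # linear constraints on X
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                         # null-space solution (homogeneous)
    return X[:3] / X[3]                # dehomogenize
```

The assembling stage in the paper then has to decide which triangulated candidates belong to which person, which is where the two-stage selection over pose seeds comes in.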

Open Access Article
Gyroscope-Based Continuous Human Hand Gesture Recognition for Multi-Modal Wearable Input Device for Human Machine Interaction
Sensors 2019, 19(11), 2562; https://doi.org/10.3390/s19112562 - 05 Jun 2019
Abstract
Human hand gestures are a widely accepted form of real-time input for devices providing a human-machine interface. However, hand gestures have limitations in terms of effectively conveying the complexity and diversity of human intentions. This study attempted to address these limitations by proposing a multi-modal input device, based on the observation that each application program requires different user intentions (and demands different functions) and that the machine already knows which application is running. When the running application changes, the same gesture offers a new function required in the new application, and thus we can greatly reduce the number and complexity of required hand gestures. As a simple wearable sensor, we employ one miniature wireless three-axis gyroscope, the data of which are processed by correlation analysis with normalized covariance for continuous gesture recognition. Recognition accuracy is improved by considering both gesture patterns and signal strength and by incorporating a learning mode. In our system, six unit hand gestures successfully provide most functions offered by multiple input devices. The characteristics of our approach are automatically adjusted by acknowledging the application programs or learning user preferences. In three application programs, the approach shows good accuracy (90–96%), which is very promising in terms of designing a unified solution. Furthermore, the accuracy reaches 100% as users become more familiar with the system.
(This article belongs to the Special Issue Multi-Modal Sensors for Human Behavior Monitoring)
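The correlation analysis with normalized covariance mentioned above can be illustrated with a generic template-matching sketch (hypothetical signals and threshold, not the paper's implementation):

```python
import numpy as np

def normalized_covariance(window, template):
    """Pearson-style normalized covariance between two equal-length signals.
    Returns a value in [-1, 1]; values near 1 indicate a matching gesture shape."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w * w).sum() * (t * t).sum())
    if denom == 0:
        return 0.0
    return float((w * t).sum() / denom)

def detect(stream, template, threshold=0.9):
    """Slide the template over a gyroscope stream; return start indices
    where the normalized covariance exceeds the threshold."""
    n = len(template)
    return [i for i in range(len(stream) - n + 1)
            if normalized_covariance(stream[i:i + n], template) >= threshold]
```

A full system, as the abstract notes, would additionally weigh signal strength and adapt its templates per application and per user.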

Open Access Article
A Textile Sensor for Long Durations of Human Motion Capture
Sensors 2019, 19(10), 2369; https://doi.org/10.3390/s19102369 - 23 May 2019
Abstract
Human posture and movement analysis is important in the areas of rehabilitation, sports medicine, and virtual training. However, the development of sensors with good accuracy, low cost, light weight, and suitability for long durations of human motion capture is still an ongoing issue. In this paper, a new flexible textile sensor for knee joint movement measurement was developed using ordinary fabrics and conductive yarns. An electrogoniometer was adopted as a standard reference to calibrate the proposed sensor and validate its accuracy. Knee movements from different daily activities were performed to evaluate the performance of the sensor. The results show that the proposed sensor can be used to monitor knee joint motion in everyday life with acceptable accuracy.
(This article belongs to the Special Issue Multi-Modal Sensors for Human Behavior Monitoring)
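Calibrating such a sensor against an electrogoniometer reference typically amounts to a least-squares fit mapping raw readings to angles; a minimal sketch with hypothetical paired readings, not data from the paper:

```python
import numpy as np

# Hypothetical paired readings: raw textile-sensor values (arbitrary units)
# and reference knee angles (degrees) from the electrogoniometer.
raw = np.array([102.0, 155.0, 210.0, 263.0, 311.0])
angle_ref = np.array([0.0, 22.5, 45.0, 67.5, 90.0])

# First-order calibration: angle = a * raw + b, fit by least squares
a, b = np.polyfit(raw, angle_ref, 1)

def to_angle(x):
    """Convert a raw sensor reading to a calibrated knee angle (degrees)."""
    return a * x + b

# Residual error of the fit on the calibration points themselves
rmse = np.sqrt(np.mean((to_angle(raw) - angle_ref) ** 2))
```

Higher-order fits or per-activity calibration curves are possible extensions if the sensor response is not sufficiently linear.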
