Special Issue "HCI In Smart Environments"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (28 February 2015)

Special Issue Editors

Guest Editor
Dr. Gianluca Paravati

Department of Control and Computer Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino, Italy
Fax: +39 011 090 7099
Interests: multimodal applications; human-machine interaction; image processing; target tracking; 3D rendering; virtual reality; augmented and mixed reality; remote visualization
Guest Editor
Dr. Valentina Gatteschi

Department of Control and Computer Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino, Italy
Fax: +39 011 090 7099
Interests: visual odometry; remote visualization; semantics; natural language processing

Special Issue Information

Dear Colleagues,

Sensors continue to evolve rapidly, becoming smaller, cheaper, more accurate, reliable, efficient, and responsive, while increasingly incorporating communication capabilities. These key factors, together with the availability of new technologies, are driving the growth of the consumer-electronics sensor market and further reducing costs. This scenario fosters the integration of sensors into the everyday objects of our lives, moving us towards Smart Environments that aim to make human interaction with systems a pleasant experience. In turn, it becomes possible to imagine applications never envisioned a few years ago in a wide variety of areas (e.g., Entertainment and Virtual Reality, Smart Home, Smart City, Medicine and Health Care, Indoor Navigation, Automotive, Automation and Maintenance).

The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors (such as touch, motion, wearable, image, proximity, and position sensors) in current and emerging applications for interacting with Smart Environments. Authors are encouraged to submit original research articles, reviews, and other high-quality manuscripts concerning (but not limited to) the following topics:

 

  • Human-Computer Interaction
  • Multimodal Systems and Interfaces
  • Natural User Interfaces
  • Mobile and Wearable Computing
  • Virtual and Augmented Reality
  • Assistive Technologies
  • Sensor data fusion in multi-sensor systems
  • Semantic technologies for multimodal integration of sensor data
  • Sensor knowledge representation
  • Annotation of sensor data
  • Innovative sensing devices
  • Innovative uses of existing sensors
  • Pervasive/Ubiquitous Computing

Dr. Gianluca Paravati
Dr. Valentina Gatteschi
Guest Editors

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs).


Keywords

  • smart environments
  • human computer interaction
  • multimodal systems
  • innovative sensing devices
  • natural user interfaces
  • sensor data fusion
  • semantic technologies for multimodal integration of sensor data

Published Papers (23 papers)


Editorial

Open Access Editorial Human-Computer Interaction in Smart Environments
Sensors 2015, 15(8), 19487-19494; doi:10.3390/s150819487
Received: 1 August 2015 / Accepted: 6 August 2015 / Published: 7 August 2015
Cited by 1 | PDF Full-text (663 KB) | HTML Full-text | XML Full-text
Abstract
Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

Research

Open Access Article Augmented Robotics Dialog System for Enhancing Human–Robot Interaction
Sensors 2015, 15(7), 15799-15829; doi:10.3390/s150715799
Received: 6 March 2015 / Revised: 21 May 2015 / Accepted: 23 June 2015 / Published: 3 July 2015
Cited by 3 | PDF Full-text (14170 KB) | HTML Full-text | XML Full-text
Abstract
Augmented reality, augmented television and second screen are cutting edge technologies that provide end users extra and enhanced information related to certain events in real time. This enriched information helps users better understand such events, at the same time providing a more satisfactory experience. In the present paper, we apply this main idea to human–robot interaction (HRI), to how users and robots interchange information. The ultimate goal of this paper is to improve the quality of HRI, developing a new dialog manager system that incorporates enriched information from the semantic web. This work presents the augmented robotic dialog system (ARDS), which uses natural language understanding mechanisms to provide two features: (i) a non-grammar multimodal input (verbal and/or written) text; and (ii) a contextualization of the information conveyed in the interaction. This contextualization is achieved by information enrichment techniques that link the extracted information from the dialog with extra information about the world available in semantic knowledge bases. This enriched or contextualized information (information enrichment, semantic enhancement or contextualized information are used interchangeably in the rest of this paper) offers many possibilities in terms of HRI. For instance, it can enhance the robot’s pro-activeness during a human–robot dialog (the enriched information can be used to propose new topics during the dialog, while ensuring a coherent interaction). Another possibility is to display additional multimedia content related to the enriched information on a visual device. This paper describes the ARDS and shows a proof of concept of its applications. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
Open Access Article Gaze-Assisted User Intention Prediction for Initial Delay Reduction in Web Video Access
Sensors 2015, 15(6), 14679-14700; doi:10.3390/s150614679
Received: 25 February 2015 / Revised: 13 June 2015 / Accepted: 16 June 2015 / Published: 19 June 2015
Cited by 3 | PDF Full-text (4195 KB) | HTML Full-text | XML Full-text
Abstract
Despite the remarkable improvement of hardware and network technology, the inevitable delay from a user’s command action to a system response is still one of the most crucial influence factors in user experiences (UXs). Especially for a web video service, an initial delay from click action to video start has significant influences on the quality of experience (QoE). The initial delay of a system can be minimized by preparing execution based on predicted user’s intention prior to actual command action. The introduction of the sequential and concurrent flow of resources in human cognition and behavior can significantly improve the accuracy and preparation time for intention prediction. This paper introduces a threaded interaction model and applies it to user intention prediction for initial delay reduction in web video access. The proposed technique consists of a candidate selection module, a decision module and a preparation module that prefetches and preloads the web video data before a user’s click action. The candidate selection module selects candidates in the web page using proximity calculation around a cursor. Meanwhile, the decision module computes the possibility of actual click action based on the cursor-gaze relationship. The preparation activates the prefetching for the selected candidates when the click possibility exceeds a certain limit in the decision module. Experimental results show a 92% hit-ratio, 0.5-s initial delay on average and 1.5-s worst initial delay, which is much less than a user’s tolerable limit in web video access, demonstrating significant improvement of accuracy and advance time in intention prediction by introducing the proposed threaded interaction model. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
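The threaded interaction model above splits the work into candidate selection (cursor proximity), a decision module (cursor-gaze relationship), and preparation (prefetching). The following Python fragment is a minimal sketch of that pipeline under assumptions of our own: the link coordinates, the inverse-distance click score, the radius/threshold constants, and the prefetch placeholder are illustrative and not taken from the paper.

```python
import math

# Hypothetical video links on a page (pixel coordinates); illustrative only.
CANDIDATE_RADIUS = 150      # proximity radius around the cursor (px), assumed
CLICK_THRESHOLD = 0.7       # prefetch when the click score exceeds this, assumed

def select_candidates(links, cursor):
    """Candidate selection: keep links within a radius of the cursor."""
    cx, cy = cursor
    return [l for l in links
            if math.hypot(l["x"] - cx, l["y"] - cy) <= CANDIDATE_RADIUS]

def click_score(candidate, cursor, gaze):
    """Decision module: score rises as cursor and gaze converge on the link."""
    d_cursor = math.hypot(candidate["x"] - cursor[0], candidate["y"] - cursor[1])
    d_gaze = math.hypot(candidate["x"] - gaze[0], candidate["y"] - gaze[1])
    # Toy inverse-distance score in (0, 1]; the paper's actual model differs.
    return 1.0 / (1.0 + 0.01 * (d_cursor + d_gaze))

def prefetch(url):
    """Preparation module stand-in: start fetching the video data early."""
    print(f"prefetching {url} ...")

links = [{"url": "video_a.mp4", "x": 400, "y": 300},
         {"url": "video_b.mp4", "x": 900, "y": 650}]
cursor, gaze = (420, 310), (405, 298)

for cand in select_candidates(links, cursor):
    if click_score(cand, cursor, gaze) > CLICK_THRESHOLD:
        prefetch(cand["url"])
```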

Open Access Article Assessing Visual Attention Using Eye Tracking Sensors in Intelligent Cognitive Therapies Based on Serious Games
Sensors 2015, 15(5), 11092-11117; doi:10.3390/s150511092
Received: 24 February 2015 / Revised: 22 April 2015 / Accepted: 27 April 2015 / Published: 12 May 2015
Cited by 5 | PDF Full-text (19429 KB) | HTML Full-text | XML Full-text
Abstract
This study examines the use of eye tracking sensors as a means to identify children’s behavior in attention-enhancement therapies. For this purpose, a set of data collected from 32 children with different attention skills is analyzed during their interaction with a set of puzzle games. The authors of this study hypothesize that participants with better performance may have quantifiably different eye-movement patterns from users with poorer results. The use of eye trackers outside the research community may help to extend their potential with available intelligent therapies, bringing state-of-the-art technologies to users. The use of gaze data constitutes a new information source in intelligent therapies that may help to build new approaches that are fully-customized to final users’ needs. This may be achieved by implementing machine learning algorithms for classification. The initial study of the dataset has proven a 0.88 (±0.11) classification accuracy with a random forest classifier, using cross-validation and hierarchical tree-based feature selection. Further approaches need to be examined in order to establish more detailed attention behaviors and patterns among children with and without attention problems. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
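The 0.88 (±0.11) accuracy reported above comes from a random forest evaluated with cross-validation on gaze-derived features. The snippet below is a rough sketch of that classification step only: the synthetic feature matrix stands in for the study's dataset, and scikit-learn defaults replace the paper's hierarchical tree-based feature selection.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-child gaze features (fixation counts, durations,
# saccade lengths, ...); the real dataset is not reproduced here.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 12))        # 32 children x 12 gaze features (assumed)
y = rng.integers(0, 2, size=32)      # 1 = attention difficulties, 0 = control

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # cross-validated accuracy
print(f"accuracy: {scores.mean():.2f} (+/- {scores.std():.2f})")
```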
Open Access Article An Informationally Structured Room for Robotic Assistance
Sensors 2015, 15(4), 9438-9465; doi:10.3390/s150409438
Received: 16 January 2015 / Revised: 1 April 2015 / Accepted: 3 April 2015 / Published: 22 April 2015
Cited by 4 | PDF Full-text (8723 KB) | HTML Full-text | XML Full-text
Abstract
The application of assistive technologies for elderly people is one of the most promising and interesting scenarios for intelligent technologies in the present and near future. Moreover, the improvement of the quality of life for the elderly is one of the first priorities in modern countries and societies. In this work, we present an informationally structured room that is aimed at supporting the daily life activities of elderly people. This room integrates different sensor modalities in a natural and non-invasive way inside the environment. The information gathered by the sensors is processed and sent to a centralized management system, which makes it available to a service robot assisting the people. One important restriction of our intelligent room is reducing as much as possible any interference with daily activities. Finally, this paper presents several experiments and situations using our intelligent environment in cooperation with our service robot. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
Open Access Article Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller
Sensors 2015, 15(4), 8642-8663; doi:10.3390/s150408642
Received: 12 January 2015 / Revised: 26 March 2015 / Accepted: 31 March 2015 / Published: 14 April 2015
Cited by 7 | PDF Full-text (3534 KB) | HTML Full-text | XML Full-text
Abstract
This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

Open Access Article Brain Process for Perception of the “Out of the Body” Tactile Illusion for Virtual Object Interaction
Sensors 2015, 15(4), 7913-7932; doi:10.3390/s150407913
Received: 15 November 2014 / Revised: 11 March 2015 / Accepted: 24 March 2015 / Published: 1 April 2015
Cited by 3 | PDF Full-text (2316 KB) | HTML Full-text | XML Full-text
Abstract
“Out of the body” tactile illusion refers to the phenomenon in which one can perceive tactility as if emanating from a location external to the body without any stimulator present there. Taking advantage of such a tactile illusion is one way to provide and realize richer interaction feedback without employing and placing actuators directly at all stimulation target points. However, to further explore its potential, it is important to better understand the underlying physiological and neural mechanism. As such, we measured the brain wave patterns during such tactile illusion and mapped out the corresponding brain activation areas. Participants were given stimulations at different levels with the intention to create veridical (i.e., non-illusory) and phantom sensations at different locations along an external hand-held virtual ruler. The experimental data and analysis indicate that both veridical and illusory sensations involve, among others, the parietal lobe, one of the most important components in the tactile information pathway. In addition, we found that as for the illusory sensation, there is an additional processing resulting in the delay for the ERP (event-related potential) and involvement by the limbic lobe. These point to regarding illusion as a memory and recognition task as a possible explanation. The present study demonstrated some basic understanding; how humans process “virtual” objects and the way associated tactile illusion is generated will be valuable for HCI (Human-Computer Interaction). Full article
(This article belongs to the Special Issue HCI In Smart Environments)
Open Access Article Adaptive Software Architecture Based on Confident HCI for the Deployment of Sensitive Services in Smart Homes
Sensors 2015, 15(4), 7294-7322; doi:10.3390/s150407294
Received: 15 January 2015 / Revised: 17 March 2015 / Accepted: 19 March 2015 / Published: 25 March 2015
Cited by 3 | PDF Full-text (3274 KB) | HTML Full-text | XML Full-text
Abstract
Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
Open Access Article Design of a Mobile Brain Computer Interface-Based Smart Multimedia Controller
Sensors 2015, 15(3), 5518-5530; doi:10.3390/s150305518
Received: 2 January 2015 / Revised: 11 February 2015 / Accepted: 2 March 2015 / Published: 6 March 2015
Cited by 6 | PDF Full-text (2148 KB) | HTML Full-text | XML Full-text
Abstract
Music is a way of expressing our feelings and emotions. Suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user’s physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user’s physiological state. Here, a commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built in the multimedia platform was developed to analyze the user’s EEG feature and select music according his/her state. The relationship between the user’s state and music sorted by listener’s preference was also examined in this study. The experimental results show that real-time music biofeedback according a user’s EEG feature may positively improve the user’s attention state. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
Open Access Article Human Computer Interactions in Next-Generation of Aircraft Smart Navigation Management Systems: Task Analysis and Architecture under an Agent-Oriented Methodological Approach
Sensors 2015, 15(3), 5228-5250; doi:10.3390/s150305228
Received: 15 January 2015 / Revised: 12 February 2015 / Accepted: 16 February 2015 / Published: 4 March 2015
Cited by 3 | PDF Full-text (2544 KB) | HTML Full-text | XML Full-text
Abstract
The limited efficiency of current air traffic systems will require a next-generation of Smart Air Traffic System (SATS) that relies on current technological advances. This challenge means a transition toward a new navigation and air-traffic procedures paradigm, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However efforts for developing such tools need to be inspired on a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper is focused on airborne HCI into SATS where cockpit inputs came from aircraft navigation systems, surrounding traffic situation, controllers’ indications, etc. So the HCI is intended to enhance situation awareness and decision-making through pilot cockpit. This work approach considers SATS as a system distributed on a large-scale with uncertainty in a dynamic environment. Therefore, a multi-agent systems based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
Open Access Article Biosignal Analysis to Assess Mental Stress in Automatic Driving of Trucks: Palmar Perspiration and Masseter Electromyography
Sensors 2015, 15(3), 5136-5150; doi:10.3390/s150305136
Received: 12 January 2015 / Revised: 16 February 2015 / Accepted: 17 February 2015 / Published: 2 March 2015
Cited by 4 | PDF Full-text (1271 KB) | HTML Full-text | XML Full-text
Abstract
Nowadays insight into human-machine interaction is a critical topic with the large-scale development of intelligent vehicles. Biosignal analysis can provide a deeper understanding of driver behaviors that may indicate rationally practical use of the automatic technology. Therefore, this study concentrates on biosignal analysis to quantitatively evaluate mental stress of drivers during automatic driving of trucks, with vehicles set at a closed gap distance apart to reduce air resistance to save energy consumption. By application of two wearable sensor systems, a continuous measurement was realized for palmar perspiration and masseter electromyography, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about 25 m gap distance as a reference. It was found that mental stress significantly increased when the gap distances decreased, and an abrupt increase in mental stress of drivers was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
Open Access Article Adding Pluggable and Personalized Natural Control Capabilities to Existing Applications
Sensors 2015, 15(2), 2832-2859; doi:10.3390/s150202832
Received: 28 October 2014 / Revised: 21 January 2015 / Accepted: 26 January 2015 / Published: 28 January 2015
Cited by 2 | PDF Full-text (2871 KB) | HTML Full-text | XML Full-text
Abstract
Advancements in input device and sensor technologies led to the evolution of the traditional human-machine interaction paradigm based on the mouse and keyboard. Touch-, gesture- and voice-based interfaces are integrated today in a variety of applications running on consumer devices (e.g., gaming consoles and smartphones). However, to allow existing applications running on desktop computers to utilize natural interaction, significant re-design and re-coding efforts may be required. In this paper, a framework designed to transparently add multi-modal interaction capabilities to applications to which users are accustomed is presented. Experimental observations confirmed the effectiveness of the proposed framework and led to a classification of those applications that could benefit more from the availability of natural interaction modalities. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
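The framework above transparently maps natural-interaction events onto commands that unmodified desktop applications already accept. Below is a small, hypothetical sketch of such a pluggable mapping layer; the event names, bindings, and the keystroke-injection placeholder are our own illustrations, not the paper's API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class NaturalEvent:
    modality: str   # e.g. "gesture", "voice", "touch"
    name: str       # e.g. "swipe_left", "say_open"

def send_keystroke(keys: str) -> None:
    # Placeholder: a real framework would inject the keystroke into the
    # unmodified target application (e.g. through OS input/accessibility APIs).
    print(f"injecting keystroke: {keys}")

# Per-user, per-application bindings can be swapped without touching the app.
bindings: Dict[Tuple[str, str], Callable[[], None]] = {
    ("gesture", "swipe_left"): lambda: send_keystroke("ctrl+pageup"),
    ("gesture", "swipe_right"): lambda: send_keystroke("ctrl+pagedown"),
    ("voice", "say_open"): lambda: send_keystroke("ctrl+o"),
}

def dispatch(event: NaturalEvent) -> None:
    action = bindings.get((event.modality, event.name))
    if action is not None:
        action()

dispatch(NaturalEvent("gesture", "swipe_left"))
```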
Open Access Article Eye/Head Tracking Technology to Improve HCI with iPad Applications
Sensors 2015, 15(2), 2244-2264; doi:10.3390/s150202244
Received: 6 November 2014 / Revised: 28 November 2014 / Accepted: 12 January 2015 / Published: 22 January 2015
Cited by 5 | PDF Full-text (826 KB) | HTML Full-text | XML Full-text
Abstract
In order to improve human-computer interaction (HCI) for people with special needs, this paper presents an alternative form of interaction, which uses the iPad’s front camera and eye/head tracking technology. With this capability operating in the background, the user can control already developed or new applications for the iPad by moving their eyes and/or head. Many techniques are currently used to detect facial features, such as the eyes or even the face itself. Open-source libraries exist for this purpose, such as OpenCV, which enable very reliable and accurate detection algorithms, such as Haar Cascades, to be applied using very high-level programming. All processing is undertaken in real time, and it is therefore important to pay close attention to the limited resources (processing capacity) of devices such as the iPad. The system was validated in tests involving 22 users of different ages and characteristics (people with dark and light-colored eyes and with/without glasses). These tests were performed to assess user/device interaction and to ascertain whether it works properly. The system obtained an accuracy of between 60% and 100% in the three test exercises taken into consideration. The results showed that the Haar Cascade was highly effective, detecting faces in 100% of cases, unlike the eyes and the pupil, where interference (light and shade) reduced effectiveness. In addition to ascertaining the effectiveness of the system via these exercises, the demo application has also helped to show that user constraints need not affect the enjoyment and use of a particular type of technology. In short, the results obtained are encouraging and these systems may continue to be developed if extended and updated in the future. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
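The detection step described above relies on OpenCV Haar cascades for the face and eyes. The following is a minimal desktop Python sketch of that loop, assuming a local webcam as a stand-in for the iPad's front camera; cascade choices and parameters are typical defaults, not the paper's exact configuration.

```python
import cv2

# Haar cascade files ship with OpenCV and are resolved via cv2.data.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)          # webcam stand-in for the front camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]                 # search for eyes inside the face
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```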

Open Access Article Single-Sample Face Recognition Based on Intra-Class Differences in a Variation Model
Sensors 2015, 15(1), 1071-1087; doi:10.3390/s150101071
Received: 17 September 2014 / Accepted: 10 December 2014 / Published: 8 January 2015
Cited by 7 | PDF Full-text (1701 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a novel random facial variation modeling system for sparse representation face recognition is presented. Although recently Sparse Representation-Based Classification (SRC) has represented a breakthrough in the field of face recognition due to its good performance and robustness, there is the critical problem that SRC needs sufficiently large training samples to achieve good performance. To address these issues, we challenge the single-sample face recognition problem with intra-class differences of variation in a facial image model based on random projection and sparse representation. In this paper, we present a developed facial variation modeling systems composed only of various facial variations. We further propose a novel facial random noise dictionary learning method that is invariant to different faces. The experiment results on the AR, Yale B, Extended Yale B, MIT and FEI databases validate that our method leads to substantial improvements, particularly in single-sample face recognition problems. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
Open Access Article A Real-Time Pinch-to-Zoom Motion Detection by Means of a Surface EMG-Based Human-Computer Interface
Sensors 2015, 15(1), 394-407; doi:10.3390/s150100394
Received: 12 October 2014 / Accepted: 10 December 2014 / Published: 29 December 2014
Cited by 6 | PDF Full-text (2488 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a system for inferring the pinch-to-zoom gesture using surface EMG (Electromyography) signals in real time. Pinch-to-zoom, which is a common gesture in smart devices such as an iPhone or an Android phone, is used to control the size of images or web pages according to the distance between the thumb and index finger. To infer the finger motion, we recorded EMG signals obtained from the first dorsal interosseous muscle, which is highly related to the pinch-to-zoom gesture, and used a support vector machine for classification between four finger motion distances. The powers which are estimated by Welch’s method were used as feature vectors. In order to solve the multiclass classification problem, we applied a one-versus-one strategy, since a support vector machine is basically a binary classifier. As a result, our system yields 93.38% classification accuracy averaged over six subjects. The classification accuracy was estimated using 10-fold cross validation. Through our system, we expect to not only develop practical prosthetic devices but to also construct a novel user experience (UX) for smart devices. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
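The classification pipeline above uses Welch-estimated power features and a support vector machine (one-versus-one for the four distances), evaluated with 10-fold cross-validation. A toy sketch of that pipeline follows; the sampling rate, frequency bands, and synthetic EMG windows are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 1000  # assumed surface-EMG sampling rate (Hz)

def band_power_features(emg_window):
    """Welch power spectral density summed over a few bands as features."""
    f, pxx = welch(emg_window, fs=FS, nperseg=256)
    bands = [(20, 60), (60, 150), (150, 300), (300, 450)]   # assumed bands
    return [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in bands]

# Synthetic stand-in for windows of first dorsal interosseous EMG,
# labelled with one of four pinch distances (0-3).
rng = np.random.default_rng(1)
windows = rng.normal(size=(120, 1024))
labels = rng.integers(0, 4, size=120)

X = np.array([band_power_features(w) for w in windows])
clf = SVC(kernel="rbf", C=1.0)     # SVC handles multiclass via one-versus-one
acc = cross_val_score(clf, X, labels, cv=10)   # 10-fold cross-validation
print(f"mean accuracy: {acc.mean():.3f}")
```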
Open Access Article Face Recognition System for Set-Top Box-Based Intelligent TV
Sensors 2014, 14(11), 21726-21749; doi:10.3390/s141121726
Received: 18 April 2014 / Revised: 4 November 2014 / Accepted: 11 November 2014 / Published: 18 November 2014
Cited by 7 | PDF Full-text (4695 KB) | HTML Full-text | XML Full-text
Abstract
Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of a STB is quite low, the smart TV functionalities that can be implemented in a STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality levels of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with low resource set-top box and low cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras. Our research has the following four novelties: first, the candidate regions in a viewer’s face are detected in an image captured by a camera connected to the STB via low processing background subtraction and face color filtering; second, the detected candidate regions of face are transmitted to a server that has high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer’s face region are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall; precision; and genuine acceptance rate were about 95.7%; 96.2%; and 90.2%, respectively. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
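The final matching stage above compares a viewer's face against five pose templates using multi-level local binary patterns. The sketch below is a deliberately simplified single-level LBP histogram match with a chi-square distance; the image size, acceptance threshold, and use of scikit-image are our assumptions rather than the paper's exact procedure.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, points=8, radius=1):
    """Normalized histogram of uniform LBP codes over a face region."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2                      # uniform LBP yields P + 2 codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

def chi_square(probe, template, eps=1e-10):
    """Chi-square distance between histograms (lower = more similar)."""
    return float(((probe - template) ** 2 / (probe + template + eps)).sum())

# Synthetic stand-in for a registered viewer's five pose templates and a probe.
rng = np.random.default_rng(4)
templates = [lbp_histogram(rng.integers(0, 256, (64, 64)).astype(np.uint8))
             for _ in range(5)]
probe = lbp_histogram(rng.integers(0, 256, (64, 64)).astype(np.uint8))

best = min(chi_square(probe, t) for t in templates)
print("accept" if best < 0.5 else "reject", f"(score={best:.3f})")  # assumed threshold
```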

Open Access Article Adaptive Activity and Environment Recognition for Mobile Phones
Sensors 2014, 14(11), 20753-20778; doi:10.3390/s141120753
Received: 10 September 2014 / Revised: 16 October 2014 / Accepted: 20 October 2014 / Published: 3 November 2014
Cited by 9 | PDF Full-text (767 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, an adaptive activity and environment recognition algorithm running on a mobile phone is presented. The algorithm makes inferences based on sensor and radio receiver data provided by the phone. A wide set of features that can be extracted from these data sources were investigated, and a Bayesian maximum a posteriori classifier was used for classifying between several user activities and environments. The accuracy of the method was evaluated on a dataset collected in a real-life trial. In addition, comparison to other state-of-the-art classifiers, namely support vector machines and decision trees, was performed. To make the system adaptive for individual user characteristics, an adaptation algorithm for context model parameters was designed. Moreover, a confidence measure for the classification correctness was designed. The proposed adaptation algorithm and confidence measure were evaluated on a second dataset obtained from another real-life trial, where the users were requested to provide binary feedback on the classification correctness. The results show that the proposed adaptation algorithm is effective at improving the classification accuracy. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
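Classification above is a Bayesian maximum a posteriori decision over features from the phone's sensors and radio receivers, with a confidence measure that gates user feedback. A compact stand-in for that step uses Gaussian naive Bayes (a MAP decision under Gaussian class likelihoods) on synthetic features; the class labels and sample sizes are invented for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-window accelerometer/radio features; classes
# might be still / walking / vehicle / indoors / outdoors (assumed labels).
rng = np.random.default_rng(2)
X = rng.normal(size=(600, 8))
y = rng.integers(0, 5, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB()                 # MAP decision under Gaussian likelihoods
clf.fit(X_tr, y_tr)

# A simple confidence measure: the posterior of the winning class, which
# could be used to decide when to ask the user for feedback.
posteriors = clf.predict_proba(X_te)
confidence = posteriors.max(axis=1)
print("accuracy:", clf.score(X_te, y_te), "mean confidence:", float(confidence.mean()))
```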
Open Access Article Laser Spot Tracking Based on Modified Circular Hough Transform and Motion Pattern Analysis
Sensors 2014, 14(11), 20112-20133; doi:10.3390/s141120112
Received: 4 August 2014 / Revised: 14 October 2014 / Accepted: 17 October 2014 / Published: 27 October 2014
Cited by 4 | PDF Full-text (5092 KB) | HTML Full-text | XML Full-text
Abstract
Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas–Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
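The tracker above combines a circular Hough transform for spot detection with Lucas–Kanade motion analysis for verification. The following plain-OpenCV sketch shows that detect-then-track loop; the Hough parameters and the input video path are illustrative guesses, and the paper's modifications to the transform are not reproduced here.

```python
import cv2
import numpy as np

def detect_spot(gray):
    """Circular Hough transform over a blurred frame; returns the best circle center."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=20, minRadius=2, maxRadius=15)
    if circles is None:
        return None
    x, y, _ = circles[0][0]
    return np.array([[x, y]], dtype=np.float32).reshape(-1, 1, 2)

cap = cv2.VideoCapture("outdoor_scene.mp4")   # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
point = detect_spot(prev_gray) if ok else None

while ok and point is not None:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lucas-Kanade optical flow checks that the candidate moves coherently.
    next_pt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, point, None)
    point = next_pt if status[0][0] == 1 else detect_spot(gray)  # re-detect if lost
    prev_gray = gray
```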
Open Access Article Estimation of Eye Closure Degree Using EEG Sensors and Its Application in Driver Drowsiness Detection
Sensors 2014, 14(9), 17491-17515; doi:10.3390/s140917491
Received: 20 August 2014 / Revised: 11 September 2014 / Accepted: 11 September 2014 / Published: 18 September 2014
Cited by 9 | PDF Full-text (3367 KB) | HTML Full-text | XML Full-text
Abstract
Currently, driver drowsiness detectors using video-based technology are being widely studied. Eyelid closure degree (ECD) is the main measure of the video-based methods; however, drawbacks such as brightness limitations and practical hurdles such as driver distraction limit their success. This study presents a way to compute the ECD using EEG sensors instead of video-based methods. The premise is that the ECD exhibits a linear relationship with changes of the occipital EEG. A total of 30 subjects are included in this study: ten of them participated in a simple proof-of-concept experiment to verify the linear relationship between ECD and EEG, and then twenty participated in a monotonous highway driving experiment in a driving simulator environment to test the robustness of the linear relationship in real-life applications. Taking the video-based method as a reference, the Alpha power percentage from the O2 channel is found to be the best input feature for linear regression estimation of the ECD. The best overall squared correlation coefficient (SCC, denoted by r2) and mean squared error (MSE), validated by a linear support vector regression model and the leave-one-subject-out method, are r2 = 0.930 and MSE = 0.013. The proposed linear EEG-ECD model can achieve 87.5% and 70.0% accuracy for male and female subjects, respectively, for a driver drowsiness application, percentage eyelid closure over the pupil over time (PERCLOS). This new ECD estimation method not only addresses the drawbacks of video-based methods, but also makes ECD estimation more computationally efficient and easier to implement in EEG sensors in real time. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
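The key result above is that the alpha-band power percentage of the O2 channel predicts eyelid closure degree through linear support vector regression. Below is a toy sketch of that feature extraction and regression step; the sampling rate, band edges, and synthetic data are assumptions standing in for the study's recordings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVR

FS = 256  # assumed EEG sampling rate (Hz)

def alpha_power_percentage(o2_window):
    """Alpha (8-13 Hz) power as a fraction of total 1-30 Hz power."""
    f, pxx = welch(o2_window, fs=FS, nperseg=FS)
    total = pxx[(f >= 1) & (f <= 30)].sum()
    alpha = pxx[(f >= 8) & (f <= 13)].sum()
    return alpha / total if total > 0 else 0.0

# Synthetic stand-in: O2-channel EEG windows with video-derived ECD labels.
rng = np.random.default_rng(3)
windows = rng.normal(size=(200, FS * 2))        # 2-second windows (assumed)
ecd = rng.uniform(0.0, 1.0, size=200)           # eyelid closure degree in [0, 1]

X = np.array([[alpha_power_percentage(w)] for w in windows])
model = SVR(kernel="linear").fit(X, ecd)        # linear support vector regression
pred = model.predict(X)
print("MSE:", float(np.mean((pred - ecd) ** 2)))
```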
Open Access Article Assessment of Eye Fatigue Caused by 3D Displays Based on Multimodal Measurements
Sensors 2014, 14(9), 16467-16485; doi:10.3390/s140916467
Received: 2 August 2014 / Revised: 21 August 2014 / Accepted: 2 September 2014 / Published: 4 September 2014
Cited by 9 | PDF Full-text (2018 KB) | HTML Full-text | XML Full-text
Abstract
With the development of 3D displays, users’ eye fatigue has been an important issue when viewing these displays. There have been previous studies conducted on eye fatigue related to 3D display use; however, most of these have employed a limited number of modalities for measurements, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements. Compared to previous works, our research is novel in the following four ways: first, to enhance the accuracy of assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, in order to accurately measure BR in a manner that is convenient for the user, we implement a remote gaze-tracking system using a high speed (mega-pixel) camera that measures eye blinks of both eyes; third, changes in the FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlation between the EEG signal, eye BR, FT, and the SE score based on the T-test, correlation matrix, and effect size. Results show that the correlation of the SE with other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with other data are second, third, and fourth highest, respectively. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
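The evaluation above rests on paired t-tests, a correlation matrix, and effect sizes across the four modalities (EEG, BR, FT, SE) measured before and after viewing. The short sketch below runs those statistics on synthetic before/after data; the subject count and the values themselves are invented for illustration.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for per-subject measurements before/after 3D viewing.
rng = np.random.default_rng(5)
before = rng.normal(size=(20, 4))                        # columns: EEG, BR, FT, SE
after = before + rng.normal(loc=0.4, scale=1.0, size=(20, 4))

for i, name in enumerate(["EEG", "BR", "FT", "SE"]):
    t, p = stats.ttest_rel(before[:, i], after[:, i])    # paired t-test
    diff = after[:, i] - before[:, i]
    cohen_d = diff.mean() / diff.std(ddof=1)              # effect size (Cohen's d)
    print(f"{name}: t={t:.2f}, p={p:.3f}, d={cohen_d:.2f}")

# Correlation matrix of the before/after changes across modalities.
print(np.corrcoef((after - before).T))
```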
Open Access Article Robust Arm and Hand Tracking by Unsupervised Context Learning
Sensors 2014, 14(7), 12023-12058; doi:10.3390/s140712023
Received: 4 April 2014 / Revised: 29 June 2014 / Accepted: 1 July 2014 / Published: 7 July 2014
Cited by 4 | PDF Full-text (14175 KB) | HTML Full-text | XML Full-text
Abstract
Hand tracking in video is an increasingly popular research field due to the rise of novel human-computer interaction methods. However, robust and real-time hand tracking in unconstrained environments remains a challenging task due to the high number of degrees of freedom and the non-rigid character of the human hand. In this paper, we propose an unsupervised method to automatically learn the context in which a hand is embedded. This context includes the arm and any other object that coherently moves along with the hand. We introduce two novel methods to incorporate this context information into a probabilistic tracking framework, and introduce a simple yet effective solution to estimate the position of the arm. Finally, we show that our method greatly increases robustness against occlusion and cluttered background, without degrading tracking performance if no contextual information is available. The proposed real-time algorithm is shown to outperform the current state-of-the-art by evaluating it on three publicly available video datasets. Furthermore, a novel dataset is created and made publicly available for the research community. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

Review

Open Access Review Augmenting the Senses: A Review on Sensor-Based Learning Support
Sensors 2015, 15(2), 4097-4133; doi:10.3390/s150204097
Received: 24 November 2014 / Accepted: 29 January 2015 / Published: 11 February 2015
Cited by 18 | PDF Full-text (1147 KB) | HTML Full-text | XML Full-text
Abstract
In recent years sensor components have been extending classical computer-based support systems in a variety of applications domains (sports, health, etc.). In this article we review the use of sensors for the application domain of learning. For that we analyzed 82 sensor-based prototypes exploring their learning support. To study this learning support we classified the prototypes according to the Bloom’s taxonomy of learning domains and explored how they can be used to assist on the implementation of formative assessment, paying special attention to their use as feedback tools. The analysis leads to current research foci and gaps in the development of sensor-based learning support systems and concludes with a research agenda based on the findings. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
Open Access Review A Survey of Online Activity Recognition Using Mobile Phones
Sensors 2015, 15(1), 2059-2085; doi:10.3390/s150102059
Received: 4 November 2014 / Revised: 24 November 2014 / Accepted: 8 January 2015 / Published: 19 January 2015
Cited by 31 | PDF Full-text (328 KB) | HTML Full-text | XML Full-text
Abstract
Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

Journal Contact

MDPI AG
Sensors Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
sensors@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18