
HCI In Smart Environments

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (28 February 2015) | Viewed by 221940

Special Issue Editors


Dr. Gianluca Paravati
Guest Editor
Department of Control and Computer Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino, Italy
Interests: multimodal applications; human-machine interaction; image processing; target tracking; 3D rendering; virtual reality; augmented and mixed reality; remote visualization

Dr. Valentina Gatteschi
Guest Editor
Department of Control and Computer Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino, Italy
Interests: visual odometry; remote visualization; semantics; natural language processing

Special Issue Information

Dear Colleagues,

Sensors continue to evolve rapidly, becoming smaller, cheaper, more accurate, more reliable, more efficient, and more responsive, and increasingly include communication capabilities. These factors, together with the availability of new technologies, are driving the growth of the consumer-electronics sensor market and further reducing sensor costs. This scenario fosters the integration of sensors into the everyday objects of our lives, thus moving towards the creation of Smart Environments, which aim to make human interaction with systems a pleasant experience. In turn, it becomes possible to imagine applications never envisioned a few years ago in a wide variety of areas (e.g., Entertainment and Virtual Reality, Smart Home, Smart City, Medicine and Health Care, Indoor Navigation, Automotive, Automation and Maintenance).

The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors (such as touch, motion, wearable, image, proximity, and position sensors) in current and emerging applications for interacting with Smart Environments. Authors are encouraged to submit original research, reviews, and high-quality manuscripts concerning (but not limited to) the following topics:

 

  • Human-Computer Interaction
  • Multimodal Systems and Interfaces
  • Natural User Interfaces
  • Mobile and Wearable Computing
  • Virtual and Augmented Reality
  • Assistive Technologies
  • Sensor data fusion in multi-sensor systems
  • Semantic technologies for multimodal integration of sensor data
  • Sensor knowledge representation
  • Annotation of sensor data
  • Innovative sensing devices
  • Innovative uses of existing sensors
  • Pervasive/Ubiquitous Computing

Dr. Gianluca Paravati
Dr. Valentina Gatteschi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Keywords

  • smart environments
  • human-computer interaction
  • multimodal systems
  • innovative sensing devices
  • natural user interfaces
  • sensor data fusion
  • semantic technologies for multimodal integration of sensor data

Published Papers (23 papers)


Editorial


663 KiB  
Editorial
Human-Computer Interaction in Smart Environments
by Gianluca Paravati and Valentina Gatteschi
Sensors 2015, 15(8), 19487-19494; https://doi.org/10.3390/s150819487 - 07 Aug 2015
Cited by 17 | Viewed by 10520
Abstract
Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

Research


14170 KiB  
Article
Augmented Robotics Dialog System for Enhancing Human–Robot Interaction
by Fernando Alonso-Martín, Álvaro Castro-González, Francisco Javier Fernandez de Gorostiza Luengo and Miguel Ángel Salichs
Sensors 2015, 15(7), 15799-15829; https://doi.org/10.3390/s150715799 - 03 Jul 2015
Cited by 17 | Viewed by 12289
Abstract
Augmented reality, augmented television and second screen are cutting edge technologies that provide end users extra and enhanced information related to certain events in real time. This enriched information helps users better understand such events, at the same time providing a more satisfactory experience. In the present paper, we apply this main idea to human–robot interaction (HRI), to how users and robots interchange information. The ultimate goal of this paper is to improve the quality of HRI, developing a new dialog manager system that incorporates enriched information from the semantic web. This work presents the augmented robotic dialog system (ARDS), which uses natural language understanding mechanisms to provide two features: (i) a non-grammar multimodal input (verbal and/or written) text; and (ii) a contextualization of the information conveyed in the interaction. This contextualization is achieved by information enrichment techniques that link the extracted information from the dialog with extra information about the world available in semantic knowledge bases. This enriched or contextualized information (information enrichment, semantic enhancement or contextualized information are used interchangeably in the rest of this paper) offers many possibilities in terms of HRI. For instance, it can enhance the robot’s pro-activeness during a human–robot dialog (the enriched information can be used to propose new topics during the dialog, while ensuring a coherent interaction). Another possibility is to display additional multimedia content related to the enriched information on a visual device. This paper describes the ARDS and shows a proof of concept of its applications. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

4195 KiB  
Article
Gaze-Assisted User Intention Prediction for Initial Delay Reduction in Web Video Access
by Seungyup Lee, Juwan Yoo and Gunhee Han
Sensors 2015, 15(6), 14679-14700; https://doi.org/10.3390/s150614679 - 19 Jun 2015
Cited by 13 | Viewed by 6750
Abstract
Despite the remarkable improvement of hardware and network technology, the inevitable delay from a user’s command action to a system response is still one of the most crucial influence factors in user experiences (UXs). Especially for a web video service, an initial delay from click action to video start has significant influences on the quality of experience (QoE). The initial delay of a system can be minimized by preparing execution based on predicted user’s intention prior to actual command action. The introduction of the sequential and concurrent flow of resources in human cognition and behavior can significantly improve the accuracy and preparation time for intention prediction. This paper introduces a threaded interaction model and applies it to user intention prediction for initial delay reduction in web video access. The proposed technique consists of a candidate selection module, a decision module and a preparation module that prefetches and preloads the web video data before a user’s click action. The candidate selection module selects candidates in the web page using proximity calculation around a cursor. Meanwhile, the decision module computes the possibility of actual click action based on the cursor-gaze relationship. The preparation activates the prefetching for the selected candidates when the click possibility exceeds a certain limit in the decision module. Experimental results show a 92% hit-ratio, 0.5-s initial delay on average and 1.5-s worst initial delay, which is much less than a user’s tolerable limit in web video access, demonstrating significant improvement of accuracy and advance time in intention prediction by introducing the proposed threaded interaction model. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
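The prediction pipeline summarized above (candidate selection around the cursor, a click-probability decision, and prefetching above a threshold) can be illustrated with a minimal Python sketch. All names, weights, and the scoring formula below are hypothetical assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of the three-module pipeline (candidate selection,
# decision, preparation). The scoring formula and parameters are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class Candidate:
    url: str
    x: float  # element centre, pixels
    y: float

def select_candidates(candidates, cursor, radius=150.0):
    """Candidate selection: keep elements within a proximity radius of the cursor."""
    cx, cy = cursor
    return [c for c in candidates if math.hypot(c.x - cx, c.y - cy) <= radius]

def click_probability(candidate, cursor, gaze, sigma=80.0):
    """Decision: combine cursor proximity and cursor-gaze agreement into a score in [0, 1]."""
    d_cursor = math.hypot(candidate.x - cursor[0], candidate.y - cursor[1])
    d_gaze = math.hypot(candidate.x - gaze[0], candidate.y - gaze[1])
    # Gaussian falloff on both distances (hypothetical weighting).
    return math.exp(-(d_cursor**2 + d_gaze**2) / (2 * sigma**2))

def prepare(candidates, cursor, gaze, threshold=0.6):
    """Preparation: prefetch video data for candidates whose click probability exceeds a limit."""
    for c in select_candidates(candidates, cursor):
        if click_probability(c, cursor, gaze) >= threshold:
            print(f"prefetching {c.url}")  # stand-in for actual prefetch/preload logic

if __name__ == "__main__":
    links = [Candidate("video_a", 400, 300), Candidate("video_b", 900, 650)]
    prepare(links, cursor=(410, 310), gaze=(405, 295))
```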

19429 KiB  
Article
Assessing Visual Attention Using Eye Tracking Sensors in Intelligent Cognitive Therapies Based on Serious Games
by Maite Frutos-Pascual and Begonya Garcia-Zapirain
Sensors 2015, 15(5), 11092-11117; https://doi.org/10.3390/s150511092 - 12 May 2015
Cited by 58 | Viewed by 9246
Abstract
This study examines the use of eye tracking sensors as a means to identify children’s behavior in attention-enhancement therapies. For this purpose, a set of data collected from 32 children with different attention skills is analyzed during their interaction with a set of puzzle games. The authors of this study hypothesize that participants with better performance may have quantifiably different eye-movement patterns from users with poorer results. The use of eye trackers outside the research community may help to extend their potential with available intelligent therapies, bringing state-of-the-art technologies to users. The use of gaze data constitutes a new information source in intelligent therapies that may help to build new approaches that are fully-customized to final users’ needs. This may be achieved by implementing machine learning algorithms for classification. The initial study of the dataset has proven a 0.88 (±0.11) classification accuracy with a random forest classifier, using cross-validation and hierarchical tree-based feature selection. Further approaches need to be examined in order to establish more detailed attention behaviors and patterns among children with and without attention problems. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
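As a rough illustration of the classification step mentioned in the abstract (a random forest evaluated with cross-validation over gaze-derived features), the following sketch uses scikit-learn on synthetic data; the feature names and data are assumptions, and the authors' hierarchical tree-based feature selection is not reproduced.

```python
# Minimal sketch: random forest with cross-validation over gaze-derived features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Rows: participants/sessions; columns: hypothetical gaze features
# [fixation_count, mean_fixation_ms, saccade_amplitude, time_on_target_ratio]
X = rng.normal(size=(32, 4))
y = rng.integers(0, 2, size=32)  # 0 = lower puzzle performance, 1 = higher performance

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"accuracy: {scores.mean():.2f} (+/- {scores.std():.2f})")
```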

8723 KiB  
Article
An Informationally Structured Room for Robotic Assistance
by Tokuo Tsuji, Oscar Martinez Mozos, Hyunuk Chae, Yoonseok Pyo, Kazuya Kusaka, Tsutomu Hasegawa, Ken'ichi Morooka and Ryo Kurazume
Sensors 2015, 15(4), 9438-9465; https://doi.org/10.3390/s150409438 - 22 Apr 2015
Cited by 9 | Viewed by 7488
Abstract
The application of assistive technologies for elderly people is one of the most promising and interesting scenarios for intelligent technologies in the present and near future. Moreover, the improvement of the quality of life for the elderly is one of the first priorities in modern countries and societies. In this work, we present an informationally structured room that is aimed at supporting the daily life activities of elderly people. This room integrates different sensor modalities in a natural and non-invasive way inside the environment. The information gathered by the sensors is processed and sent to a centralized management system, which makes it available to a service robot assisting the people. One important restriction of our intelligent room is reducing as much as possible any interference with daily activities. Finally, this paper presents several experiments and situations using our intelligent environment in cooperation with our service robot. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

3534 KiB  
Article
Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller
by Vamsi Kiran Adhikarla, Jaka Sodnik, Peter Szolgay and Grega Jakus
Sensors 2015, 15(4), 8642-8663; https://doi.org/10.3390/s150408642 - 14 Apr 2015
Cited by 47 | Viewed by 10189
Abstract
This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization compared to other 3D display technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were evaluated in a user study with test subjects. The results of the study revealed a high user preference for freehand interaction with the light field display as well as the relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

2316 KiB  
Article
Brain Process for Perception of the “Out of the Body” Tactile Illusion for Virtual Object Interaction
by Hye Jin Lee, Jaedong Lee, Chi Jung Kim, Gerard J. Kim, Eun-Soo Kim and Mincheol Whang
Sensors 2015, 15(4), 7913-7932; https://doi.org/10.3390/s150407913 - 01 Apr 2015
Cited by 10 | Viewed by 10635
Abstract
“Out of the body” tactile illusion refers to the phenomenon in which one can perceive tactility as if emanating from a location external to the body without any stimulator present there. Taking advantage of such a tactile illusion is one way to provide and realize richer interaction feedback without employing and placing actuators directly at all stimulation target points. However, to further explore its potential, it is important to better understand the underlying physiological and neural mechanism. As such, we measured the brain wave patterns during such tactile illusions and mapped out the corresponding brain activation areas. Participants were given stimulations at different levels with the intention of creating veridical (i.e., non-illusory) and phantom sensations at different locations along an external, hand-held virtual ruler. The experimental data and analysis indicate that both veridical and illusory sensations involve, among others, the parietal lobe, one of the most important components in the tactile information pathway. In addition, we found that the illusory sensation involves additional processing, resulting in a delayed ERP (event-related potential) and involvement of the limbic lobe. These findings point to treating the illusion as a memory and recognition task as a possible explanation. The present study provides a basic understanding of how humans process “virtual” objects and of how the associated tactile illusion is generated, which will be valuable for HCI (Human-Computer Interaction). Full article
(This article belongs to the Special Issue HCI In Smart Environments)

3274 KiB  
Article
Adaptive Software Architecture Based on Confident HCI for the Deployment of Sensitive Services in Smart Homes
by Mario Vega-Barbas, Iván Pau, María Luisa Martín-Ruiz and Fernando Seoane
Sensors 2015, 15(4), 7294-7322; https://doi.org/10.3390/s150407294 - 25 Mar 2015
Cited by 17 | Viewed by 10089
Abstract
Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

2148 KiB  
Article
Design of a Mobile Brain Computer Interface-Based Smart Multimedia Controller
by Kevin C. Tseng, Bor-Shing Lin, Alice May-Kuen Wong and Bor-Shyh Lin
Sensors 2015, 15(3), 5518-5530; https://doi.org/10.3390/s150305518 - 06 Mar 2015
Cited by 13 | Viewed by 8321
Abstract
Music is a way of expressing our feelings and emotions. Suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user’s physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user’s physiological state. Here, a commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built into the multimedia platform was developed to analyze the user’s EEG features and select music according to his/her state. The relationship between the user’s state and music sorted by listener preference was also examined in this study. The experimental results show that real-time music biofeedback according to a user’s EEG features may positively improve the user’s attention state. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

2544 KiB  
Article
Human Computer Interactions in Next-Generation of Aircraft Smart Navigation Management Systems: Task Analysis and Architecture under an Agent-Oriented Methodological Approach
by José M. Canino-Rodríguez, Jesús García-Herrero, Juan Besada-Portas, Antonio G. Ravelo-García, Carlos Travieso-González and Jesús B. Alonso-Hernández
Sensors 2015, 15(3), 5228-5250; https://doi.org/10.3390/s150305228 - 04 Mar 2015
Cited by 9 | Viewed by 9499
Abstract
The limited efficiency of current air traffic systems will require a next-generation Smart Air Traffic System (SATS) that relies on current technological advances. This challenge means a transition toward a new navigation and air-traffic procedures paradigm, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However, efforts to develop such tools need to be informed by a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper focuses on airborne HCI within SATS, where cockpit inputs come from aircraft navigation systems, the surrounding traffic situation, controllers’ indications, etc. The HCI is thus intended to enhance situation awareness and decision-making in the cockpit. Our approach considers SATS as a large-scale distributed system with uncertainty in a dynamic environment. Therefore, a multi-agent-systems-based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

1271 KiB  
Article
Biosignal Analysis to Assess Mental Stress in Automatic Driving of Trucks: Palmar Perspiration and Masseter Electromyography
by Rencheng Zheng, Shigeyuki Yamabe, Kimihiko Nakano and Yoshihiro Suda
Sensors 2015, 15(3), 5136-5150; https://doi.org/10.3390/s150305136 - 02 Mar 2015
Cited by 52 | Viewed by 7522
Abstract
Nowadays, insight into human-machine interaction is a critical topic with the large-scale development of intelligent vehicles. Biosignal analysis can provide a deeper understanding of driver behaviors that may indicate rationally practical use of the automatic technology. Therefore, this study concentrates on biosignal analysis to quantitatively evaluate the mental stress of drivers during automatic driving of trucks, with vehicles set a close gap distance apart to reduce air resistance and save energy. By applying two wearable sensor systems, continuous measurement was realized for palmar perspiration and masseter electromyography, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about a 25 m gap distance as a reference. It was found that mental stress significantly increased when the gap distance decreased, and an abrupt increase in the mental stress of drivers was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

2871 KiB  
Article
Adding Pluggable and Personalized Natural Control Capabilities to Existing Applications
by Fabrizio Lamberti, Andrea Sanna, Gilles Carlevaris and Claudio Demartini
Sensors 2015, 15(2), 2832-2859; https://doi.org/10.3390/s150202832 - 28 Jan 2015
Cited by 5 | Viewed by 5691
Abstract
Advancements in input device and sensor technologies led to the evolution of the traditional human-machine interaction paradigm based on the mouse and keyboard. Touch-, gesture- and voice-based interfaces are integrated today in a variety of applications running on consumer devices (e.g., gaming consoles and smartphones). However, to allow existing applications running on desktop computers to utilize natural interaction, significant re-design and re-coding efforts may be required. In this paper, a framework designed to transparently add multi-modal interaction capabilities to applications to which users are accustomed is presented. Experimental observations confirmed the effectiveness of the proposed framework and led to a classification of those applications that could benefit more from the availability of natural interaction modalities. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

826 KiB  
Article
Eye/Head Tracking Technology to Improve HCI with iPad Applications
by Asier Lopez-Basterretxea, Amaia Mendez-Zorrilla and Begoña Garcia-Zapirain
Sensors 2015, 15(2), 2244-2264; https://doi.org/10.3390/s150202244 - 22 Jan 2015
Cited by 29 | Viewed by 11420
Abstract
In order to improve human computer interaction (HCI) for people with special needs, this paper presents an alternative form of interaction, which uses the iPad’s front camera and eye/head tracking technology. With this capability operating in the background, the user can control already developed or new applications for the iPad by moving their eyes and/or head. There are many techniques currently used to detect facial features, such as the eyes or even the face itself. Open-source libraries exist for this purpose, such as OpenCV, which enable very reliable and accurate detection algorithms, such as Haar cascades, to be applied using very high-level programming. All processing is undertaken in real time, and it is therefore important to pay close attention to the use of the limited resources (processing capacity) of devices such as the iPad. The system was validated in tests involving 22 users of different ages and characteristics (people with dark and light-colored eyes and with/without glasses). These tests were performed to assess user/device interaction and to ascertain whether it works properly. The system obtained an accuracy of between 60% and 100% in the three test exercises taken into consideration. The results showed that the Haar cascade had a significant effect by detecting faces in 100% of cases, unlike the eyes and the pupil, where interference (light and shade) evidenced less effectiveness. In addition to ascertaining the effectiveness of the system via these exercises, the demo application has also helped to show that user constraints need not affect the enjoyment and use of a particular type of technology. In short, the results obtained are encouraging, and these systems may continue to be developed if extended and updated in the future. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
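A minimal sketch of the kind of Haar-cascade face/eye detection the abstract refers to, using OpenCV; the cascade files shipped with opencv-python and the webcam index are assumptions, and the paper's iPad-specific optimizations are not reproduced.

```python
# Haar-cascade face and eye detection with OpenCV (generic sketch, not the paper's pipeline).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # front camera / webcam (index 0 assumed)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]  # search for eyes only inside the face region
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("eyes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```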

1701 KiB  
Article
Single-Sample Face Recognition Based on Intra-Class Differences in a Variation Model
by Jun Cai, Jing Chen and Xing Liang
Sensors 2015, 15(1), 1071-1087; https://doi.org/10.3390/s150101071 - 08 Jan 2015
Cited by 24 | Viewed by 7430
Abstract
In this paper, a novel random facial variation modeling system for sparse representation face recognition is presented. Although Sparse Representation-Based Classification (SRC) has recently represented a breakthrough in the field of face recognition due to its good performance and robustness, there is the critical problem that SRC needs sufficiently large training samples to achieve good performance. To address this issue, we tackle the single-sample face recognition problem with intra-class differences of variation in a facial image model based on random projection and sparse representation. In this paper, we present a facial variation modeling system composed only of various facial variations. We further propose a novel facial random noise dictionary learning method that is invariant to different faces. The experimental results on the AR, Yale B, Extended Yale B, MIT and FEI databases validate that our method leads to substantial improvements, particularly in single-sample face recognition problems. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
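For readers unfamiliar with sparse-representation-based classification, the following toy sketch shows the general idea of coding a randomly projected probe over a gallery-plus-variation dictionary and classifying by class residual. The dictionary, dimensions, and data are synthetic assumptions; the paper's learned intra-class variation dictionary is not reproduced.

```python
# Toy SRC-style classification with a random projection and a shared variation dictionary.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
d, n_classes = 1024, 5                     # raw pixel count, number of identities
k = 32                                     # projected dimension
R = rng.normal(size=(k, d)) / np.sqrt(k)   # random projection matrix

gallery = rng.normal(size=(d, n_classes))  # one (single-sample) image per identity
variation = rng.normal(size=(d, 10))       # shared variation atoms (illumination, occlusion, ...)
D = R @ np.hstack([gallery, variation])    # projected dictionary
D /= np.linalg.norm(D, axis=0)

probe = gallery[:, 2] + 0.1 * rng.normal(size=d)   # noisy probe of identity 2
y = R @ probe

coef = orthogonal_mp(D, y, n_nonzero_coefs=8)      # sparse code over the dictionary
residuals = []
for c in range(n_classes):
    code_c = np.zeros_like(coef)
    code_c[c] = coef[c]                    # keep the atom of identity c ...
    code_c[n_classes:] = coef[n_classes:]  # ... plus the shared variation part
    residuals.append(np.linalg.norm(y - D @ code_c))
print("predicted identity:", int(np.argmin(residuals)))
```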

2488 KiB  
Article
A Real-Time Pinch-to-Zoom Motion Detection by Means of a Surface EMG-Based Human-Computer Interface
by Jongin Kim, Dongrae Cho, Kwang Jin Lee and Boreom Lee
Sensors 2015, 15(1), 394-407; https://doi.org/10.3390/s150100394 - 29 Dec 2014
Cited by 27 | Viewed by 7933
Abstract
In this paper, we propose a system for inferring the pinch-to-zoom gesture using surface EMG (Electromyography) signals in real time. Pinch-to-zoom, which is a common gesture in smart devices such as an iPhone or an Android phone, is used to control the size of images or web pages according to the distance between the thumb and index finger. To infer the finger motion, we recorded EMG signals obtained from the first dorsal interosseous muscle, which is highly related to the pinch-to-zoom gesture, and used a support vector machine for classification between four finger motion distances. The powers which are estimated by Welch’s method were used as feature vectors. In order to solve the multiclass classification problem, we applied a one-versus-one strategy, since a support vector machine is basically a binary classifier. As a result, our system yields 93.38% classification accuracy averaged over six subjects. The classification accuracy was estimated using 10-fold cross validation. Through our system, we expect to not only develop practical prosthetic devices but to also construct a novel user experience (UX) for smart devices. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
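The feature/classifier chain named in the abstract (Welch power features, a one-versus-one SVM, 10-fold cross-validation) can be sketched as follows with SciPy and scikit-learn; the synthetic EMG windows, band edges, and sampling rate are assumptions.

```python
# Welch power features from EMG windows, classified with a one-vs-one SVM.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 1000  # Hz, assumed EMG sampling rate
rng = np.random.default_rng(0)

def emg_features(window):
    """Welch power spectral density, summed into a few coarse bands (hypothetical bands)."""
    f, pxx = welch(window, fs=fs, nperseg=256)
    bands = [(20, 60), (60, 150), (150, 300), (300, 450)]
    return [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in bands]

# Fake dataset: 4 "pinch distance" classes, 40 one-second windows each.
X, y = [], []
for label in range(4):
    for _ in range(40):
        window = rng.normal(scale=1.0 + 0.3 * label, size=fs)
        X.append(emg_features(window))
        y.append(label)

clf = SVC(kernel="rbf", decision_function_shape="ovo")  # multi-class SVC is one-vs-one internally
scores = cross_val_score(clf, np.array(X), np.array(y), cv=10)
print(f"10-fold accuracy: {scores.mean():.2f}")
```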

4695 KiB  
Article
Face Recognition System for Set-Top Box-Based Intelligent TV
by Won Oh Lee, Yeong Gon Kim, Hyung Gil Hong and Kang Ryoung Park
Sensors 2014, 14(11), 21726-21749; https://doi.org/10.3390/s141121726 - 18 Nov 2014
Cited by 21 | Viewed by 8352
Abstract
Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of an STB is quite low, the smart TV functionalities that can be implemented in an STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high-magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low-cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality levels of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with low-resource set-top boxes and low-cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras. Our research has the following four novelties: first, the candidate regions of a viewer’s face are detected in an image captured by a camera connected to the STB via low-processing background subtraction and face color filtering; second, the detected candidate face regions are transmitted to a server that has high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer’s face region are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall, precision, and genuine acceptance rate were about 95.7%, 96.2%, and 90.2%, respectively. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
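A rough sketch of the first, low-cost stage described in the abstract (background subtraction plus face-color filtering on the STB side) is given below with OpenCV; the thresholds and YCrCb skin range are common heuristics, not the paper's calibrated values.

```python
# Candidate face-region detection via background subtraction and skin-colour filtering.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
cap = cv2.VideoCapture(0)  # low-cost web camera attached to the STB (index assumed)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    motion = subtractor.apply(frame)                          # foreground (moving viewer)
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))  # heuristic skin-colour range
    mask = cv2.bitwise_and(motion, skin)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        if cv2.contourArea(cnt) > 1500:                       # candidate face region
            x, y, w, h = cv2.boundingRect(cnt)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            # In the described system this crop would be sent to a server for
            # accurate face detection and multi-level LBP matching.
    cv2.imshow("candidates", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```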

767 KiB  
Article
Adaptive Activity and Environment Recognition for Mobile Phones
by Jussi Parviainen, Jayaprasad Bojja, Jussi Collin, Jussi Leppänen and Antti Eronen
Sensors 2014, 14(11), 20753-20778; https://doi.org/10.3390/s141120753 - 03 Nov 2014
Cited by 22 | Viewed by 5669
Abstract
In this paper, an adaptive activity and environment recognition algorithm running on a mobile phone is presented. The algorithm makes inferences based on sensor and radio receiver data provided by the phone. A wide set of features that can be extracted from these data sources were investigated, and a Bayesian maximum a posteriori classifier was used for classifying between several user activities and environments. The accuracy of the method was evaluated on a dataset collected in a real-life trial. In addition, comparison to other state-of-the-art classifiers, namely support vector machines and decision trees, was performed. To make the system adaptive for individual user characteristics, an adaptation algorithm for context model parameters was designed. Moreover, a confidence measure for the classification correctness was designed. The proposed adaptation algorithm and confidence measure were evaluated on a second dataset obtained from another real-life trial, where the users were requested to provide binary feedback on the classification correctness. The results show that the proposed adaptation algorithm is effective at improving the classification accuracy. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
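As an illustration of a Bayesian maximum a posteriori classifier over phone-sensor features, the sketch below uses Gaussian naive Bayes with class priors as a stand-in; the features, priors, and data are assumptions, and the paper's adaptation algorithm and confidence measure are not reproduced.

```python
# MAP classification of phone-sensor features: argmax over prior * likelihood.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
activities = ["still", "walking", "vehicle"]

# Hypothetical features: [accel_variance, accel_mean_magnitude, wifi_ap_count]
X_train = np.vstack([
    rng.normal([0.1, 9.8, 12], 0.3, size=(50, 3)),   # still
    rng.normal([2.5, 10.2, 8], 0.5, size=(50, 3)),   # walking
    rng.normal([1.0, 9.9, 2], 0.4, size=(50, 3)),    # vehicle
])
y_train = np.repeat(np.arange(3), 50)

# Class priors encode how often each context occurs for this user.
clf = GaussianNB(priors=[0.5, 0.3, 0.2]).fit(X_train, y_train)

x_new = np.array([[2.3, 10.1, 9]])
posterior = clf.predict_proba(x_new)[0]
print(activities[int(np.argmax(posterior))], posterior.round(3))
```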

5092 KiB  
Article
Laser Spot Tracking Based on Modified Circular Hough Transform and Motion Pattern Analysis
by Damir Krstinić, Ana Kuzmanić Skelin and Ivan Milatić
Sensors 2014, 14(11), 20112-20133; https://doi.org/10.3390/s141120112 - 27 Oct 2014
Cited by 14 | Viewed by 8038
Abstract
Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas–Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
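The two ingredients named in the abstract, circle detection via the Hough transform and Lucas-Kanade motion analysis, can be combined in a simple OpenCV sketch like the one below; the parameters are generic defaults and the input file is hypothetical, so this is not the paper's modified transform.

```python
# Circle detection (Hough) plus Lucas-Kanade flow as a motion-consistency check.
import cv2
import numpy as np

cap = cv2.VideoCapture("laser_sequence.mp4")  # hypothetical input video
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not open video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
prev_pt = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=120, param2=18, minRadius=2, maxRadius=12)
    if circles is not None:
        x, y, r = circles[0][0]                      # strongest small circular blob
        candidate = np.array([[[x, y]]], dtype=np.float32)
        if prev_pt is not None:
            # Lucas-Kanade: where did the previously accepted spot move to?
            tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pt, None)
            if status[0][0] == 1:
                # Accept the detection only if it agrees with the predicted motion.
                if np.linalg.norm(tracked[0][0] - candidate[0][0]) < 15:
                    cv2.circle(frame, (int(x), int(y)), int(r), (0, 0, 255), 2)
        prev_pt = candidate
    prev_gray = gray
    cv2.imshow("laser spot", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```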

3367 KiB  
Article
Estimation of Eye Closure Degree Using EEG Sensors and Its Application in Driver Drowsiness Detection
by Gang Li and Wan-Young Chung
Sensors 2014, 14(9), 17491-17515; https://doi.org/10.3390/s140917491 - 18 Sep 2014
Cited by 38 | Viewed by 10740
Abstract
Currently, driver drowsiness detectors using video-based technology are being widely studied. Eyelid closure degree (ECD) is the main measure of the video-based methods; however, drawbacks such as brightness limitations and practical hurdles such as driver distraction limit their success. This study presents a way to compute the ECD using EEG sensors instead of video-based methods. The premise is that the ECD exhibits a linear relationship with changes in the occipital EEG. A total of 30 subjects are included in this study: ten of them participated in a simple proof-of-concept experiment to verify the linear relationship between ECD and EEG, and twenty then participated in a monotonous highway driving experiment in a driving simulator environment to test the robustness of the linear relationship in real-life applications. Taking the video-based method as a reference, the alpha power percentage from the O2 channel is found to be the best input feature for linear regression estimation of the ECD. The best overall squared correlation coefficient (SCC, denoted by r2) and mean squared error (MSE), validated by a linear support vector regression model and the leave-one-subject-out method, are r2 = 0.930 and MSE = 0.013. The proposed linear EEG-ECD model can achieve 87.5% and 70.0% accuracy for male and female subjects, respectively, for a driver drowsiness application, percentage eyelid closure over the pupil over time (PERCLOS). This new ECD estimation method not only addresses the drawbacks of the video-based method, but also makes ECD estimation more computationally efficient and easier to implement in EEG sensors in real time. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
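A schematic version of the regression step described above (alpha-band power percentage from an occipital channel regressed onto eye closure degree with a linear SVR) is sketched below on synthetic data; the sampling rate, band limits, and labels are assumptions, and leave-one-subject-out evaluation is only indicated, not implemented.

```python
# Alpha power percentage of a simulated O2 channel, regressed onto ECD with a linear SVR.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVR

fs = 256  # Hz, assumed EEG sampling rate

def alpha_power_percentage(o2_window):
    """Alpha (8-13 Hz) power as a fraction of total 1-30 Hz power."""
    f, pxx = welch(o2_window, fs=fs, nperseg=fs * 2)
    total = pxx[(f >= 1) & (f <= 30)].sum()
    alpha = pxx[(f >= 8) & (f <= 13)].sum()
    return alpha / total

rng = np.random.default_rng(0)
# Toy data: higher simulated alpha content corresponds to a larger eye closure degree.
ecd_labels = rng.uniform(0, 1, size=60)
features = []
for ecd in ecd_labels:
    t = np.arange(0, 4, 1 / fs)
    alpha_wave = ecd * np.sin(2 * np.pi * 10 * t)            # 10 Hz component scales with ECD
    window = alpha_wave + 0.5 * rng.normal(size=t.size)      # plus broadband noise
    features.append([alpha_power_percentage(window)])

model = SVR(kernel="linear").fit(features, ecd_labels)
print("predicted ECD for first window:", round(float(model.predict([features[0]])[0]), 3))
```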

2018 KiB  
Article
Assessment of Eye Fatigue Caused by 3D Displays Based on Multimodal Measurements
by Jae Won Bang, Hwan Heo, Jong-Suk Choi and Kang Ryoung Park
Sensors 2014, 14(9), 16467-16485; https://doi.org/10.3390/s140916467 - 04 Sep 2014
Cited by 49 | Viewed by 8257
Abstract
With the development of 3D displays, users’ eye fatigue has become an important issue when viewing these displays. There have been previous studies conducted on eye fatigue related to 3D display use; however, most of these have employed a limited number of modalities for measurements, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements. Compared to previous works, our research is novel in the following four ways: first, to enhance the accuracy of the assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, in order to accurately measure BR in a manner that is convenient for the user, we implement a remote gaze-tracking system using a high-speed (mega-pixel) camera that measures the eye blinks of both eyes; third, changes in the FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlation between the EEG signal, eye BR, FT, and the SE score based on the t-test, correlation matrix, and effect size. Results show that the correlation of the SE with the other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with the other data are the second, third, and fourth highest, respectively. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
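The statistical analyses listed in the abstract (paired t-test, correlation matrix, effect size) can be illustrated with a short SciPy sketch; the arrays below are synthetic stand-ins for the measured modalities, not the authors' data.

```python
# Paired t-test, Cohen's d, and a correlation matrix over before/after measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20  # hypothetical number of participants

before = rng.normal(0.0, 1.0, size=n)           # e.g., blink rate before viewing
after = before + rng.normal(0.4, 0.5, size=n)   # slightly elevated after viewing

t, p = stats.ttest_rel(after, before)           # paired t-test
d = (after - before).mean() / (after - before).std(ddof=1)  # Cohen's d for paired samples

modalities = np.column_stack([after - before,
                              rng.normal(size=n),   # stand-in for EEG change
                              rng.normal(size=n)])  # stand-in for facial-temperature change
corr = np.corrcoef(modalities, rowvar=False)

print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
print("correlation matrix:\n", corr.round(2))
```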

14175 KiB  
Article
Robust Arm and Hand Tracking by Unsupervised Context Learning
by Vincent Spruyt, Alessandro Ledda and Wilfried Philips
Sensors 2014, 14(7), 12023-12058; https://doi.org/10.3390/s140712023 - 07 Jul 2014
Cited by 8 | Viewed by 6641
Abstract
Hand tracking in video is an increasingly popular research field due to the rise of novel human-computer interaction methods. However, robust and real-time hand tracking in unconstrained environments remains a challenging task due to the high number of degrees of freedom and the non-rigid character of the human hand. In this paper, we propose an unsupervised method to automatically learn the context in which a hand is embedded. This context includes the arm and any other object that coherently moves along with the hand. We introduce two novel methods to incorporate this context information into a probabilistic tracking framework, and introduce a simple yet effective solution to estimate the position of the arm. Finally, we show that our method greatly increases robustness against occlusion and cluttered background, without degrading tracking performance if no contextual information is available. The proposed real-time algorithm is shown to outperform the current state-of-the-art by evaluating it on three publicly available video datasets. Furthermore, a novel dataset is created and made publicly available for the research community. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

Review


1147 KiB  
Review
Augmenting the Senses: A Review on Sensor-Based Learning Support
by Jan Schneider, Dirk Börner, Peter Van Rosmalen and Marcus Specht
Sensors 2015, 15(2), 4097-4133; https://doi.org/10.3390/s150204097 - 11 Feb 2015
Cited by 77 | Viewed by 12733
Abstract
In recent years, sensor components have been extending classical computer-based support systems in a variety of application domains (sports, health, etc.). In this article we review the use of sensors for the application domain of learning. To that end, we analyzed 82 sensor-based prototypes, exploring their learning support. To study this learning support, we classified the prototypes according to Bloom’s taxonomy of learning domains and explored how they can be used to assist in the implementation of formative assessment, paying special attention to their use as feedback tools. The analysis leads to current research foci and gaps in the development of sensor-based learning support systems and concludes with a research agenda based on the findings. Full article
(This article belongs to the Special Issue HCI In Smart Environments)

328 KiB  
Review
A Survey of Online Activity Recognition Using Mobile Phones
by Muhammad Shoaib, Stephan Bosch, Ozlem Durmaz Incel, Hans Scholten and Paul J.M. Havinga
Sensors 2015, 15(1), 2059-2085; https://doi.org/10.3390/s150102059 - 19 Jan 2015
Cited by 396 | Viewed by 24328
Abstract
Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research. Full article
(This article belongs to the Special Issue HCI In Smart Environments)
