Open Access Article
J. Sens. Actuator Netw. 2018, 7(3), 31; https://doi.org/10.3390/jsan7030031

Activity Recognition Using Gazed Text and Viewpoint Information for User Support Systems

Graduate School of Engineering, Tohoku University, Aoba 6-6-05, Aramaki, Aoba-ku, Sendai 980-8579, Japan
Current address: Future Architect, Inc., 1-2-2 Osaki, Shinagawa-ku, Tokyo 141-0032, Japan.
* Author to whom correspondence should be addressed.
Received: 30 June 2018 / Revised: 29 July 2018 / Accepted: 31 July 2018 / Published: 2 August 2018
(This article belongs to the Special Issue Wireless Sensor and Actuator Networks for Smart Cities)

Abstract

The development of information technology has added many conveniences to our lives. On the other hand, we now have to deal with many kinds of information, which can be a difficult task for elderly people or those who are unfamiliar with information devices. A technology that recognizes each person’s activity and provides appropriate support based on that activity could be useful for such people. In this paper, we propose a novel fine-grained activity recognition method for user support systems that focuses on identifying the text at which a user is gazing, based on the idea that the content of this text is related to the user’s activity. It is necessary to keep in mind that the meaning of a text depends on its location. To tackle this problem, we propose the simultaneous use of a wearable device and a fixed camera. To obtain the global location of the gazed text, we perform image matching using the local features of the images obtained by these two devices. We then generate a feature vector based on this location information and the content of the text. To show the effectiveness of the proposed approach, we performed activity recognition experiments with six subjects in a laboratory environment.
Keywords: activity recognition; eye tracker; fisheye camera; viewpoint information
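
The abstract sketches a two-step pipeline: local-feature image matching between the wearable camera and the fixed camera to obtain the global location of the gazed text, followed by feature-vector construction from that location and the text content. The snippet below is a minimal illustrative sketch of the geometric step only, written in Python with OpenCV. The abstract does not specify which local features or matching strategy the authors used, so the choice of ORB features, the RANSAC-estimated homography, and the function name gaze_to_global are assumptions for illustration, not the paper's actual implementation.

    # Hypothetical sketch: map a gaze point from the wearable camera image
    # into the fixed camera image via local-feature matching. ORB + RANSAC
    # homography is an assumption; the paper only says "image matching
    # using local features".
    import cv2
    import numpy as np

    def gaze_to_global(wearable_img, fixed_img, gaze_xy):
        """Project a gaze point (x, y) in the wearable camera image into
        the fixed camera image by estimating a homography from ORB matches."""
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(wearable_img, None)
        kp2, des2 = orb.detectAndCompute(fixed_img, None)

        # Brute-force Hamming matcher with cross-check to reduce false matches
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # Robust homography; RANSAC rejects outlier correspondences
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        pt = np.float32([[gaze_xy]])                  # shape (1, 1, 2)
        return cv2.perspectiveTransform(pt, H)[0, 0]  # (x, y) in fixed image

In the full method, the projected location would then be combined with the recognized text content into a feature vector for activity classification; the sketch above covers only the mapping of the viewpoint between the two cameras.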

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Chiba, S.; Miyazaki, T.; Sugaya, Y.; Omachi, S. Activity Recognition Using Gazed Text and Viewpoint Information for User Support Systems. J. Sens. Actuator Netw. 2018, 7, 31.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
