Article

A Human–Machine Interface Based on Eye Tracking for Controlling and Monitoring a Smart Home Using the Internet of Things

by Alexandre Bissoli 1,*, Daniel Lavino-Junior 2, Mariana Sime 3, Lucas Encarnação 1,2 and Teodiano Bastos-Filho 1,2

1 Postgraduate Program in Electrical Engineering, Federal University of Espirito Santo (UFES), Vitoria 29075-910, Brazil
2 Electrical Engineering Department, Federal University of Espirito Santo (UFES), Vitoria 29075-910, Brazil
3 Postgraduate Program in Biotechnology, Federal University of Espirito Santo (UFES), Vitoria 29047-105, Brazil
* Author to whom correspondence should be addressed.
Sensors 2019, 19(4), 859; https://doi.org/10.3390/s19040859
Submission received: 2 January 2019 / Revised: 11 February 2019 / Accepted: 12 February 2019 / Published: 19 February 2019
(This article belongs to the Special Issue Sensor Technology for Smart Homes)

Abstract

People with severe disabilities may have difficulty interacting with their home devices due to the limitations inherent to their disability. Simple home activities may even be impossible for this group of people. Although much work has been devoted to proposing new assistive technologies to improve the lives of people with disabilities, some studies have found that the abandonment of such technologies is quite high. This work presents a new assistive system, based on eye tracking for controlling and monitoring a smart home through the Internet of Things, which was developed following concepts of user-centered design and usability. With this system, a person with severe disabilities was able to control everyday equipment in her residence, such as lamps, television, fan, and radio. In addition, her caregiver was able to remotely monitor, over the Internet, her use of the system in real time. The user interface developed here also includes functionalities that improved the usability of the system as a whole. The experiments were divided into two steps. In the first step, the assistive system was assembled in an actual home, where tests were conducted with 29 participants without disabilities. In the second step, the system was tested with online monitoring for seven days by a person with severe disability (end-user) in her own home, not only to increase convenience and comfort, but also so that the system could be tested where it would in fact be used. At the end of both steps, all the participants answered the System Usability Scale (SUS) questionnaire; the group of participants without disabilities and the person with severe disabilities evaluated the assistive system with mean scores of 89.9 and 92.5, respectively.

1. Introduction

People with severe disabilities may have difficulties interacting with their home devices due to the limitations inherent to their disability. Simple activities such as turning on or off a lamp, fan, television, or any other equipment independently may even be impossible for this group of people. With technological advances in the field of sensors and actuators, in recent years some researchers have begun to transfer these technologies to improve the quality of life of people with disabilities, increasing their autonomy regarding the control of existing equipment in the environment [1,2,3,4].
Technologies dedicated to improving the lives of people with disabilities are known as assistive technologies. Assistive technology is an interdisciplinary area of knowledge that encompasses products, resources, methodologies, strategies, practices, and services aimed at promoting functionality, related to activity and participation, for people with disabilities, incapacities, or reduced mobility, targeting their autonomy, independence, quality of life, and social inclusion [5].
Although many works have been devoted to proposing new assistive technologies to improve the lives of people with disabilities [6,7,8,9,10,11,12,13,14,15], some studies have found that the abandonment of such technologies is quite high, reaching a rate of up to 30% [16,17,18,19]. The reasons for abandoning assistive technology are diverse, the most recurrent being that the user does not like the technology; the user is afraid to use the equipment; the user does not believe in the benefit of the device; the technology does not physically fit the user; the technology is too expensive; the user does not know how to use the equipment correctly; or the user disapproves of the equipment's aesthetics [16,17,18,19].
Based on these facts, to avoid or at least reduce the abandonment of new technologies, in developing a new system, engineers should be concerned with developing a system that is useful to the user, i.e., that brings benefits; developing a system to suit the needs of the user; designing tests to validate the technology; evaluating the usability of the system; performing end-user testing; and testing the system outside the laboratory, i.e., testing the system where it will be actually used.
In order to increase the usability of an assistive system, it is also critical to consider the role of human–computer interaction (HCI). The concept of HCI refers to a discipline which studies information exchange between people and computers by using software. HCI mainly focuses on designing, assessing, and implementing interactive technological devices that cover the largest possible number of uses [20].
The ultimate goal of HCI is to make this interaction as efficient as possible, looking to minimize errors, increase satisfaction, lessen frustration, include users in development processes, work in multidisciplinary teams, and perform usability tests. In short, the goal is to make interaction between people and computers more productive [21].
New technologies have arisen with health-related developments which, by using HCI, meet the needs of different groups such as people with disabilities, the elderly, etc. [22,23]. Although these advances were unthinkable just a few years ago, they are gradually becoming a part of people’s daily lives [24,25].
Human–computer interaction and the need for suitable user interfaces have become important issues in modern life. The products and technologies used by society today have raised concerns about how people interact with computer technology. For this reason, researchers and designers are interested in assessing human and machine behavior, where the machines vary according to the system functionality and the system or product requirements [26].
This work aims to assist people with physical disability to pursue daily living autonomously, taking into account concepts of user-centered design and usability, in order to avoid the abandonment of the proposed system. To this end, we present a new useful assistive system based on eye tracking for controlling and monitoring a smart home, based on the Internet of Things. With this assistive system, a person with severe disabilities was able to control everyday equipment in her residence, such as lamps, television, fan, and radio, and the caregiver was able to remotely monitor the use of the system by the user in real time. In addition, the user interface developed here has some functionalities that allowed improving the usability of the system as a whole.
The subsequent sections of this work are organized as follows. We firstly review the related work and cover some smart homes from around the world in Section 2. In Section 3, we introduce our assistive system and its detailed design. Tests protocols, experimental results, and evaluations are reported in Section 4. Finally, we draw conclusions from our work in Section 5.

2. Related Work

In this section, we introduce the state-of-the-art related works by dividing the literature into three parts: (i) user-centered design and usability; (ii) eye tracking; and (iii) smart homes.

2.1. User-Centered Design (UCD)

A User-Centered Design (UCD) approach can be used for any type of product from the perspective of HCI design. UCD, also called Human-Centered Design (HCD), is a method that addresses the needs, desires, and limitations of the end-user of a product, service, or process at all stages of a project. In other words, UCD is a multistage problem-solving process that accompanies all product development requirements. UCD tries to optimize the product/system around how the user can, wants, or needs to use it, instead of forcing the user to change their behavior to fit the product/system [27].
The UCD approach puts human needs, capabilities, and behavior first, and then designs technology to accommodate those needs, capabilities, and behaviors. Starting the design requires an understanding of both psychology and technology, as well as good communication, mainly between human and machine, indicating the available options, the current status, and the next step [28].
The term “interaction” from human–computer interaction (HCI) is a basis for designing or developing a user interface and an interaction between humans and machines. Preece et al. [29] define four basic activities of an interaction design: (i) identify needs and establish requirements; (ii) develop alternative projects; (iii) construct interactive versions of projects; and (iv) evaluate projects. They also describe three characteristics for interaction design: (i) focus on users; (ii) specific usability criteria; and (iii) iteration.
Regarding user experience, Goodman et al. [30] argue that the process is not only about learning how users experience the technology, but also about designers experiencing that interaction in their own work. They report that user experience tests must be applied during the design, using approaches such as (i) reported approaches; (ii) anecdotal descriptions; and (iii) first-person research. In addition, Begum [31] discusses the user interface (UI) and proposes an extended UCD process that adds an "Understand" phase to the method. The conventional steps of a UCD approach are (i) study, (ii) design, (iii) build, and (iv) evaluate; Begum [31] extends them to (i) understand, (ii) study, (iii) design, (iv) build, and (v) evaluate.

2.1.1. Usability

Usability is defined as the extent to which a product can be used by specific users to achieve specific goals with effectiveness, efficiency, and satisfaction in a specific context of use [32]. Usability is about more than whether users can perform tasks easily; it is also concerned with user satisfaction, i.e., whether users find the product engaging and aesthetically pleasing.
Usability testing is a UCD technique used to evaluate a product by testing it with actual users. It allows developers to obtain direct feedback on how users interact with a product. Through usability testing, it is possible to measure how well users perform against a reference and whether they meet predefined goals, also taking into account that users may do unexpected things during a test. Therefore, to create a design that works, it is helpful for developers to evaluate its usability, i.e., to observe what people do when they interact with the product [26,30].
Usability is thus the outcome of a UCD process: a process that examines how and why a user will adopt a product and seeks to evaluate that use. This process is iterative and continuously improves the design after each evaluation cycle.

2.1.2. System Usability Scale (SUS)

The System Usability Scale (SUS) provides a reliable tool for measuring usability. It consists of a 10-item questionnaire with five response options which are scored by a 5-point Likert scale, ranging from “1—strongly disagree” to “5—strongly agree”. Originally created by Brooke [33], it allows researchers, engineers, and designers to evaluate a wide variety of products and services, including hardware, software, mobile devices, websites, and applications.
The 10 statements on the SUS are as follows:
(1) I think that I would like to use this system frequently.
(2) I found the system unnecessarily complex.
(3) I thought the system was easy to use.
(4) I think that I would need the support of a technical person to be able to use this system.
(5) I found the various functions in this system were well integrated.
(6) I thought there was too much inconsistency in this system.
(7) I would imagine that most people would learn to use this system very quickly.
(8) I found the system very cumbersome to use.
(9) I felt very confident using the system.
(10) I needed to learn a lot of things before I could get going with this system.
After completion of the questionnaires by the interviewees, the SUS score is calculated as follows:
  • For odd-numbered items, subtract 1 from the user response;
  • For even-numbered items, subtract the user response from 5;
  • This scales all values from 0 to 4 (with 4 being the most positive response);
  • Add the converted responses for each user and multiply the total by 2.5. This converts the range of possible values from 0–40 to 0–100.
Although the scores range from 0 to 100, they are not percentages and should be considered only in terms of their percentile ranking. Based on the research of Brooke [33], an SUS score above 68 is considered "above average", and anything below 68 is "below average". However, the best way to interpret the results involves normalizing the scores to produce a percentile ranking, a process similar to grading on a curve based on the distribution of all scores. To get an A (the top 10% of scores), a product needs to score above 80.3; this is also the score at which users are more likely to recommend the product to a friend. Scoring at the mean of 68 gives the product a C, and anything below 51 is an F (putting the product in the bottom 15%).
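As a concrete illustration of the conversion described above, the short sketch below computes the SUS score for one respondent. It is a minimal example written here for illustration only; it is not part of the original system.

```python
def sus_score(responses):
    """Compute the SUS score (0-100) from ten Likert responses (1-5).

    Odd-numbered items (1st, 3rd, ...) contribute (response - 1);
    even-numbered items contribute (5 - response); the sum of the ten
    converted values (0-40) is multiplied by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses between 1 and 5")
    converted = [(r - 1) if i % 2 == 0 else (5 - r)  # i = 0 is item 1 (odd)
                 for i, r in enumerate(responses)]
    return sum(converted) * 2.5

# Example: a strongly positive respondent scores 100.0
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))
```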

2.2. Eye Tracking

Eye tracking is a technique to measure either the point of gaze (where someone is gazing) or the motion of an eye relative to the head. Eye tracking technology has applications in industry and research in visual systems [34,35,36,37], psychology [38,39], assistive technologies [40,41,42], marketing [43], as an input device for human–computer interaction [44,45,46,47], and in product and website design [48].
Generally, eye tracking measures the eyeball position and determines the gaze direction of a person. Eye movements can be tracked using different methods, which fall into four categories: (i) infrared oculography (IROG); (ii) scleral search coil (SSC); (iii) electro-oculography (EOG); and (iv) video-oculography (VOG). SSC measures the movement of a coil attached to the eye [24,49]; VOG/IROG performs optical tracking without direct contact with the eye [42,47]; and EOG measures electric potentials using electrodes placed around the eyes [50]. Currently, most eye tracking research for HCI is based on VOG, as it reduces the invasiveness to the user to some degree [40].
The eye is one of the main human input channels: about 80 to 90 percent of information about the outside world is obtained through the eyes [51]. For communication from user to computer, eye movements can be regarded as a pivotal real-time input medium, which is especially important for people with severe motor disabilities, who have limited anatomical sites with which to control input devices [52].
The research into eye tracking techniques in HCI is mainly focused on incorporating eye movements into the communication with the computer in a convenient and natural way. The most intuitive solution for incorporating eye movements into HCI is the use of an eye tracker directly connected to a manual input source, such as a mouse. By installing an eye tracker and using its x, y coordinate output stream as a virtual mouse, the movement of the user’s gaze directly causes the mouse cursor to move (eye mouse). In order to provide such appropriate interaction, several eye-tracking-based control systems have been developed, detailed as follows.
Chin et al. [53] proposed a cursor control system for computer users which integrated electromyographic signals from facial muscles and point-of-gaze coordinates produced by an eye-gaze tracking system as inputs. Although it enabled a reliable click operation, it was slower than a control system that only used eye tracking, and its accuracy was low. Missimer and Betke [54] constructed a system that used the head position to control the mouse cursor and simulated the left- and right-clicks of the mouse by blinking the left or right eye. This system relied on the position of the user's head to control the mouse cursor position; however, irregular movement of the user's head affected the accuracy of the click function. Lupu et al. [41] proposed a communication system for people with disabilities based on a special device composed of a webcam mounted on the frame of a pair of glasses for image acquisition and processing. The device detected eye movement, and voluntary eye blinking was correlated with a pictogram or keyword selection reflecting the patient's needs. The drawback of this system was that the image processing algorithm could not accurately process the acquired images (low resolution) and was not robust to variations in light intensity. Later, to improve the reliability of the communication system, they proposed an eye tracking mouse system using video glasses and a new robust eye tracking algorithm based on adaptive binary segmentation thresholding of the acquired images [42].
Lately, several similar systems have also been developed [55,56], and the main concept of these systems is to capture images from a camera, either mounted on headgear worn by the user or mounted remotely, and extract the information from different eye features to determine the point of the gaze. Since, at the time the research was performed, commercial eye trackers were prohibitively expensive to use in HCI, all the aforementioned eye tracking control systems were proposed with self-designed hardware and software. It was difficult for these systems to achieve widespread adoption, as the software and hardware designs were closed source.

2.3. Smart Homes

There are many motivations to design and develop applications in smart homes, the main ones being independent living [3,7,9,10,11,57]; wellbeing [4,6,8,12,58]; efficient use of electricity [59,60,61,62,63,64,65,66,67,68,69,70,71,72]; and safety and security [73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88].
The expression “smart home” is used for a home environment with advanced technology that enables control and monitoring for its occupants and boosts independent living through sensors and actuators to control the environment or through wellness forecasting based on behavioral pattern generation and detection. A variety of smart home systems for assisted living environments have been proposed and developed, but there are, in fact, few homes that apply smart technologies. One of the main reasons for this is the complexity and varied design requirements associated with different domains of the home, which are communication [89,90,91,92,93,94,95], control [96,97,98,99,100,101,102,103,104,105,106,107,108,109,110], monitoring [111,112,113,114,115,116], entertainment [117,118,119,120], and residential and living spaces [121,122].
As an important component of the Internet of Things (IoT), smart homes serve users effectively by connecting them with devices based on IoT. Smart home technology based on IoT has changed human life by providing connectivity to everyone regardless of time and place [123,124]. Home automation systems have become increasingly sophisticated in recent years, as these systems provide infrastructure and methods to exchange all types of appliance information and services [125]. A smart home is a domain of IoT, which is the network of physical devices that provides electronic, sensor, software, and network connectivity inside a home.
There are many smart home systems across the world, most notably in Asia, Europe, and North America. In Asia, it is important to highlight Welfare Techno Houses [126,127], Smart House Ikeda [128], Robotics Room and Sensing Room [129], ActiveHome [130], ubiHome [131,132,133,134], Intelligent Sweet Home [135], UKARI Project and Ubiquitous Home [136,137], and Toyota Dream Home PAPI [138]. In Europe, there are comHOME [139], Gloucester Smart Home [140], CUSTODIAN Project [141], Siemens [142], myGEKKO [143], and MATCH [144]. In North America, there are Adaptive Smart House [145,146], Aware Home Research Initiative (AHRI) [147], MavHome Project [148,149], House_n (MIT House) [150,151], EasyLiving Project [152], Gator Tech Smart House [153], DOMUS Laboratory [154], Intelligent Home Project [155], CASAS [156,157], Smart Home Lab [158,159], AgingMO [160], and Home Monitoring at Rochester University [161].

3. Proposed Assistive System

The assistive system proposed here empowers people with severe physical disability and mitigates the limitations in everyday life with which they are confronted. The system aims at assisting people with physical disability to pursue daily living autonomously. In Figure 1, the local user is the person with disability who can control the equipment of his/her smart home through eye gaze using the device controller (gBox). At the same time, the caregiver (external user) can monitor the use of the system.
The proposed eye-gaze-tracking-based control system is a software application that uses a low-cost eye tracker (e.g., The Eye Tribe 1.0 and Tobii Pro). The application detects the user's gaze with the "mouse cursor control" function provided by the eye tracker, which redirects the mouse cursor to the gaze position. Therefore, the system knows where the user is gazing according to the position of the mouse cursor. By gazing at a point for a few seconds, the user triggers the corresponding event; in this way, users can select and "click" the corresponding action. The eye tracker detects and tracks the coordinates of the user's eye gaze on the screen, making it possible to create applications that can be controlled in this way.
The eye tracker software is based on an open Application Programming Interface (API) that allows applications (clients) to communicate with the eye tracker server to obtain eye gaze coordinates. The communication is based on messages sent asynchronously over a socket connection using the Transmission Control Protocol (TCP).
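To make the client–server exchange concrete, the sketch below shows a hypothetical TCP client that requests one gaze frame from such a server. The host, port, and JSON field names are assumptions made only to illustrate the socket-based communication described above; the real protocol is defined by the eye tracker's API documentation.

```python
import json
import socket

# Hypothetical client for an eye tracker server that exchanges JSON
# messages over TCP (host, port, and message fields are assumptions).
HOST, PORT = "localhost", 6555

with socket.create_connection((HOST, PORT)) as sock:
    # Ask the server for the latest gaze frame.
    request = {"category": "tracker", "request": "get", "values": ["frame"]}
    sock.sendall((json.dumps(request) + "\n").encode("utf-8"))

    # Read one newline-terminated JSON reply and extract the gaze point.
    reply = json.loads(sock.makefile().readline())
    gaze = reply["values"]["frame"]["avg"]          # assumed field names
    print("gaze at x=%.1f, y=%.1f" % (gaze["x"], gaze["y"]))
```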
It should be noted that, to use this assistive system, it is not necessary to install any software other than a web browser, since the application was developed to run in the browser. It is only recommended to use up-to-date versions of well-known browsers, such as Google Chrome (preferred), Mozilla Firefox, or Internet Explorer.

3.1. System Architecture

Different systems for HCIs based on biological signals have been proposed with various techniques and applications [162]. Despite each work presenting unique properties, most of them fit into a common framework. Figure 2 shows the framework adopted in this system.
Observing the model, a loop structure can be recognized: it starts from the User, whose biological signals are the primary input, and ends with the environment that is affected by the actions of the system. Along this path, three main modules can be identified: the Biological Signal Translator module, the Server and Cloud module, and the Device Controller/Device module (gBox). Finally, the loop is closed through different kinds of user feedback. The communication between the modules is bidirectional so that each module knows the outcome of a command.

3.2. Connectivity

The web application was developed to work both online and offline. To open the application in "online mode", the user simply opens the Internet browser and enters the domain where it is hosted (https://ntagbox.000webhostapp.com). The online application can be hosted on any HTTP server; the domain used in this work is provided free of charge by "Hostinger", with limited, but sufficient, configurations. Because there is an external server, an Internet connection is necessary. In this case, the connection is via WebSocket (WS), which is the best option, as the connection between the application and the physical device is made over the Internet; this way, the user commands are stored on the server instantly.
On the other hand, to use the application in "offline mode", it is only necessary to have the site files in a folder on the computer and open the "intex.html" file, and the browser will run the application. In this case, the connection is via AJAX, which is used when there is no Internet access in the user's home (or if the Internet drops out temporarily); thus, the connection between the application and the physical device is made over the Intranet, and the user commands are stored temporarily on the computer until the connection via WS is established.
To implement the local and external connections to the physical device, two web communication channels were created (Figure 3): HTTP and WS. Once a command is launched from the application (APP), an internal mechanism identifies whether the external (Internet) server is reachable or not (local connection), and also whether the physical device is properly connected to that server. The application has two communication clients, one for each channel. If the device is connected to the Internet, WS is used as the communication channel both in the application and in the device, since it is capable of establishing a persistent connection with the server, allowing data to be sent bidirectionally and asynchronously. When there is no Internet connection and the application is in the same local network as the physical device, the communication channel used is HTTP, with an HTTP server on the physical device so that it has an IP address that identifies it locally.
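A minimal sketch of this application-side decision is given below, assuming a hypothetical cloud WebSocket endpoint and a hypothetical local HTTP endpoint on the gBox. The URLs, command format, and library choices (websocket-client and requests) are assumptions for illustration, not the actual implementation.

```python
import json
import requests                           # pip install requests
from websocket import create_connection  # pip install websocket-client

CLOUD_WS_URL = "wss://example-server/ws"     # assumed cloud WebSocket endpoint
GBOX_LOCAL_URL = "http://192.168.0.50/cmd"   # assumed local HTTP endpoint of the gBox

def send_command(command: str, password: str) -> str:
    """Send a command via WS when the Internet server is reachable,
    falling back to the local HTTP server on the gBox otherwise."""
    payload = {"password": password, "command": command}
    try:
        ws = create_connection(CLOUD_WS_URL, timeout=3)
        ws.send(json.dumps(payload))
        reply = ws.recv()            # confirmation routed back through the cloud
        ws.close()
        return reply
    except Exception:
        # No Internet (or server unreachable): talk to the gBox over the Intranet.
        resp = requests.post(GBOX_LOCAL_URL, json=payload, timeout=3)
        return resp.text

print(send_command("lamp_on", "secret"))
```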
In the physical device, data packets can be received through either of the two communication channels. However, they are directed to a single component, the HANDLER, which processes the information contained therein. The HANDLER authenticates the received data and identifies the command contained therein so that the TRIGGER can be activated. The TRIGGER is the set of triggers and sensors responsible for interacting with the user's devices. Some commands simply require changes in internal variables of the system; for this purpose, the device also has a small non-volatile SPIFFS memory module responsible for storing such variables (SSID and Wi-Fi password, user password, relay states, etc.), which also keeps the data intact in the event of a power outage. If everything succeeds, the RESPONSE block returns the confirmation message to the application through the same path by which the packet arrived.
If this input path is the WS client, the packet is returned to the WS server in the cloud, which stores the command so that it is accessible via the Internet and finally sends the confirmation to the WS client of the application. If this input path is the HTTP server, the device returns the response directly to the application, which places it in a queue to be sent to the server as soon as an Internet connection is established.
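The queueing behavior can be pictured with the short sketch below, under the assumption of a simple in-memory list that is flushed once the WS connection is re-established; the function and variable names are illustrative, not the actual implementation.

```python
from collections import deque

pending = deque()          # confirmations received over HTTP while offline

def on_http_response(response: dict) -> None:
    """Store a response locally until the cloud WS connection is back."""
    pending.append(response)

def on_ws_connected(ws_send) -> None:
    """Flush every queued response to the cloud server once WS is available."""
    while pending:
        ws_send(pending.popleft())

# Example: two responses queued offline, then flushed on reconnection.
on_http_response({"cmd": "lamp_on", "ok": True})
on_http_response({"cmd": "fan_off", "ok": True})
on_ws_connected(print)     # here print stands in for the WS send function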

3.3. GlobalBox (gBox)

The GlobalBox (gBox) is the device controller module of the smart home. Figure 4 shows, in a simplified form, the main components of the gBox.
Before starting the application, the switch button must be turned on. With the box powered up, it can receive information through the Wi-Fi module. This information is the command sent by the user through the user interface running on the computer. The information received by the Wi-Fi module is processed by the microcontroller, which then performs the actions corresponding to the received commands: turning the equipment relays on or off, or sending specific commands through the infrared (IR) emitter to control the functions of the TV or radio.
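This dispatch step can be pictured with the short sketch below, in which received commands are mapped either to relay toggles or to IR emissions. The command strings and the two helper functions are hypothetical; the real logic runs on the gBox microcontroller firmware.

```python
# Illustrative dispatch of commands received over Wi-Fi (hypothetical names).
relay_states = {"lamp": False, "fan": False}

def set_relay(device: str, on: bool) -> None:
    relay_states[device] = on            # would drive the physical relay

def send_ir(code: str) -> None:
    print("emitting IR code", code)      # would drive the IR emitter

def dispatch(command: str) -> None:
    if command == "lamp_on":
        set_relay("lamp", True)
    elif command == "lamp_off":
        set_relay("lamp", False)
    elif command == "tv_power":
        send_ir("0x20DF10EF")            # hypothetical stored code
    else:
        print("unknown command:", command)

dispatch("lamp_on")
dispatch("tv_power")
```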

3.4. Wireless Infrared Communication

Infrared (IR) signals are essential to control some residential devices, such as TVs and radios. The gBox has an internal library with a set of the most commonly used IR protocols already implemented, which assists in the task of modulating and demodulating IR signals. The hardware consists of an IR detector that receives signals at 38 kHz, an IR emitter, and the microcontroller, which is responsible for processing and storing the signal in memory for later use.
To store the code of any remote control, the read command is sent from the application so that the demodulator is activated; the caregiver then presses the button that he/she wants to record (pointing the remote control towards the IR detector), and the received IR signal is transformed into a code that can be stored in memory (Figure 5).
To emit an IR signal, the corresponding command is sent from the application, activating the signal modulator, which takes the codes stored in the memory of the microcontroller and transforms them back into the original IR signal, sending it to the IR emitter module (Figure 6). The emitter module, when pointed towards the target device, acts in the same way as the remote control in its respective function.
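The record-and-replay flow can be summarized by the following sketch, which keeps a dictionary of recorded codes keyed by device and button. The protocol name, hexadecimal code, and function names are purely illustrative; the actual modulation and demodulation run on the gBox IR hardware and firmware.

```python
# Illustrative bookkeeping of recorded IR codes (values are made up).
ir_codes = {}

def record(device: str, button: str, protocol: str, hex_code: str) -> None:
    """Store the code captured by the IR detector for later replay."""
    ir_codes[(device, button)] = {"protocol": protocol, "code": hex_code}

def emit(device: str, button: str) -> dict:
    """Return the stored code that the modulator would re-emit."""
    return ir_codes[(device, button)]

record("tv", "power", "NEC", "0x20DF10EF")   # hypothetical values
print(emit("tv", "power"))
```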

3.5. User Interface

The most common strategy in eye tracking applications is using the gaze to perform pointing and selection tasks. However, directly mapping the gaze (more specifically, fixations) to a selection command creates a problem known as the "Midas Touch", in which a selection can be activated at any screen position the user looks at, whether they intended it or not.
Thus, after filtering the eye tracker data, the Midas Touch problem must be avoided by implementing mechanisms for the user to indicate when he/she really wants to perform a selected command. The first approach to this problem is to implement a dwell time, in which an option is selected only after the gaze remains on it for a predefined time interval.
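A minimal sketch of a dwell-time mechanism of this kind is shown below: a command fires only after the gaze stays on the same icon for the configured interval. The icon names, timing source, and simulated gaze samples are illustrative assumptions, not the actual interface code.

```python
import time

class DwellSelector:
    """Fire a selection only after the gaze dwells on one target long enough."""

    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time   # seconds (user-configurable: 0.5/1.0/2.0/3.0)
        self.current = None
        self.since = None

    def update(self, target, now=None):
        """Feed the icon currently under the gaze (or None); return it when selected."""
        now = time.monotonic() if now is None else now
        if target != self.current:          # gaze moved to another icon: restart timer
            self.current, self.since = target, now
            return None
        if target is not None and now - self.since >= self.dwell_time:
            self.since = now                # avoid repeated firing while gaze stays
            return target
        return None

selector = DwellSelector(dwell_time=1.0)
for t in (0.0, 0.5, 1.1):                   # simulated gaze samples on the "Lamp" icon
    if selector.update("Lamp", now=t):
        print("select Lamp at t=%.1f s" % t)
```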
Figure 7a shows the initial screen when the system is off. When the user turns the system on by selecting the “Start” icon, the main menu shown in Figure 7b appears. In Figure 7b, the user has three options: (i) “Close”, to close the user interface of the system; (ii) “Start”, to open the home devices control menu; and (iii) “Config”, to open the configuration menu.
It is important for users to be able to turn the system on and off. That is why the interface presented in Figure 7b was included. If the user chooses “Close”, the system is closed and returns to the initial interface.
When the user selects the “Config” option in Figure 7b, the configuration menu shown in Figure 7c appears. In the configuration menu, the user can choose the icon size and the dwell time. There are three options for the icon size: (i) “Small”; (ii) “Medium”; and (iii) “Large”. There are four options for dwell time: (i) 0.5 s; (ii) 1.0 s; (iii) 2.0 s; and (iv) 3.0 s. After choosing any of the options, the interface automatically returns to the main menu with the new configuration saved.
When the user selects the “Start” option in Figure 7b, the control menu of the home devices shown in Figure 7d appears. In this menu, the user is presented with four options of devices to be turned on/off: a fan, TV, lamp, and radio. In addition, there is an option to close that menu to return to the main menu (Figure 7b) by selecting the center icon. The icon size on the interface and dwell time used are the ones that the user selected in the configuration menu.
When the device is turned on, the background of the icon becomes yellow, like “Lamp” and “Radio” are in Figure 7e. Fan and TV are turned off; thus, the background of the icons is white. After turning the desired device on or off, the user can give the command “Close” and turn the system off. This command closes the interface, but the system keeps the selected devices in their current state (on or off).
After selecting the TV icon, the interface displays the TV submenu shown in Figure 7f. In the TV submenu, the user can turn the TV on or off, change the channel up or down, increase or decrease the volume, and close the TV submenu. In this last option, the TV submenu is closed and the interface returns to the home devices control menu, but the system keeps the TV in its current state (on or off).

3.6. Caregiver Interface

The gBox Central Management is accessible from the website (https://ntagbox.000webhostapp.com/). The header of the caregiver interface contains the top bar and titles. Before giving any command in the application, the password must be entered in the password field of the bar. After that, it is recommended to click "Update Status" so that the application synchronizes with the current states of the equipment. The "Start Application" button opens the user interface so the user can control the smart home with the eye tracker. On the right side of the bar, the connection status between the application and the physical devices is reported; the connection flag is updated independently of the access password. There are three possible connection statuses:
  • Connected via WS. This is the best connection. It occurs when the connection between the application and the physical device is made over the Internet; that way, user commands are stored on the server instantly.
  • Connected via AJAX. This occurs when the connection between the application and the physical device is made over the Intranet, so the commands are stored temporarily on the user's computer until a connection via WS is established.
  • Not Connected. This occurs when there is no connection between the application and the physical device. In this case, it is suggested to refresh the site and check the connections with the physical device.
The body of the application is composed of seven sections: (i) Last Commands; (ii) General Notifications; (iii) Infrared Remote Control Settings; (iv) Remote Actuation; (v) Data Acquisition; (vi) Change Password; and (vii) Change Wi-Fi Password, the details of which are as follows.
(i) “Last Commands”: In this section, the commands successfully performed are presented, as well as the date and time they occurred. To appear in this list, it is necessary to update the states after establishing a WS connection.
(ii) “General Notifications”: In this section, all the notifications from the application features are presented; for example, “updating status” or “the status are updated”.
(iii) “Infrared Remote Control Settings”: In this section, it is possible to update the IR commands by pressing the “READ” button and following the instructions displayed in the general notifications section. In addition, the hexadecimal codes and protocols of the buttons/commands of the IR control are presented.
(iv) “Remote Actuation”: This section allows the caregiver to actuate the smart home devices directly from the caregiver application.
(v) “Data Acquisition”: This section allows the download of the list of commands based on a specified time interval. The downloaded text file can be accessed in any text editor or spreadsheet analysis program (a parsing sketch is given after this list).
(vi) “Change Password”: This section allows the change of the user password. It is necessary that the password field of the upper bar be correctly filled with the old password.
(vii) “Change Wi-Fi Password”: This functionality allows the change of the passwords of the SSID and of the Wi-Fi. It is necessary that the password field of the upper bar be correctly filled with the user password.
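As an illustration of how the downloaded command list from the "Data Acquisition" section could be processed, the sketch below assumes a simple CSV-like text file with timestamp and command columns. The file layout and column names are assumptions, since the actual export format is not specified here.

```python
import csv
from collections import Counter

# Hypothetical layout of the downloaded file: one "timestamp,command" row per line
# (the real export format may differ, so adjust the parsing as needed).
def summarize(path="commands.txt"):
    counts = Counter()
    with open(path, newline="") as f:
        for _timestamp, command in csv.reader(f):
            counts[command] += 1
    return counts

# e.g. Counter({'lamp_on': 12, 'tv_channel_up': 9, ...})
```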

4. Tests, Results, and Discussion

In this section, we report the experiments, which were divided into two steps, and analyze and discuss the results. In the first step, the proposed assistive system was assembled in an actual home where tests were conducted with 29 participants (group of able-bodied participants). In the second step, the system was tested for seven days, with online monitoring, by a person with severe disability (end-user) in her own home, not only to increase her convenience and comfort, but also so that the system would be tested where it would in fact be used. The objective of this test was to explore the effectiveness of the assistive system, the ability of the participant to learn how to use the system, and the efficiency and the usability of the proposed user interface.
According to Resolution No. 466/12 of the National Health Council of Brazil, the research was approved by the Committee of Ethics in Human Beings Research of the Federal University of Espirito Santo (CEP/UFES) through opinion nº 2.020.868, of April 18, 2017.

4.1. Tests with a Group of Able-Bodied Participants

4.1.1. Pre-Test Preparation

Initially, we fully installed the system and tested all possible commands to verify that the system was working properly. Some errors were found and quickly corrected so that the system was considered to work perfectly before we started the tests with the participants.
Afterwards, a pilot test was conducted with one of the involved researchers to rehearse the procedure before conducting the study with the participants. The researcher completed all the data collection instruments. The problems encountered during the pilot test helped to identify changes before conducting the experiment with the participants.

4.1.2. Participants

To test the system, 29 healthy subjects (group of able-bodied participants) participated in the research with the assistive system in the smart home. The participants were 18 men and 11 women, aged from 17 to 40 years (average: 28) and with heights from 1.50 to 1.94 m (average: 1.71 m). Some of the participants had had at least one previous experience with HCI through biological signals, but almost none of them had used eye tracking.
Of the total, 12 participants (#2, #8, #9, #11, #12, #14, #15, #16, #17, #22, #25, and #26) wear glasses; however, all of them performed the test without their glasses. In some cases, this was the participant's own preference, but in most cases their glasses had an anti-reflective coating that prevented, or at least disturbed, the infrared light of the eye tracker from passing through the lenses, so that the correct position of the participant's eyes could not be obtained. This is an important limitation of this study.

4.1.3. Experimental Sessions

The tests started by welcoming the participants and making them feel at ease. The participants were given an overview of the system and the test and were told that all their information would be kept private.
Each participant was seated on a couch in front of the laptop that contained the user interface, and the eye tracker was positioned properly pointing to his/her eyes (Figure 8).
After that, the participant performed the calibration stage of the eye tracker, following the manufacturer's guidelines. Each participant performed the test over about five to ten minutes. The participants were asked to use the system long enough to test all the available functionality options.
It is important to mention that the user was positioned facing a glass door that provided access to a balcony, with high incidence of sunlight. Despite this, the sunlight did not cause a problem in performing the experiments, showing the robustness of the eye tracker.
It can be seen in Figure 8 that the eye tracker was positioned towards the user’s eyes. In the course of the experiments, it is normal for a person to move his/her head a little, moving away from the location where the eye tracker was calibrated. We note that small vertical and horizontal position variations (around 5 to 10 cm) did not significantly interfere with the system performance. However, if the user’s eyes are completely out of range of the eye tracker, then he/she will not be able to send commands to the system. In this case, it may be necessary to reposition the eye tracker and calibrate it again. Considering that this system was designed to be used by people with severe disabilities, it is not expected that they will have wide head movements.
After the end of the session, the participant answered the SUS questionnaire and was encouraged to make suggestions.

4.1.4. Results and Discussion

Of the able-bodied participants, 3 opted for the small interface icon size option, 22 opted for medium, and 4 opted for large. As predicted in our previous work [163], most users opted for the medium size option. However, it is important to note that seven participants (24% of the total) preferred another size, thus showing the advantage of having options available.
For dwell time, 6 participants opted for 0.5 s, 18 opted for 1.0 s, 5 opted for 2.0 s, and no one chose the option of 3.0 s. Again, as predicted in our previous work [163], most users opted for the option of 1.0 s. However, it is important to note that 11 participants (38% of the total) preferred another time interval, thus showing the advantage of having this functionality available.
In fact, only 14 participants opted for the combination of medium icon size and 1.0 s dwell time. In other words, 15 participants (52% of the total) preferred another size or other time interval, and this shows the importance of having options to choose from in order to increase the usability of the system.
Regarding the usability, three participants gave a maximum SUS score, and the lowest result was 75. Thus, the overall mean was 89.9, with a standard deviation of 7.1. It is worth mentioning that products evaluated above 80.3 are in the top 10% of the scores. In fact, according to Brooke [33] and Bangor [164], products evaluated in the 90 point range are considered exceptional, products evaluated in the range of 80 points are considered good, and products evaluated in the range of 70 points are acceptable. Figure 9 presents the SUS score of each item evaluated by the participants regarding the assistive system.
The items regarding the available functionality, the complexity of use, and confidence in using the system all received scores above 80. The lowest score obtained (79.3) was related to statement S1, which is about interest in using the system frequently. Many participants said that they would not have much interest in using this system, as the system was designed for a person with severe disability, which is not their case (able-bodied). This reinforced the need to test the system with people with severe disabilities.

4.2. Tests with a Person with Disabilities

4.2.1. Pre-Test Preparation

Initially, an interview was conducted by the occupational therapist of our research group to better understand the potential participant. At this point, relevant information was gathered about her disability and daily life, whether there was interest in participating in the study, in what activities she was most involved, and what tasks she would like to be able to do or have the assistance of the technology to execute.
After this first contact, the information was brought to the research group, and the candidate was selected to participate in the experiments. A visit was then scheduled by telephone between the occupational therapist and the participant's husband, to take place at their home, where the system would be used.

4.2.2. Participant Background

The participant is female and 38 years old. She was diagnosed in June 2012 with Wernicke's encephalopathy. As a result, the participant presents a lack of motor coordination (ataxia) and extreme difficulty in balancing and walking. Her most difficult activities are those that require manual dexterity, such as typing on the computer, writing, using a mobile device, and handling the TV remote control. In addition, the participant has great difficulty walking, which she considers practically impossible and which, when necessary, causes her enormous discomfort.

4.2.3. Experimental Sessions

To test the assistive system, it was firstly fully assembled and configured in the home of the end-user, who agreed to participate in the experiments (Figure 10). The participant was given an overview of the system and test. Before proceeding to the test, one of the researchers performed all possible commands to verify that the system was working properly.
The participant was seated on a couch in front of the laptop that contained the user interface, and the eye tracker was positioned properly, pointing towards her eyes (Figure 10). After that, the participant performed the calibration stage of the eye tracker, following the manufacturer’s guidelines. The participant performed the test over seven days. It was required of the participant to use the system long enough so she could test all the functionality options available.
After the end of the session, the participant answered the SUS questionnaire and was also encouraged to make suggestions.

4.2.4. Results and Discussion

According to information obtained in the interview with the participant, Friday and Saturday were the best days of the week for her to receive the researchers in her home, so she opted to install the system on a Friday (09/14/2018) and uninstall it on a Saturday (10/06/2018). Before the experiments, the participant informed us that she was not able to use the equipment on Sundays and Mondays, as on Sunday she usually receives many relatives in her house, and on Monday she spends the whole day away from home. In addition, the participant informed that she would need to make a trip for personal reasons during the experiment (from 09/23/2018 until 10/01/2018). All these situations put forward by the participant were considered pertinent, as they actually depict the daily life of a person with disability, revealing how technology needs to adapt to the person’s life. In addition, it was considered interesting to evaluate if the participant would use the system after she was away from home for a few days without using it. In many cases, assistive technology is abandoned, which did not happen with our system.
Table 1 shows the use of the assistive system by the participant, which shows the number of hours the system was used during the seven days of use.
It is important to note that the system was not only used, but used for several hours over several days, which was better than expected. The tests previously performed with the group of able-bodied participants, which lasted only 5 to 10 minutes each, although important for evaluating the system with several users, were much less representative than the present test with the person with disabilities, which had a total duration of more than 20 hours. Another fact that corroborates this is the number of commands performed by the system over the days, shown in Table 1. Note that the system received a total of 542 commands and, as reported by the user, worked perfectly.
To better understand how the system was used by the participant, Table 2 summarizes the complete information about the commands received by the system throughout the complete test. The system was always used between 2:00 p.m. and 10:00 p.m., and more than 80% of the commands were sent between 1:00 p.m. and 4:00 p.m., indicating a user routine.
Regarding usability (SUS), the user gave the maximum score for all the questions about willingness to use the system, available functions, and ease of and confidence in using it. The only item the user rated low was the one about needing to learn many things before using the system: according to her, she needed to learn many new things to be able to use it, and so she gave a low score on that item. She believes that after using the system more, she will not need to learn much additional information.
Despite this, the user evaluated the system with an average of 92.5, which is quite high, even higher than the previous tests with the group of able-bodied participants, in which the overall mean was 89.9. In fact, according to Brooke [33] and Bangor [164], products evaluated in the 90 point range are considered exceptional.
As recommended by Begum [31], in this work the methodology of UCD for the design of new products was used in order to put the needs and desires of the user first. This way, it was possible to understand, study, design, build, and evaluate the system from the user’s point of view.

5. Conclusions

This work presented an assistive system, based on eye gaze tracking for controlling and monitoring a smart home using the Internet of Things, which was developed following concepts of user-centered design and usability. The proposed system allowed a user with disabilities to control everyday equipment in her residence (lamps, television, fan, and radio). In addition, the system could allow the caregiver to remotely monitor the use of the system by the user in real time. The user interface developed included some functionality to improve the usability of the system as a whole. The experiments were divided into two steps. In the first step, the assistive system was assembled in an actual home where tests were conducted with 29 participants (group of able-bodied participants). In the second step, the system was tested for seven days, with online monitoring, by a person with disability (end-user). The results of the SUS showed that the group of able-bodied participants and the end-user evaluated the assistive system with mean scores of 89.9 and 92.5, respectively, positioning the tool as exceptional.

Author Contributions

A.B. conducted the study and wrote this paper. A.B. and D.L.-J. developed the software and hardware. A.B., D.L.-J., and M.S. performed the tests with the participants. L.E. and T.B.-F. supervised and technically advised all the work and contributed to the editing of this manuscript.

Acknowledgments

The authors thank Google Inc. for the Google Research Awards for Latin America, the Federal University of Espirito Santo (UFES/Brazil), Fapes/Brazil, CNPq/Brazil, and CAPES/Brazil for their financial support and scholarships.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AJAX: Asynchronous JavaScript and XML
API: Application Programming Interface
CEP/UFES: Committee of Ethics in Human Beings Research of the Federal University of Espirito Santo
EEPROM: Electrically-Erasable Programmable Read-Only Memory
EOG: Electro-oculography
HCI: Human–Computer Interaction
HMI: Human–Machine Interface
HTTP: Hypertext Transfer Protocol
IoT: Internet of Things
IP: Internet Protocol
IR: Infrared
IROG: Infrared Oculography
JSON: JavaScript Object Notation
SPIFFS: Serial Peripheral Interface Flash File System
SSC: Scleral Search Coil
SSID: Service Set IDentifier
SUS: System Usability Scale
TCP: Transmission Control Protocol
UCD: User-Centered Design
UI: User Interface
VOG: Video-Oculography
WS: WebSocket

References

  1. Tang, L.Z.W.; Ang, K.S.; Amirul, M.; Yusoff, M.B.M.; Tng, C.K.; Alyas, M.D.B.M.; Lim, J.G.; Kyaw, P.K.; Folianto, F. Augmented reality control home (ARCH) for disabled and elderlies. In Proceedings of the 2015 IEEE Tenth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), Singapore, 7–9 April 2015; pp. 1–2. [Google Scholar]
  2. Schwiegelshohn, F.; Wehner, P.; Rettkowski, J.; Gohringer, D.; Hubner, M.; Keramidas, G.; Antonopoulos, C.; Voros, N.S. A holistic approach for advancing robots in ambient assisted living environments. In Proceedings of the 2015 IEEE 13th International Conference on Embedded and Ubiquitous Computing, Porto, Portugal, 21–23 October 2015; pp. 140–147. [Google Scholar]
  3. Konstantinidis, E.I.; Antoniou, P.E.; Bamparopoulos, G.; Bamidis, P.D. A lightweight framework for transparent cross platform communication of controller data in ambient assisted living environments. Inf. Sci. (NY) 2015, 300, 124–139. [Google Scholar] [CrossRef]
  4. Boumpa, E.; Charalampou, I.; Gkogkidis, A.; Ntaliani, A.; Kokkinou, E.; Kakarountas, A. Assistive System for Elders Suffering of Dementia. In Proceedings of the 2018 IEEE 8th International Conference on Consumer Electronics, Berlin, Germany, 2–5 September 2018; pp. 1–4. [Google Scholar]
  5. Brazil Assistive Technology. In Proceedings of the National Undersecretary for the Promotion of the Rights of People with Disabilities; Technical Assistance Committee: Geneva, Switzerland, 2009.
  6. Elakkiya, J.; Gayathri, K.S. Progressive Assessment System for Dementia Care Through Smart Home. In Proceedings of the 2017 International Conference on Algorithms, Methodology, Models and Applications in Emerging Technologies (ICAMMAET), Chennai, India, 16–18 February 2017; pp. 1–5. [Google Scholar]
  7. Rafferty, J.; Nugent, C.D.; Liu, J.; Chen, L. From Activity Recognition to Intention Recognition for Assisted Living within Smart Homes. IEEE Trans. Hum. -Mach. Syst. 2017, 47, 368–379. [Google Scholar] [CrossRef]
  8. Mizumoto, T.; Fornaser, A.; Suwa, H.; Yasumoto, K.; Cecco, M. De Kinect-based micro-behavior sensing system for learning the smart assistance with human subjects inside their homes. In Proceedings of the 2018 Workshop on Metrology for Industry 4.0 and IoT, Brescia, Italy, 16–18 April 2018; pp. 1–6. [Google Scholar]
  9. Daher, M.; El Najjar, M.E.; Diab, A.; Khalil, M.; Charpillet, F. Multi-sensory Assistive Living System for Elderly In-home Staying. In Proceedings of the 2018 International Conference on Computer and Applications (ICCA), Beirut, Lebanon, 25–26 August 2012; pp. 168–171. [Google Scholar]
  10. Ghayvat, H.; Mukhopadhyay, S.; Shenjie, B.; Chouhan, A.; Chen, W. Smart Home Based Ambient Assisted Living Recognition of Anomaly in the Activity of Daily Living for an Elderly Living Alone. In Proceedings of the 2018 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Houston, TX, USA, 14–17 May 2018; pp. 1–5. [Google Scholar]
  11. Wan, J.; Li, M.; Grady, M.J.O.; Hare, G.M.P.O.; Gu, X.; Alawlaqi, M.A.A.H. Time-bounded Activity Recognition for Ambient Assisted Living. IEEE Trans. Emerg. Top. Comput. 2018, 1–13. [Google Scholar] [CrossRef]
  12. Kristály, D.M.; Moraru, S.-A.; Neamtiu, F.O.; Ungureanu, D.E. Assistive Monitoring System Inside a Smart House. In Proceedings of the 2018 International Symposium in Sensing and Instrumentation in IoT Era (ISSI), Shanghai, China, 6–7 September 2018; pp. 1–7. [Google Scholar]
  13. Falcó, J.L.; Vaquerizo, E.; Artigas, J.I. A Multi-Collaborative Ambient Assisted Living Service Description Tool. Sensors 2014, 14, 9776–9812. [Google Scholar] [CrossRef] [Green Version]
  14. Valadão, C.; Caldeira, E.; Bastos-filho, T.; Frizera-neto, A.; Carelli, R. A New Controller for a Smart Walker Based on Human-Robot Formation. Sensors 2016, 16, 1116. [Google Scholar] [CrossRef]
  15. Kim, E.Y. Wheelchair Navigation System for Disabled and Elderly People. Sensors 2016, 16, 1806. [Google Scholar] [CrossRef]
  16. Holloway, C.; Dawes, H. Disrupting the world of Disability: The Next Generation of Assistive Technologies and Rehabilitation Practices. Healthc. Technol. Lett. 2016, 3, 254–256. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Cruz, D.M.C.D.; Emmel, M.L.G. Assistive Technology Public Policies in Brazil: A Study about Usability and Abandonment by People with Physical Disabilities. Rev. Fac. St. Agostinho 2015, 12, 79–106. [Google Scholar]
  18. Da Costa, C.R.; Ferreira, F.M.R.M.; Bortolus, M.V.; Carvalho, M.G.R. Assistive technology devices: Factors related to abandonment. Cad. Ter. Ocup. UFSCar 2015, 23, 611–624. [Google Scholar]
  19. Cruz, D.M.C.D.; Emmel, M.L.G. Use and abandonment of assistive technology for people with physical disabilities in Brazil. Available online: https://www.efdeportes.com/efd173/tecnologia-assistiva-com-deficiencia-fisica.htm (accessed on 19 February 2019).
  20. Marcos, P.M.; Foley, J. HCI (human computer interaction): Concepto y desarrollo. El Prof. La Inf. 2001, 10, 4–16. [Google Scholar] [CrossRef]
  21. Lamberti, F.; Sanna, A.; Carlevaris, G.; Demartini, C. Adding pluggable and personalized natural control capabilities to existing applications. Sensors 2015, 15, 2832–2859. [Google Scholar] [CrossRef] [PubMed]
  22. Bisio, I.; Lavagetto, F.; Marchese, M.; Sciarrone, A. Smartphone-Centric Ambient Assisted Living Platform for Patients Suffering from Co-Morbidities Monitoring. IEEE Commun. Mag. 2015, 53, 34–41. [Google Scholar] [CrossRef]
  23. Wang, K.; Shao, Y.; Shu, L.; Han, G.; Zhu, C. LDPA: A Local Data Processing Architecture in Ambient Assisted Living Communications. IEEE Commun. Mag. 2015, 53, 56–63. [Google Scholar] [CrossRef]
  24. Lopez-Basterretxea, A.; Mendez-Zorrilla, A.; Garcia-Zapirain, B. Eye/head tracking technology to improve HCI with iPad applications. Sensors 2015, 15, 2244–2264. [Google Scholar] [CrossRef]
  25. Butala, P.M.; Zhang, Y.; Thomas, D.C.; Wagenaar, R.C. Wireless System for Monitoring and Real-Time Classification of Functional Activity. In Proceedings of the 2012 Fourth International Conference on Communication Systems and Networks (COMSNETS 2012), Bangalore, India, 3–7 January 2012; pp. 1–5. [Google Scholar]
  26. Ahamed, M.M.; Bakar, Z.B.A. Triangle Model Theory for Enhance the Usability by User Centered Design Process in Human Computer Interaction. Int. J. Contemp. Comput. Res. 2017, 1, 1–7. [Google Scholar]
  27. Iivari, J.; Iivari, N. Varieties of user-centredness: An analysis of four systems development methods. J. Inf. Syst. 2011, 21, 125–153. [Google Scholar] [CrossRef]
  28. Norman, D.A. The Design of Everyday Things; Basic Books: New York, NY, USA, 2002. [Google Scholar]
  29. Preece, J.; Rogers, Y.; Sharp, H. Interaction Design-Beyond Human-Computer Interaction; John Wiley Sons: Hoboken, NJ, USA, 2002; pp. 168–186. [Google Scholar]
  30. Goodman, E.; Stolterman, E.; Wakkary, R. Understanding Interaction Design Practices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–11 May 2011. [Google Scholar]
  31. Begum, I. HCI and its Effective Use in Design and Development of Good User Interface. Int. J. Res. Eng. Technol. 2014, 3, 176–179. [Google Scholar]
  32. ISO 9241-210–Ergonomics of Human-System Interaction—Part 210: Human-Centred Design for Interactive Systems; ISO: Geneva, Switzerland, 2010.
  33. Brooke, J. System Usability Scale (SUS): A Quick-and-Dirty Method of System Evaluation User Information; Digital Equipment Co Ltd.: Reading, UK, 1986; pp. 1–7. [Google Scholar]
  34. Khan, R.S.A.; Tien, G.; Atkins, M.S.; Zheng, B.; Panton, O.N.M.; Meneghetti, A.T. Analysis of eye gaze: Do novice surgeons look at the same location as expert surgeons during a laparoscopic operation. Surg. Endosc. 2012, 26, 3536–3540. [Google Scholar] [CrossRef]
  35. Richstone, L.; Schwartz, M.J.; Seideman, C.; Cadeddu, J.; Marshall, S.; Kavoussi, L.R. Eye metrics as an objective assessment of surgical skill. Ann. Surg. 2010, 252, 177–182. [Google Scholar] [CrossRef]
  36. Wilson, M.; McGrath, J.; Vine, S.; Brewer, J.; Defriend, D.; Masters, R. Psychomotor control in a virtual laparoscopic surgery training environment: Gaze control parameters differentiate novices from experts. Surg. Endosc. 2010, 24, 2458–2464. [Google Scholar] [CrossRef]
  37. Wilson, M.R.; McGrath, J.S.; Vine, S.J.; Brewer, J.; Defriend, D.; Masters, R.S.W. Perceptual impairment and psychomotor control in virtual laparoscopic surgery. Surg. Endosc. 2011, 27, 2268–2274. [Google Scholar] [CrossRef] [PubMed]
  38. Sun, Q.; Xia, J.; Nadarajah, N.; Falkmer, T.; Foster, J.; Lee, H. Assessing drivers’ visual-motor coordination using eye tracking, GNSS and GIS: A spatial turn in driving psychology. J. Spat. Sci. 2016. [Google Scholar] [CrossRef]
  39. Moore, L.; Vine, S.J.; Cooke, A.M.; Ring, C. Quiet eye training expedites motor learning and aids performance under heightened anxiety: The roles of response programming and external attention. Psychophysiology 2012. [Google Scholar] [CrossRef] [PubMed]
  40. Eid, M.A.; Giakoumidis, N.; El-Saddik, A. A Novel Eye-Gaze-Controlled Wheelchair System for Navigating Unknown Environments: Case Study with a Person with ALS. IEEE Access 2016, 4, 558–573. [Google Scholar] [CrossRef]
  41. Lupu, R.G.; Ungureanu, F. Mobile Embedded System for Human Computer Communication in Assistive Technology. In Proceedings of the 2012 IEEE 8th International Conference on Intelligent Computer Communication and Processing, Cluj-Napoca, Romania, 30 August–1 September 2012; pp. 209–212. [Google Scholar]
  42. Lupu, R.G.; Ungureanu, F.; Siriteanu, V. Eye tracking mouse for human computer interaction. In Proceedings of the 2013 E-Health and Bioengineering Conference (EHB), Iasi, Romania, 21–23 November 2013; pp. 1–4. [Google Scholar]
  43. Scott, N.; Green, C.; Fairley, S. Investigation of the use of eye tracking to examine tourism advertising effectiveness. Curr. Issues Tour. 2015, 19, 634–642. [Google Scholar] [CrossRef]
  44. Cecotti, H. A Multimodal Gaze-Controlled Virtual Keyboard. IEEE Trans. Hum.-Mach. Syst. 2016, 46, 601–606. [Google Scholar] [CrossRef]
  45. Lewis, T.; Pereira, T.; Almeida, D. Smart scrolling based on eye tracking. Design an eye tracking mouse. Int. J. Comput. Appl. 2013, 80, 34–37. [Google Scholar]
  46. Nehete, M.; Lokhande, M.; Ahire, K. Design an Eye Tracking Mouse. Int. J. Adv. Res. Comput. Commun. Eng. 2013, 2, 1118–1121. [Google Scholar]
  47. Frutos-Pascual, M.; Garcia-Zapirain, B. Assessing visual attention using eye tracking sensors in intelligent cognitive therapies based on serious games. Sensors 2015, 15, 11092–11117. [Google Scholar] [CrossRef]
  48. Lee, S.; Yoo, J.; Han, G. Gaze-assisted user intention prediction for initial delay reduction in web video access. Sensors 2015, 15, 14679–14700. [Google Scholar] [CrossRef]
  49. Takemura, K.; Takahashi, K.; Takamatsu, J. Estimating 3-D Point-of-Regard in a Real Environment Using a Head-Mounted Eye-Tracking System. IEEE Trans. Hum.-Mach. Syst. 2014, 44, 531–536. [Google Scholar] [CrossRef]
  50. Manabe, H.; Fukumoto, M.; Yagi, T. Direct Gaze Estimation Based on Nonlinearity of EOG. IEEE Trans. Biomed. Eng. 2015, 62, 1553–1562. [Google Scholar] [CrossRef] [PubMed]
  51. Holzman, P.S.; Proctor, L.R.; Hughes, D.W. Eye-tracking patterns in schizophrenia. Science 1973, 181, 179–181. [Google Scholar] [CrossRef] [PubMed]
  52. Donaghy, C.; Thurtell, M.J.; Pioro, E.P.; Gibson, J.M.; Leigh, R.J. Eye movements in amyotrophic lateral sclerosis and its mimics: A review with illustrative cases. J. Neurol. Neurosurg. Psychiatry 2011, 82, 110–116. [Google Scholar] [CrossRef] [PubMed]
  53. Chin, C.A.; Barreto, A.; Cremades, J.G.; Adjouadi, M. Integrated electromyogram and eye-gaze tracking cursor control system for computer users with motor disabilities. J. Rehabil. Res. Dev. 2008, 45, 161–174. [Google Scholar] [CrossRef] [PubMed]
  54. Missimer, E.; Betke, M. Blink and wink detection for mouse pointer control. In Proceedings of the 3rd International Conference on Pervasive Technologies Related to Assistive Environments (PETRA ’10), Samos, Greece, 23–25 June 2010. [Google Scholar]
  55. Wankhede, S.; Chhabria, S. Controlling Mouse Cursor Using Eye Movement. Int. J. Appl. Innov. Eng. Manag. 2013, 1, 1–7. [Google Scholar]
  56. Meghna, S.M.A.; Kachan, K.L.; Baviskar, A. Head tracking virtual mouse system based on ad boost face detection algorithm. Int. J. Recent Innov. Trends Comput. Commun. 2016, 4, 921–923. [Google Scholar]
  57. Biswas, J.; Wai, A.A.P.; Tolstikov, A.; Kenneth, L.J.H.; Maniyeri, J.; Victor, F.S.F.; Lee, A.; Phua, C.; Jiaqi, Z.; Hoa, H.T.; et al. From context to micro-context–Issues and challenges in sensorizing smart spaces for assistive living. Procedia Comput. Sci. 2011, 5, 288–295. [Google Scholar] [CrossRef]
  58. Visutsak, P.; Daoudi, M. The Smart Home for the Elderly: Perceptions, Technologies and Psychological Accessibilities. In Proceedings of the 2017 XXVI International Conference on Information, Communication and Automation Technologies (ICAT), Sarajevo, Bosnia-Herzegovina, 26–28 October 2017; pp. 1–6. [Google Scholar]
  59. Beligianni, F.; Alamaniotis, M.; Fevgas, A.; Tsompanopoulou, P.; Bozanis, P.; Tsoukalas, L.H. An internet of things architecture for preserving privacy of energy consumption. In Proceedings of the Mediterranean Conference on Power Generation, Transmission, Distribution and Energy Conversion (MedPower 2016), Belgrade, Serbia, 6–9 November 2016; pp. 1–7. [Google Scholar]
  60. Bouchet, O.; Javaudin, J.; Kortebi, A.; El-Abdellaouy, H.; Lebouc, M.; Fontaine, F.; Jaffré, P.; Celeda, P.; Mayer, C.; Guan, H. ACEMIND: The smart integrated home network. In Proceedings of the 2014 International Conference on Intelligent Environments, Shanghai, China, 30 June–4 July 2014; pp. 1–8. [Google Scholar]
  61. Buhl, J.; Hasselkuß, M.; Suski, P.; Berg, H. Automating Behavior? An Experimental Living Lab Study on the Effect of Smart Home Systems and Traffic Light Feedback on Heating Energy Consumption. Curr. J. Appl. Sci. Technol. 2017, 22, 1–18. [Google Scholar] [CrossRef]
  62. Lim, Y.; Lim, S.Y.; Nguyen, M.D.; Li, C.; Tan, Y. Bridging Between universAAL and ECHONET for Smart Home Environment. In Proceedings of the 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, South Korea, 28 June–1 July 2017; pp. 56–61. [Google Scholar]
  63. Lin, C.M.; Chen, M.T. Design and Implementation of a Smart Home Energy Saving System with Active Loading Feature Identification and Power Management. In Proceedings of the 2017 IEEE 3rd International Future Energy Electronics Conference and ECCE Asia (IFEEC 2017–ECCE Asia), Kaohsiung, Taiwan, 3–7 June 2017; pp. 739–742. [Google Scholar]
  64. Soe, W.T.; Mpawenimana, I.; Difazio, M.; Belleudy, C.; Ya, A.Z. Energy Management System and Interactive Functions of Smart Plug for Smart Home. Int. J. Electr. Comput. Energ. Electron. Commun. Eng. 2017, 11, 824–831. [Google Scholar]
  65. Wu, X.; Hu, X.; Teng, Y.; Qian, S.; Cheng, R. Optimal integration of a hybrid solar-battery power source into smart home nanogrid with plug-in electric vehicle. J. Power Sources 2017, 363, 277–283. [Google Scholar] [CrossRef]
  66. Melhem, F.Y.; Grunder, O.; Hammoudan, Z. Optimization and Energy Management in Smart Home considering Photovoltaic, Wind, and Battery Storage System with Integration of Electric Vehicles. Can. J. Electr. Comput. Eng. 2017, 40, 128–138. [Google Scholar]
  67. Başol, G.; Güntürkün, R.; Başol, E. Smart Home Design and Application. World Wide J. Multidiscip. Res. Dev. 2017, 3, 53–58. [Google Scholar]
  68. Han, J.; Choi, C.; Park, W.; Lee, I.; Kim, S. Smart Home Energy Management System Including Renewable Energy Based on ZigBee and PLC. IEEE Trans. Consum. Electron. 2014, 60, 198–202. [Google Scholar] [CrossRef]
  69. Li, C.; Luo, F.; Chen, Y.; Xu, Z.; An, Y.; Li, X. Smart Home Energy Management with Vehicle-to-Home Technology. In Proceedings of the 2017 13th IEEE International Conference on Control & Automation (ICCA), Ohrid, Macedonia, 3–6 July 2017; pp. 136–142. [Google Scholar]
  70. Kiat, L.Y.; Barsoum, N. Smart Home Meter Measurement and Appliance Control. Int. J. Innov. Res. Dev. 2017, 6, 64–70. [Google Scholar] [CrossRef]
  71. Oliveira, E.L.; Alfaia, R.D.; Souto, A.V.F.; Silva, M.S.; Francês, C.R. SmartCoM: Smart Consumption Management Architecture for Providing a User-Friendly Smart Home based on Metering and Computational Intelligence. J. Microw. Optoelectron. Electromagn. Appl. 2017, 16, 732–751. [Google Scholar] [CrossRef]
  72. Kibria, M.G.; Jarwar, M.A.; Ali, S.; Kumar, S.; Chong, I. Web Objects Based Energy Efficiency for Smart Home IoT Service Provisioning. In Proceedings of the 2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN), Milan, Italy, 4–7 July 2017; pp. 55–60. [Google Scholar]
  73. Datta, S.K. Towards Securing Discovery Services in Internet of Things. In Proceedings of the 2016 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 7–11 January 2016; pp. 506–507. [Google Scholar]
  74. Huth, C.; Duplys, P.; Güneysu, T. Secure Software Update and IP Protection for Untrusted Devices in the Internet of Things Via Physically Unclonable Functions. In Proceedings of the 2016 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), Sydney, Australia, 14–18 March 2016; pp. 1–6. [Google Scholar]
  75. Rahman, R.A.; Shah, B. Security analysis of IoT protocols: A focus in CoAP. In Proceedings of the 2016 3rd MEC International Conference on Big Data and Smart City (ICBDSC), Muscat, Oman, 15–16 March 2016; pp. 1–7. [Google Scholar]
  76. Rajiv, P.; Raj, R.; Chandra, M. Email based remote access and surveillance system for smart home infrastructure. Perspect. Sci. 2016, 8, 459–461. [Google Scholar] [CrossRef] [Green Version]
  77. Wurm, J.; Hoang, K.; Arias, O.; Sadeghi, A.; Jin, Y. Security Analysis on Consumer and Industrial IoT Devices. In Proceedings of the 2016 21st Asia and South Pacific Design Automation Conference (ASP-DAC), Macau, China, 25–28 January 2016; pp. 519–524. [Google Scholar]
  78. Arabo, A. Cyber Security Challenges within the Connected Home Ecosystem Futures. Procedia Comput. Sci. 2015, 61, 227–232. [Google Scholar] [CrossRef] [Green Version]
  79. Golait, S.S. 3-Level Secure Kerberos Authentication for Smart Home Systems Using IoT. In Proceedings of the 2015 1st International Conference on Next Generation Computing Technologies (NGCT), Dehradun, India, 4–5 September 2015; pp. 262–268. [Google Scholar]
  80. Han, J.H.; Kim, J. Security Considerations for Secure and Trustworthy Smart Home System in the IoT Environment. In Proceedings of the 2015 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, South Korea, 28–30 October 2015; pp. 1116–1118. [Google Scholar]
  81. Huth, C.; Zibuschka, J.; Duplys, P.; Güneysu, T. Securing Systems on the Internet of Things via Physical Properties of Devices and Communications. In Proceedings of the 2015 Annual IEEE Systems Conference (SysCon) Proceedings, Vancouver, BC, Canada, 13–16 April 2015; pp. 8–13. [Google Scholar]
  82. Jacobsson, A.; Davidsson, P. Towards a Model of Privacy and Security for Smart Homes. In Proceedings of the 2015 IEEE 2nd World Forum on Internet of Things (WF-IoT), Milan, Italy, 14–16 December 2015; pp. 727–732. [Google Scholar]
  83. Peng, Z.; Kato, T.; Takahashi, H.; Kinoshita, T. Intelligent Home Security System Using Agent-based IoT Devices. In Proceedings of the 2015 IEEE 4th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, 27–30 October 2015; pp. 313–314. [Google Scholar]
  84. Peretti, G.; Lakkundit, V.; Zorzi, M. BlinkToSCoAP: An End-to-End Security Framework for the Internet of Things. In Proceedings of the 2015 7th International Conference on Communication Systems and Networks (COMSNETS), Bangalore, India, 6–10 January 2015; pp. 1–6. [Google Scholar]
  85. Santoso, F.K.; Vun, N.C.H. Securing IoT for Smart Home System. In Proceedings of the 2015 International Symposium on Consumer Electronics (ISCE), Madrid, Spain, 24–26 June 2015; pp. 1–2. [Google Scholar]
  86. Schiefer, M. Smart Home Definition and Security Threats. In Proceedings of the 2015 Ninth International Conference on IT Security Incident Management & IT Forensics, Magdeburg, Germany, 18–20 May 2015; pp. 114–118. [Google Scholar]
  87. Alohali, B.; Merabti, M.; Kifayat, K. A Secure Scheme for a Smart House Based on Cloud of Things (CoT). In Proceedings of the 2014 6th Computer Science and Electronic Engineering Conference (CEEC), Colchester, UK, 25–26 September 2014; pp. 115–120. [Google Scholar]
  88. Sivaraman, V.; Gharakheili, H.H.; Vishwanath, A.; Boreli, R.; Mehani, O. Network-Level Security and Privacy Control for Smart-Home IoT Devices. In Proceedings of the 2015 IEEE 11th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Abu Dhabi, United Arab Emirates, 19–21 October 2015; pp. 163–167. [Google Scholar]
  89. Amadeo, M.; Briante, O.; Campolo, C.; Molinaro, A.; Ruggeri, G. Information-centric networking for M2M communications: Design and deployment. Comput. Commun. 2016, 89–90, 105–106. [Google Scholar] [CrossRef]
  90. Li, H.; Seed, D.; Flynn, B.; Mladin, C.; Di Girolamo, R. Enabling Semantics in an M2M/IoT Service Delivery Platform. In Proceedings of the 2016 IEEE Tenth International Conference on Semantic Computing (ICSC), Laguna Hills, CA, USA, 4–6 February 2016; pp. 206–213. [Google Scholar]
  91. Rizopoulos, C. Implications of Theories of Communication and Spatial Behavior for the Design of Interactive Environments. In Proceedings of the 2011 Seventh International Conference on Intelligent Environments, Nottingham, UK, 25–28 July 2011; pp. 286–293. [Google Scholar]
  92. Waltari, O.; Kangasharju, J. Content-Centric Networking in the Internet of Things. In Proceedings of the 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 9–12 January 2016; pp. 73–78. [Google Scholar]
  93. Seo, D.W.; Kim, H.; Kim, J.S.; Lee, J.Y. Hybrid reality-based user experience and evaluation of a context-aware smart home. Comput. Ind. 2016, 76, 11–23. [Google Scholar] [CrossRef]
  94. Amadeo, M.; Campolo, C.; Iera, A.; Molinaro, A. Information Centric Networking in IoT scenarios: The Case of a Smart Home. In Proceedings of the 2015 IEEE International Conference on Communications (ICC), London, UK, 8–12 June 2015; pp. 648–653. [Google Scholar]
  95. Elkhodr, M.; Shahrestani, S.; Cheung, H. A Smart Home Application based on the Internet of Things Management Platform. In Proceedings of the 2015 IEEE International Conference on Data Science and Data Intensive Systems, Sydney, Australia, 11–13 December 2015; pp. 491–496. [Google Scholar]
  96. Bhide, V.H.; Wagh, S. I-learning IoT: An intelligent self learning system for home automation using IoT. In Proceedings of the 2015 International Conference on Communications and Signal Processing (ICCSP), Melmaruvathur, India, 2–4 April 2015; pp. 1763–1767. [Google Scholar]
  97. Bian, J.; Fan, D.; Zhang, J. The new intelligent home control system based on the dynamic and intelligent gateway. In Proceedings of the 2011 4th IEEE International Conference on Broadband Network and Multimedia Technology, Shenzhen, China, 28–30 October 2011; pp. 526–530. [Google Scholar]
  98. Cheuque, C.; Baeza, F.; Marquez, G.; Calderon, J. Towards to responsive web services for smart home LED control with Raspberry Pi. A first approach. In Proceedings of the 2015 34th International Conference of the Chilean Computer Science Society (SCCC), Santiago, Chile, 9–13 November 2015. [Google Scholar]
  99. Hasibuan, A.; Mustadi, M.; Syamsudin, D.I.E.Y.; Rosidi, I.M.A. Design and Implementation of Modular Home Automation Based on Wireless Network, REST API and WebSocket. In Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Nusa Dua, Indonesia, 9–12 November 2015; pp. 9–12. [Google Scholar]
  100. Hernandez, M.E.P.; Reiff-Marganiec, S. Autonomous and self controlling smart objects for the future Internet. In Proceedings of the 2015 3rd International Conference on Future Internet of Things and Cloud, Rome, Italy, 24–26 August 2015; pp. 301–308. [Google Scholar]
  101. Jacobsson, A.; Boldt, M.; Carlsson, B. A risk analysis of a smart home automation system. Futur. Gener. Comput. Syst. 2016, 56, 719–733. [Google Scholar] [CrossRef] [Green Version]
  102. Jacobsson, A.; Boldt, M.; Carlsson, B. On the risk exposure of smart home automation systems. In Proceedings of the 2014 International Conference on Future Internet of Things and Cloud, Barcelona, Spain, 27–29 August 2014; pp. 183–190. [Google Scholar]
  103. Lazarevic, I.; Sekulic, M.; Savic, M.S.; Mihic, V. Modular home automation software with uniform cross component interaction based on services. In Proceedings of the 2015 IEEE 5th International Conference on Consumer Electronics Berlin (ICCE-Berlin), Berlin, Germany, 6–9 September 2015. [Google Scholar]
  104. Lee, K.M.; Teng, W.G.; Hou, T.W. Point-n-Press: An Intelligent Universal Remote Control System for Home Appliances. IEEE Trans. Autom. Sci. Eng. 2016, 13, 1308–1317. [Google Scholar] [CrossRef]
  105. Yeazell, S.C. Teaching Supplemental Jurisdiction. Indiana Law J. 1998, 74, 222–249. [Google Scholar]
  106. Miclaus, A.; Riedel, T.; Beigl, M. Computing Corner. Teach. Stat. 1990, 12, 25–26. [Google Scholar]
  107. Mittal, Y.; Toshniwal, P.; Sharma, S.; Singhal, D.; Gupta, R.; Mittal, V.K. A voice-controlled multi-functional Smart Home Automation System. In Proceedings of the 2015 Annual IEEE India Conference (INDICON), New Delhi, India, 17–20 December 2015; pp. 1–6. [Google Scholar]
  108. Moravcevic, V.; Tucic, M.; Pavlovic, R.; Majdak, A. An approach for uniform representation and control of ZigBee devices in home automation software. In Proceedings of the 2015 IEEE 5th International Conference on Consumer Electronics - Berlin (ICCE-Berlin), Berlin, Germany, 6–9 September 2015; pp. 237–239. [Google Scholar]
  109. Papp, I.; Velikic, G.; Lukac, N.; Horvat, I. Uniform representation and control of Bluetooth Low Energy devices in home automation software. In Proceedings of the 2015 IEEE 5th International Conference on Consumer Electronics - Berlin (ICCE-Berlin), Berlin, Germany, 6–9 September 2015; pp. 366–368. [Google Scholar]
  110. Ryan, J.L. Home automation. IEE Rev. 1988, 34, 355. [Google Scholar] [CrossRef]
  111. Lee, Y.-T.; Hsiao, W.-H.; Huang, C.-M.; Chou, S.-C.T. An Integrated Cloud-Based Smart Home Management System with Community Hierarchy. IEEE Trans. Consum. Electron. 2016, 62, 1–9. [Google Scholar] [CrossRef]
  112. Kanaris, L.; Kokkinis, A.; Fortino, G.; Liotta, A.; Stavrou, S. Sample Size Determination Algorithm for fingerprint-based indoor localization systems. Comput. Netw. 2016, 101, 169–177. [Google Scholar] [CrossRef]
  113. Mano, L.Y.; Faiçal, B.S.; Nakamura, L.H.V.; Gomes, P.H.; Libralon, G.L.; Meneguete, R.I.; Filho, G.P.R.; Giancristofaro, G.T.; Pessin, G.; Krishnamachari, B.; et al. Exploiting IoT technologies for enhancing Health Smart Homes through patient identification and emotion recognition. Comput. Commun. 2016, 89–90, 178–190. [Google Scholar] [CrossRef]
  114. Zanjal, S.V.; Talmale, G.R. Medicine Reminder and Monitoring System for Secure Health Using IOT. Procedia Comput. Sci. 2016, 78, 471–476. [Google Scholar] [CrossRef] [Green Version]
  115. Bhole, M.; Phull, K.; Jose, A.; Lakkundi, V. Delivering Analytics Services for Smart Homes. In Proceedings of the 2015 IEEE Conference on Wireless Sensors (ICWiSe), Melaka, Malaysia, 24–26 August 2015; pp. 28–33. [Google Scholar]
  116. Thomas, S.; Bourobou, M.; Yoo, Y. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm. Sensors 2015, 15, 11953–11971. [Google Scholar] [Green Version]
  117. Scholz, M.; Flehmig, G.; Schmidtke, H.R.; Scholz, G.H. Powering Smart Home intelligence using existing entertainment systems. In Proceedings of the 2011 Seventh International Conference on Intelligent Environments, Nottingham, UK, 25–28 July 2011; Volume 970, pp. 230–237. [Google Scholar]
  118. Jiang, S.; Peng, J.; Lu, Z.; Jiao, J. 802.11ad Key Performance Analysis and Its Application in Home Wireless Entertainment. In Proceedings of the 2014 IEEE 17th International Conference on Computational Science and Engineering, Chengdu, China, 19–21 December 2014; Volume 5, pp. 1595–1598. [Google Scholar]
  119. Technologies, I. Analyzing Social Networks Activities to Deploy Entertainment Services in HRI-based Smart Environments. In Proceedings of the 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Naples, Italy, 9–12 July 2017; pp. 1–6. [Google Scholar]
  120. Hossain, M.A.; Alamri, A.; Parra, J. Context-Aware Elderly Entertainment Support System in Assisted Living Environment. In Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), San Jose, CA, USA, 15–19 July 2013; pp. 1–6. [Google Scholar]
  121. Dooley, J.; Henson, M.; Callaghan, V.; Hagras, H.; Al-Ghazzawi, D.; Malibari, A.; Al-Haddad, M.; Al-ghamdi, A.A. A Formal Model For Space Based Ubiquitous Computing. In Proceedings of the 2011 Seventh International Conference on Intelligent Environments, Nottingham, UK, 25–28 July 2011; pp. 294–299. [Google Scholar]
  122. De Morais, W.O.; Wickström, N. A “Smart Bedroom” as an Active Database System. In Proceedings of the 2013 9th International Conference on Intelligent Environments, Athens, Greece, 16–17 July 2013; pp. 250–253. [Google Scholar]
  123. Gaikwad, P.P.; Gabhane, J.P.; Golait, S.S. Survey based on Smart Homes System Using Internet-of-Things. In Proceedings of the 2015 International Conference on Computation of Power, Energy, Information and Communication (ICCPEIC), Chennai, India, 22–23 April 2015; pp. 330–335. [Google Scholar]
  124. Samuel, S.S.I. A Review of Connectivity Challenges in IoT-Smart Home. In Proceedings of the 2016 3rd MEC International Conference on Big Data and Smart City (ICBDSC), Muscat, Oman, 15–16 March 2016; pp. 1–4. [Google Scholar]
  125. Kim, Y.; Lee, S.; Jeon, Y.; Chong, I.; Lee, S.H. Orchestration in distributed web-of-objects for creation of user-centered iot service capability. In Proceedings of the 2013 Fifth International Conference on Ubiquitous and Future Networks (ICUFN), Da Nang, Vietnam, 2–5 July 2013. [Google Scholar]
  126. Tamura, T.; Kawarada, A.; Nambu, M.; Tsukada, A.; Sasaki, K.; Yamakoshi, K. E-Healthcare at an Experimental Welfare Techno House in Japan. Open Med. Inform. J. 2007, 1–7. [Google Scholar] [CrossRef] [PubMed]
  127. Mohktar, M.S.; Sukor, J.A.; Redmond, S.J.; Basilakis, J.; Lovell, N.H. Effect of Home Telehealth Data Quality on Decision Support System Performance. Proc. Comput. Sci. 2015, 64, 352–359. [Google Scholar] [CrossRef] [Green Version]
  128. Matsuoka, K. Aware Home Understanding Life Activities. In Proceedings of the 2nd International Conference Smart Homes Health Telematics (ICOST 2004), Ames, IA, USA, 28 June–2 July 2004; Volume 14, pp. 186–193. [Google Scholar]
  129. Mori, T.; Noguchi, H.; Takada, A.; Sato, T. Sensing room environment: Distributed sensor space for measurement of human daily behavior. Trans. Soc. Instrum. Control Eng. 2006, 1, 97–103. [Google Scholar]
  130. Lee, H.; Kim, Y.-T.; Jung, J.-W.; Park, K.-H.; Kim, D.-J.; Bang, B.; Bien, Z.Z. A 24-hour health monitoring system in a smart house. Gerontechnol. J. 2008, 7, 22–35. [Google Scholar] [CrossRef]
  131. Ubihome|Mind the Gap. Available online: https://mindthegap.agency/client/ubihome (accessed on 30 November 2018).
  132. ubiHome. Available online: http://ubihome.me/Home (accessed on 30 November 2018).
  133. Oh, Y.; Lee, S.; Woo, W. User-Centric Integration of Contexts for a Unified Context-Aware Application Model. CEUR Workshop Proceeding. 2005, pp. 9–16. Available online: https://www.semanticscholar.org/paper/User-centric-Integration-of-Contexts-for-A-Unified-Oh-Lee/0b36ca616e47b66487372625df44aeb1d919fe48 (accessed on 29 November 2018).
  134. Oh, Y.; Woo, W. A Unified Application Service Model for ubiHome by Exploiting Intelligent Context-Awareness. In International Symposium on Ubiquitous Computing Systems; Springer: Berlin, Germany, 2004; pp. 192–202. [Google Scholar]
  135. Bien, Z.Z.; Park, K.-H.; Bang, W.-C.; Stefanov, D.H. LARES: An Intelligent Sweet Home for Assisting the Elderly and the Handicapped. In Proceedings of the 1st International Conference on Smart Homes and Health Telematics (Assistive Technology), 2003; pp. 151–158. Available online: https://edurev.in/studytube/LARES-An-Intelligent-Sweet-Home-for-Assisting-the-/bf43423b-0daf-42b3-bf3f-54c8fa4e2bbd_p (accessed on 29 November 2018).
  136. Minoh, M. Experiences in UKARI Project. J. Natl. Inst. Inf. Commun. Technol. 2007, 54, 147–154. [Google Scholar]
  137. Tetsuya, F.; Hirotada, U.; Michihiko, M. A Looking-for-Objects Service in Ubiquitous Home. J. Natl. Inst. Inf. Commun. Technol. 2007, 54, 175–181. [Google Scholar]
  138. Toyota Dream House PAPI. Available online: http://tronweb.super-nova.co.jp/toyotadreamhousepapi.html (accessed on 30 November 2018).
  139. Junestrand, S.; Keijer, U.; Tollmar, K. Private and Public Digital Domestic Spaces. Int. J. Hum. Comput. Stud. 2001, 54, 753–778. [Google Scholar] [CrossRef]
  140. Orpwood, R.; Gibbs, C.; Adlam, T.; Faulkner, R.; Meegahawatte, D. The Gloucester Smart House for People with Dementia—User-Interface Aspects. In Designing a More Inclusive World; Springer: Berlin, Germany, 2004; pp. 237–245. [Google Scholar]
  141. Davis, G.; Wiratunga, N.; Taylor, B.; Craw, S. Matching smarthouse technology to needs of the elderly and disabled. In Proceedings of the Workshop Proceedings of the 5th International Conference on Case-Based Reasoning, Trondheim, Norway, 23–26 June 2003; pp. 29–36. [Google Scholar]
  142. Mehrotra, S.; Dhande, R. Smart cities and smart homes: From realization to reality. In Proceedings of the 2015 International Conference on Green Computing and Internet of Things (ICGCIoT), Noida, India, 8–10 October 2015; pp. 1236–1239. [Google Scholar]
  143. myGEKKO. Available online: https://www2.my-gekko.com/en/ (accessed on 30 November 2018).
  144. MATCH–Mobilising Advanced Technologies for Care at Home. Available online: http://www.cs.stir.ac.uk/~kjt/research/match/main/main.html (accessed on 30 November 2018).
  145. The Adaptive House Boulder, Colorado. Available online: http://www.cs.colorado.edu/~mozer/index.php?dir=/Research/Projects/Adaptive house/ (accessed on 29 November 2018).
  146. Lindsey, R.; Daluiski, A.; Chopra, S.; Lachapelle, A.; Mozer, M.; Sicular, S.; Hanel, D.; Gardner, M.; Gupta, A.; Hotchkiss, R.; et al. Deep neural network improves fracture detection by clinicians. Proc. Natl. Acad. Sci. USA 2018, 115, 1–6. [Google Scholar] [CrossRef] [PubMed]
  147. Aware Home Research Initiative (AHRI). Available online: http://awarehome.imtc.gatech.edu/ (accessed on 29 November 2018).
  148. MavHome: Managing an Adaptive Versatile Home. Available online: http://ailab.wsu.edu/mavhome/ (accessed on 30 November 2018).
  149. Cook, D.J.; Youngblood, M.; Heierman, E.O.; Gopalratnam, K.; Rao, S.; Litvin, A.; Khawaja, F. MavHome: An agent-based smart home. In Proceedings of the First IEEE International Conference on Pervasive Computing and Communications, 2003. (PerCom 2003), Fort Worth, TX, USA, 26 March 2003; pp. 521–524. [Google Scholar]
  150. House_n Materials and Media. Available online: http://web.mit.edu/cron/group/house_n/publications.html (accessed on 29 November 2018).
  151. Intille, S.S. The Goal: Smart People, Not Smart Homes. In Proceedings of the International Conference on Smart Homes and Health Telematics, Belfast, Northern Ireland, 26–28 June 2006; pp. 1–4. [Google Scholar]
  152. Shafer, S.; Krumm, J.; Brumitt, B.; Meyers, B.; Czerwinski, M.; Robbins, D. The New EasyLiving Project at Microsoft Research. In Proceedings of the Joint DARPA/NIST Smart Spaces Workshop, Gaithersburg, MD, USA, 30–31 July 1998; Volume 5. [Google Scholar]
  153. Helal, A.; Mann, W. Gator Tech Smart House: A Programmable Pervasive Space. IEEE Comput. Mag. 2005, 64–74. [Google Scholar] [CrossRef]
  154. Pigot, H.; Lefebvre, B. The Role of Intelligent Habitats in Upholding Elders in Residence. WIT Trans. Biomed. Health 2003. [Google Scholar] [CrossRef]
  155. Lesser, V.; Atighetchi, M.; Benyo, B.; Horling, B.; Raja, A.; Vincent, R.; Wagner, T.; Xuan, P.; Zhang, S. A Multi-Agent System for Intelligent Environment Control; UMass Computer Science Technical Report 1998-40; University of Massachusetts: Amherst, MA, USA, 1999. [Google Scholar]
  156. CASAS–Center for Advanced Studies in Adaptive Systems. Available online: http://casas.wsu.edu/ (accessed on 29 November 2018).
  157. Ghods, A.; Caffrey, K.; Lin, B.; Fraga, K.; Fritz, R.; Schmitter-Edgecombe, M.; Hundhausen, C.; Cook, D.J. Iterative Design of Visual Analytics for a Clinician-in-the-loop Smart Home. IEEE J. Biomed. Health Inform. 2018, 2168–2194. [Google Scholar] [CrossRef] [PubMed]
  158. Yang, H.I.; Babbitt, R.; Wong, J.; Chang, C.K. A framework for service morphing and heterogeneous service discovery in smart environments. In International Conference on Smart Homes and Health Telematics; Springer: Berlin, Germany, 2012; pp. 9–17. [Google Scholar]
  159. Heinz, M.; Martin, P.; Margrett, J.A.; Yearns, M.; Franke, W.; Yang, H.I. Perceptions of technology among older adults. J. Gerontol. Nurs. 2013. [Google Scholar] [CrossRef] [PubMed]
  160. Wang, F.; Skubic, M.; Rantz, M.; Cuddihy, P.E. Quantitative gait measurement with pulse-doppler radar for passive in-home gait assessment. IEEE Trans. Biomed. Eng. 2014, 61, 2434–2443. [Google Scholar] [CrossRef] [PubMed]
  161. Rochester. Available online: https://www.rochester.edu/pr/Review/V64N3/feature2.html (accessed on 30 November 2018).
  162. Du, K.K.; Wang, Z.L.; Hong, M. Human machine interactive system on smart home of IoT. J. China Univ. Posts Telecommun. 2013, 20, 96–99. [Google Scholar] [CrossRef]
  163. Bissoli, A.L.C. Multimodal Solution for Interaction with Assistance and Communication Devices; Federal University of Espirito Santo (UFES): Vitoria, Espirito Santo, Brazil, 2016. [Google Scholar]
  164. Bangor, A.; Philip, K.; James, M. Determining what individual SUS scores mean: Adding an adjective rating scale. J. Usability Stud. 2009, 4, 114–123. [Google Scholar]
Figure 1. System overview.
Figure 2. Functional model of the eye/gaze-tracking-based control system.
Figure 3. Connectivity of the assistive system.
Figure 4. GlobalBox (gBox) of the assistive system.
Figure 5. Detection and storage of an IR command.
Figure 6. Circuit for emitting previously stored IR commands.
Figure 7. User interface.
Figure 8. Able-bodied participant testing the system in the smart home.
Figure 9. System Usability Scale (SUS) score for each of the ten SUS items.
Figure 10. Participant with disabilities testing the system at her home.
Table 1. Summary of system usage information.

Date        Start   End     Duration   Commands
09/14/2018  14:20   18:10   03:50:00   163
09/28/2018  12:51   19:49   06:58:00   131
09/20/2018  13:58   15:55   01:57:00   36
09/22/2018  15:52   17:36   01:44:00   18
10/02/2018  17:38   18:01   00:23:00   31
10/04/2018  14:15   18:04   03:49:00   88
10/05/2018  14:35   15:56   01:21:00   75
Total                       20:02:00   542
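Summaries like Table 1 can be derived directly from the remote-monitoring log, since every command issued through the interface is time-stamped: the first and last commands of a day give the start and end times, their difference gives the duration, and the count gives the number of commands. The sketch below illustrates this aggregation under the assumption of a plain list of ISO-formatted timestamps, one per command; this format is illustrative only and is not the actual gBox log format.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical log: one ISO timestamp per command sent through the interface.
log = [
    "2018-09-14T14:20:05", "2018-09-14T16:02:41", "2018-09-14T18:10:00",
    "2018-09-20T13:58:12", "2018-09-20T15:55:30",
]

# Group command timestamps by calendar day.
sessions = defaultdict(list)
for entry in log:
    t = datetime.fromisoformat(entry)
    sessions[t.date()].append(t)

# Print one summary row per day: date, start, end, duration, command count.
for day, times in sorted(sessions.items()):
    start, end = min(times), max(times)
    print(day.strftime("%m/%d/%Y"), start.time(), end.time(),
          end - start, len(times))
```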
Table 2. Hourly distribution of the commands throughout the days of use of the assistive system.

From  To    Day 1  Day 2  Day 3  Day 4  Day 5  Day 6  Day 7  Percentage
00    12    0      1      0      0      0      0      0      0%
12    13    0      1      0      0      0      0      0      0%
13    14    0      62     7      0      0      0      0      13%
14    15    94     7      22     0      0      75     31     42%
15    16    57     21     7      13     0      3      44     27%
16    17    0      6      0      1      0      0      0      1%
17    18    0      0      0      4      22     8      0      6%
18    19    12     3      0      0      5      0      0      4%
19    20    0      31     0      0      0      0      0      6%
20    21    0      0      0      0      0      0      0      0%
21    22    0      0      0      0      4      2      0      1%
22    00    0      0      0      0      0      0      0      0%
Total       163    131    36     18     31     88     75     542
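Likewise, the hourly distribution in Table 2 amounts to binning the same time-stamped commands by hour of the day and expressing each bin as a share of the 542 recorded commands. A minimal sketch of that binning is given below, under the same assumed timestamp-list format as the previous example.

```python
from collections import Counter
from datetime import datetime

# Hypothetical command timestamps (same assumed format as the previous sketch).
log = ["2018-09-14T14:20:05", "2018-09-14T14:47:10", "2018-09-14T15:03:55",
       "2018-10-05T15:30:00"]

# Count commands per hour of day across all days of use.
hours = Counter(datetime.fromisoformat(t).hour for t in log)
total = sum(hours.values())

# Report each non-empty one-hour bin with its percentage of all commands.
for hour in range(24):
    count = hours.get(hour, 0)
    if count:
        print(f"{hour:02d}-{(hour + 1) % 24:02d}: {count} commands "
              f"({100 * count / total:.0f}%)")
```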
