Driving Control of a Powered Wheelchair Considering Uncertainty of Gaze Input in an Unknown Environment

This paper describes a motion control system for a powered wheelchair that uses eye gaze in an unknown environment. Recently, new Human-Computer Interfaces (HCIs) that replace joysticks have been developed for people with upper-body disabilities. In this paper, movement of the eyes is used as an HCI. The wheelchair control system proposed in this study aims to achieve an operation in which the passenger simply gazes towards the direction he or she wants to move in an unknown environment. Such an operating method facilitates easy and accurate movement of the wheelchair even in complicated environments comprising several passages on the same side. The system integrates gaze detection and environment recognition in real time using the fuzzy set theory: by integrating the information on the detected passages with the gaze direction, the wheelchair moves to the passage the passenger gazes towards among several candidates. The design also accounts for the uncertainty of gaze input by using the value of the gaze detection accuracy, and obstacle avoidance is achieved by integrating obstacle information. This motion control system supports safe and smooth movement of the wheelchair by automatically calculating its direction of motion and velocity so as to avoid obstacles while moving in the gaze direction of the passenger. The effectiveness of the proposed system is demonstrated through experiments in a real environment.


Introduction
While a powered wheelchair is an important assistive product for a person who is physically handicapped, a person with an upper-body disability cannot use a joystick. To this end, Human-Computer Interfaces (HCIs) that replace joysticks have recently been developed. Examples of these HCIs include voice control [1], brain-machine interfaces (BMI) [2], facial muscles [3], and eye blinks [4]. In this study, movement of the eyes is used as an HCI. The muscles around the eyes are known to retain their functionality for a long time, so this HCI is likely to remain available even if a person cannot move his or her mouth, face, or neck.
There are several conventional methods that use the movement of the eyes as an HCI. Al-Haddad et al. proposed an electrooculography (EOG)-based control algorithm for target navigation [5-7]; however, their technique necessitated the attachment of surface electrodes around the eyes of the operator. Two modes of wheelchair control were proposed: manual and automatic. In the manual mode, the passenger inputs a turn-right or turn-left signal by looking towards the right or left. Likewise, one may move forward or stop by looking up or down, respectively. In the automatic mode, the user looks towards a desired destination and blinks (right to start and left to stop) to start navigating the wheelchair to the target position. However, operation in this mode is constrained to a well-defined and known environment. Pingali and colleagues proposed a method that inputs turn-right and turn-left signals by moving the gaze direction to the right and left, and moves forward and stops by moving the gaze direction up and down, using headgear with EOG electrodes [8]. Matsumoto proposed a method in which the gaze direction is detected by processing images from two charge-coupled-device cameras [9]. The self-position and the environment are recognized using a laser range finder (LRF) and a map created in advance; consequently, the gaze position of the passenger in the environment is estimated. However, it is difficult for such a wheelchair to move in an unknown environment because a map created in advance is needed.
Conventional methods of controlling wheelchair systems in unknown environments also exist. A wheelchair exploring an unknown environment requires real-time map generation and path planning to ensure accurate obstacle-avoiding navigation. This is accomplished by recognizing the surrounding environment through the use of electronic sensors. Examples of these systems include those described in [10,11], NavChair developed by Simpson et al. [12], SENARIO (Sensor-Aided Intelligent Wheelchair Navigation) developed by Katevas et al. [13], and Robchair developed by Pires et al. [14].
This paper proposes an eye-gaze-controlled wheelchair system for navigating through unknown environments. Mohamad and his colleagues proposed a system for controlling wheelchairs by eye gaze in unknown environments [15]. Their system comprised eye-tracking glasses, a depth camera to capture the geometry of the ambient space, and a set of ultrasound and infrared sensors to detect obstacles. The passenger provides inputs to the wheelchair system to move forward, stop, and turn left or right by looking up, down, left, or right, respectively, along arrows displayed on a laptop placed in front of the passenger.
However, this method was found to be ineffective when dealing with complicated environments, because of the constraints associated with the input directions. This paper refers to complicated environments as those in which several passages exist on the same side (either left or right), as shown in Figure 1. Such environments render the navigation operation difficult and complicated.
In view of the above difficulties, the wheelchair control system proposed in this study aims to achieve an operation in which the passenger actually gazes towards the direction in which he or she wishes to move in an unknown environment. Implementation of such an operating method facilitates easy and accurate movement of the wheelchair even in complicated environments comprising passages on the same side.
The proposed system utilizes an eye tracker and an RGB (Red, Green, Blue) camera for detecting the passenger's gaze, and an LRF for environment recognition. All information captured by the sensors is integrated in real time using the fuzzy set theory, thereby facilitating accurate detection of the passenger's gaze direction along with the presence of obstacles and passages in the actual environment. By integrating the information on the detected passages with the gaze direction, the wheelchair moves to the passage the passenger gazes towards among several candidates. Moreover, the gaze direction is filtered to suppress unnatural movement caused by the observation noise of the two eye cameras. In addition, only information with high gaze detection accuracy is used, and the value of the detection accuracy is used in designing the fuzzy set theory. In this way, a system that considers the uncertainty of gaze input is designed. Obstacle avoidance is achieved by integrating obstacle information, and the correct speed and direction of motion to avoid obstacles while moving along the gaze direction of the passenger are then determined. This control system is an extension of the conventional method [4] and ensures safe and smooth movement of the wheelchair.
To demonstrate the effectiveness of the proposed method, we performed real-life experiments in a complicated environment. A long-distance movement experiment was also carried out.

Sensors
Figure 2 shows the sensors used in the proposed motion control system of the wheelchair. The eye tracker (Pupil Labs Pupil) [16] detects the passenger's gaze point in the environment. It has a world camera and two eye cameras. An RGB camera (Microsoft Kinect for Windows v2) (Microsoft Co., Redmond, WA, USA) [17] records images in the RGB model (defined as the RGB camera image). By matching the RGB camera image to that captured by the world camera, the gaze point in the passenger's environment is detected. An LRF (Hokuyo UST-10LX) (Hokuyo Automatic Co., Ltd., Osaka, Japan) [18] is used to detect passages and obstacles.
The angle of view of the RGB camera is in the range of −35° to 35°, with the forward direction set along the zero-degree direction. The eye tracker can obtain images in the RGB model (defined as the world camera image), as shown in Figure 2. It can also obtain the 2D coordinates of the gaze point in the world camera image by detecting a pupil using the two eye cameras. The angle of view of the world camera is in the range of −50° to 50°, with the forward direction set along the zero-degree direction. The LRF can acquire distance data in the range of −135° to 135°, up to 10 m, with the forward direction set along the zero-degree direction and an angular resolution of 0.25°. In this study, we acquire distance data in the range of −90° to 90° with an angular resolution of 1°.
A two-dimensional coordinate system is used in the RGB camera image, with the origin located at the top-left corner of the screen; it is defined as the RGB camera coordinate system (x c , y c ). This image size is 1920 × 1080 pixels. The two-dimensional coordinate system in the world camera image, with the origin located at the top-left corner of the screen, is defined as the world camera coordinate system (x e , y e ). This image size is 1280 × 720 pixels. Furthermore, the two-dimensional coordinate system with the origin located at the wheelchair in the environment is defined as the wheelchair coordinate system (X w , Y w ).



System Flow
The control system proposed in this study integrates the input gaze direction and environment information and determines the correct speed and direction of motion in real time. Figure 3 shows the system flow. The control system has four stages: input, environment recognition, integration by the fuzzy set theory, and output. First, the gaze point (x e g , y e g ) and environmental information are obtained as input. In the case of the eye-gaze input, the passenger is instructed to gaze in the intended direction of motion; as a result, the wheelchair moves in that direction.
Second, the environment is recognized based on the environmental information obtained from the LRF.In this step, obstacles (X w o , Y w o ) and passages (X w p , Y w p ) through which the wheelchair can move are detected at approximately the same time.
Finally, through use of this information, the direction of motion is determined based on the fuzzy set theory with due consideration of the direction in which the passenger wishes to move whilst ensuring safety and avoidance of obstacles.
Section 2.2 describes the method to obtain the gaze direction as input, while Section 2.3 describes the passage detection method using the LRF. Section 2.4 describes the design of the motion control system based on the fuzzy set theory.

Obtaining the Gaze Direction ϕ g
Since the gaze direction is later integrated with the environmental information acquired by the LRF, this step describes the method to obtain the gaze direction ϕ g with the front of the wheelchair coordinate system aligned at 0°. The eye tracker is attached to the head of the passenger and moves freely relative to the wheelchair. This necessitates the use of not only the information recorded by the eye tracker but also that captured by the RGB camera fixed to the wheelchair. First, the gaze point in the world camera coordinate system (x e g , y e g ) is converted to the RGB camera coordinate system (x c g , y c g ). Subsequently, the gaze point in the RGB camera coordinate system is converted into the gaze direction ϕ g relative to the wheelchair coordinate system. The gaze direction is then successively obtained and filtered. This makes it possible to acquire gaze information with higher reliability by excluding those movements of the passenger that do not qualify as gaze motion.
Obtaining the Gaze Point in the World Camera Coordinate System (x e g , y e g )

First, the coordinates of the gaze point in the world camera coordinate system are acquired by the eye tracker. The eye tracker obtains the coordinates of the gaze point by image-processing the position of the pupil in the left- and right-eye images captured by the eye cameras. The precision with which the pupil can be detected differs depending on how the pupil appears. The pupil detection accuracy S pupil is represented by a value between 0 and 1. In this study, only gaze point information with S pupil > 0.6 is used, yielding reliable gaze information. The threshold value was determined through preliminary verification. S pupil is one of the two values affecting the gaze detection accuracy.

Obtaining the Gaze Point in the RGB Camera Coordinate System (x c g , y c g )

The world camera coordinates of the gaze point are converted into the RGB camera coordinate system to obtain the gaze point in the RGB camera image. To this end, an image-matching technique called template matching is performed, and the coordinates of the gaze point are translated using the results of the template matching. This technique is used because of its fast processing speed.

Template matching checks whether patterns similar to those in a template image are present in a whole image. Examples of template matching are shown in Figure 4. The similarity S NCC (x c , y c ) is calculated from the luminance values of the images while sequentially moving the template image over the whole image to identify the region with the highest similarity. The similarity is determined using the Normalized Cross-Correlation (NCC), calculated by Equation (1), where T(i, j) represents the luminance value of the template image and I(i, j) that of the whole image. The closer the NCC value is to 1, the higher the similarity.
S NCC (x c , y c ) = Σ j Σ i I(x c + i, y c + j) T(i, j) / √( Σ j Σ i I(x c + i, y c + j)² × Σ j Σ i T(i, j)² ) (1)

In the proposed study, a part of the world camera image is cut out as the template image, and the RGB camera image is considered the whole image. In order to speed up the processing, the size of the RGB camera image is compressed from 1920 × 1080 pixels to 384 × 216 pixels, while that of the world camera image is reduced from 1280 × 720 pixels to 344 × 194 pixels. From the world camera image, a region of 172 × 96 pixels, about half the size in both height and width, is cut out, centered at the gaze point. Figure 4a-c depicts examples of the RGB camera image, the gaze point captured in the world camera image, and the template image at the crossroad.
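The NCC search described above can be sketched as follows. This is a minimal, unoptimized reference implementation of Equation (1) in pure NumPy (a real-time system would use an optimized routine such as OpenCV's matchTemplate); the function names and the brute-force scan are our own illustration, not the authors' code.

```python
import numpy as np

def ncc(whole, template, x, y):
    """Normalized cross-correlation (Equation (1)) between `template`
    and the patch of `whole` whose top-left corner is (x, y)."""
    h, w = template.shape
    patch = whole[y:y + h, x:x + w].astype(float)
    t = template.astype(float)
    denom = np.sqrt((patch ** 2).sum() * (t ** 2).sum())
    return (patch * t).sum() / denom if denom > 0 else 0.0

def match_template(whole, template):
    """Slide the template over the whole image and return the top-left
    coordinates (x_NCC, y_NCC) of the most similar region and its score."""
    H, W = whole.shape
    h, w = template.shape
    best, best_xy = -1.0, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            s = ncc(whole, template, x, y)
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy, best
```

By the Cauchy-Schwarz inequality the score is at most 1, with equality exactly when the patch is proportional to the template, which is why a threshold close to 1 (0.9 in Section 2.2) indicates a trustworthy match.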
Template matching is performed using the above images. The pixel coordinates at which the similarity S NCC (x c , y c ) is highest are defined as (x c NCC , y c NCC ). The gaze point in the world camera image is then transformed into the RGB camera image using Equation (2).
(x c g , y c g ) = (x e g , y e g ) + (x c NCC , y c NCC ) (2)

In this study, the matching result is used only when (x c NCC , y c NCC ) has moved by less than 50 pixels from the value at the previous time step, as shown in Equation (3).

|x c NCC (t) − x c NCC (t − 1)| < 50, |y c NCC (t) − y c NCC (t − 1)| < 50 (3)

Furthermore, only matching results with S NCC (x c NCC , y c NCC ) > 0.9 are used, which yields more reliable gaze information. S NCC (x c NCC , y c NCC ) is the other of the two values affecting the gaze detection accuracy.

Determining Gaze Direction in the Wheelchair Coordinate System ϕ g
The gaze-point coordinates in the RGB camera image (x c g , y c g ) are converted into the gaze direction ϕ g in the wheelchair coordinate system. Since the LRF and the RGB camera are located at the same position in two-dimensional coordinates, the angle between the RGB camera and the gaze point is identical to that between the LRF and the gaze point; this defines the gaze direction ϕ g in the wheelchair coordinate system. From the geometrical relationship depicted in Figure 5, the gaze direction ϕ g is obtained from Equation (4).

In the proposed study, determination of the gaze direction is performed in real time; the total sampling time is approximately 75 ms. Filtering is performed to exclude unnatural gaze movement, using a low-pass filter that blocks signals above a specific frequency (1/6 Hz in this case). Smooth-pursuit eye movement, i.e., the eye movement that occurs while tracking a moving visual object, is believed to be capable of tracking visual targets moving at speeds of the order of 30°/s [19]. For the purpose of this study, eye movement speeds of the order of 60°/s or higher are considered unnatural and are therefore discarded. In line with the above consideration, a fourth-order low-pass filter, described by Equation (5), is designed with a cutoff frequency of 1/6 Hz and a sampling frequency of 1/0.075 Hz. The order of the low-pass filter was determined through preliminary verification.

ϕ̃ g (t) = −0.0158ϕ g (t) + 0.2502ϕ g (t − 1) + 0.5319ϕ g (t − 2) + 0.2502ϕ g (t − 3) − 0.0158ϕ g (t − 4) (5)

In addition, although the control cycle of eye-gaze detection is 75 ms, when the pupil detection accuracy S pupil or the image-matching accuracy S NCC is less than its threshold value (Sections 2.2.1 and 2.2.2), the sample is excluded as gaze information with low reliability. In that case, filtering is carried out using the previous value, ϕ g (t) = ϕ g (t − 1). The pupil detection accuracy S pupil falls below the threshold value, for example, when the eyes are not opened properly. The image-matching accuracy S NCC falls below the threshold value when an environment with few features is in front of the wheelchair.
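The filter of Equation (5), including the reuse of the previous value for low-reliability samples, can be sketched as a short FIR routine. The paper specifies only the five coefficients and the substitution rule; the buffering and the zero-padding during the first few samples are our assumptions.

```python
from collections import deque

# FIR coefficients of the fourth-order low-pass filter in Equation (5)
# (cutoff 1/6 Hz, sampling period 75 ms); they sum to ~1, so a constant
# gaze direction passes through essentially unchanged.
COEFFS = [-0.0158, 0.2502, 0.5319, 0.2502, -0.0158]

class GazeFilter:
    """Applies Equation (5) to successive gaze-direction samples.
    When a sample is rejected (S_pupil or S_NCC below threshold),
    call update(None) to reuse the previous value, phi_g(t) = phi_g(t-1)."""

    def __init__(self):
        self.history = deque(maxlen=len(COEFFS))  # newest first

    def update(self, phi_g):
        if phi_g is None:  # low-reliability sample: repeat last value
            phi_g = self.history[0] if self.history else 0.0
        self.history.appendleft(phi_g)
        # Zero-pad until five samples have been observed (start-up assumption).
        padded = list(self.history) + [0.0] * (len(COEFFS) - len(self.history))
        return sum(c * v for c, v in zip(COEFFS, padded))
```

Feeding a constant direction of 10° yields a filtered value of about 10° once the buffer is full, while a sudden saccade-like jump is attenuated over several 75 ms cycles.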
In this manner, relevant information concerning gaze direction is obtained.

Passage Detection by LRF
The passage detection by the LRF extends the conventional method [4]. The conventional method aims to provide an algorithm applicable to a real environment and to detect passages through which a wheelchair can move, even in environments with passages and obstacles of various shapes. In particular, obstacles are grouped from the depth information of an LRF, and the distances between obstacles are calculated to detect a gap for wheelchair passage (X w p , Y w p ).


Adaptation to Barrier-Free Environment
In the conventional method, a passage width of 1.2-2.5 m is assumed, and an LRF with a range of 4 m is used. However, in recent years, the construction of barrier-free buildings such as hospitals and welfare facilities has increased, and widening the passage widths of public facilities is now a requirement [20]. The Ministry of Health, Labour and Welfare, through the Social Security Council (Medical Subcommittee), requires that medical facilities have a passage width of at least 2.7 m for those undergoing prolonged medical care [21].
Therefore, in this study, the maximum value of the passage width and the LRF range were set to w max = 3.5 m and 10 m, respectively, so that passage detection can be performed even in a barrier-free environment such as a hospital.
Figure 6a shows the result of passage detection, in the wheelchair coordinate system, for the environment shown in Figure 1 using the LRF. This environment includes a passage measuring more than 2.7 m in width. (X w p , Y w p ) represents the center coordinates of the detected passage. Furthermore, we confirmed that a passage can be detected at a distance of about 8 m from the wheelchair.
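The grouping-and-gap idea behind the passage detection can be sketched as follows. The paper does not publish the grouping thresholds, so the near-obstacle range R_NEAR and minimum passable width W_MIN below are illustrative assumptions; only w max = 3.5 m comes from the text.

```python
import math

W_MAX = 3.5    # maximum passage width [m] (Section 2.3)
W_MIN = 0.8    # assumed minimum width the wheelchair fits through
R_NEAR = 5.0   # assumed range within which scan points count as obstacles

def detect_passages(ranges, angle_min_deg=-90.0, step_deg=1.0):
    """Sketch of gap detection from one LRF scan (forward = 0 deg).
    Points closer than R_NEAR are treated as obstacles and grouped by
    index continuity; the Euclidean distance between the end of one
    group and the start of the next is a passage candidate, and its
    midpoint is the passage centre (X_p, Y_p)."""
    pts = [None] * len(ranges)
    obstacle_idx = []
    for i, r in enumerate(ranges):
        a = math.radians(angle_min_deg + i * step_deg)
        pts[i] = (r * math.cos(a), r * math.sin(a))
        if r < R_NEAR:
            obstacle_idx.append(i)
    # group consecutive obstacle indices
    groups = []
    for i in obstacle_idx:
        if groups and i == groups[-1][1] + 1:
            groups[-1][1] = i
        else:
            groups.append([i, i])
    passages = []
    for (s0, e0), (s1, e1) in zip(groups, groups[1:]):
        (x0, y0), (x1, y1) = pts[e0], pts[s1]
        width = math.hypot(x1 - x0, y1 - y0)
        if W_MIN <= width <= W_MAX:
            passages.append(((x0 + x1) / 2, (y0 + y1) / 2))
    return passages
```

For a wall 2 m ahead with an opening spanning roughly ±15° of the scan, the sketch reports a single passage centre directly in front of the wheelchair.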


Interpolation of Passage Detection
In this study, we interpolate the passage. In the conventional method, passage detection is performed at each time point; however, there are moments when passage detection fails. Therefore, in this study, we save the center coordinates of a passage once it is detected. The system is assumed to operate in an unknown environment without a pre-built environmental map, so the coordinates of the passage center are saved as absolute coordinates with the movement start position as the origin. The number of times passage center coordinates are detected within 0.7 m of the saved coordinates is then counted. When this count exceeds 400, the passage center coordinates are assumed to be reliable. Subsequently, if no passage center within 0.7 m of those coordinates is detected, the passage center coordinates (X̂ w p , Ŷ w p ) are interpolated at that position. By interpolating the passages in this manner, more accurate passage detection is achieved. Figure 6b shows the result of interpolating the passage center coordinates when turning right; the interpolated center coordinates are marked with a red circle.
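The bookkeeping described above can be sketched with a small class. The 0.7 m merge radius and the 400-detection threshold come from the text; the data layout and the class interface are our assumptions.

```python
import math

MERGE_RADIUS = 0.7     # m, from Section 2.3
RELIABLE_COUNT = 400   # detections before a saved centre is trusted

class PassageMemory:
    """Saves detected passage centres in absolute coordinates (origin at
    the movement start position), counts repeated detections within
    MERGE_RADIUS, and interpolates a trusted centre when detection
    momentarily drops out."""

    def __init__(self):
        self.centres = []  # each entry: [x, y, detection_count]

    def observe(self, detections):
        """`detections` is the list of (X_p, Y_p) found this cycle.
        Returns the detections plus interpolated centres for trusted
        passages that were not re-detected this cycle."""
        matched = set()
        for (x, y) in detections:
            for k, c in enumerate(self.centres):
                if math.hypot(x - c[0], y - c[1]) <= MERGE_RADIUS:
                    c[2] += 1          # repeated detection of a known centre
                    matched.add(k)
                    break
            else:
                self.centres.append([x, y, 1])  # new candidate centre
        interpolated = [(c[0], c[1]) for k, c in enumerate(self.centres)
                        if k not in matched and c[2] > RELIABLE_COUNT]
        return list(detections) + interpolated
```

After a passage has been seen more than 400 times, a cycle in which the LRF momentarily misses it still yields the saved centre, which is the interpolation marked with a red circle in Figure 6b.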

Design of Motion Control System Based on the Fuzzy Set Theory
The fuzzy set theory integrates the gaze detection input and the information on the surroundings to determine the direction and speed of the wheelchair: by integrating the information on the detected passages with the gaze direction, the wheelchair moves to the passage the passenger gazes towards among several candidates, and obstacle avoidance is achieved by integrating obstacle information. By creating membership functions (MFs) based on various rules and integrating them using the fuzzy set theory, an output satisfying each requirement becomes possible. An MF is a function obtained by plotting the relative angle of the wheelchair on the horizontal axis and the grade of each direction on the vertical axis; a high grade indicates the passenger's desire to move in a particular direction. In this study, the MF µ g for moving in the gaze direction, the MF µ p for moving in the passage direction, and the MF µ o for avoiding obstacles are created and integrated by the fuzzy set theory to determine the speed and direction of motion.

MF for Gaze Direction µ g
In order to move the wheelchair along the gaze direction, a normally distributed MF µ g is created, as depicted in Figure 7, with the vertex of the MF placed at the gaze direction ϕ g . Since the spread of a normal distribution curve is determined by its standard deviation, the standard deviation was set to 20° through preliminary verification. The higher the value of the vertex grade µ top g , the more the wheelchair tends to move along the gaze direction. Therefore, using the pupil-detection accuracy S pupil and the similarity S NCC of the template matching, calculated during acquisition of the gaze direction in Section 2.2, Equation (6) for calculating the vertex grade is deduced. As a result, when the gaze direction is highly reliable the vertex grade is large, and when it has low reliability the vertex grade is small.
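The gaze MF can be sketched as follows. The 20° standard deviation and the Gaussian shape come from the text; the paper only states that S pupil and S NCC enter Equation (6), so taking their product as the vertex grade is our assumption.

```python
import math

SIGMA = 20.0  # deg, standard deviation of the gaze MF (Section 2.4)

def mf_gaze(phi, phi_g, s_pupil, s_ncc):
    """Normally distributed MF mu_g over the wheelchair-relative angle
    `phi` [deg], with its vertex at the gaze direction phi_g.
    The vertex grade mu_top_g grows with gaze reliability; here it is
    assumed to be the product s_pupil * s_ncc (both in [0, 1])."""
    mu_top = s_pupil * s_ncc
    return mu_top * math.exp(-((phi - phi_g) ** 2) / (2 * SIGMA ** 2))
```

With S pupil = 0.9 and S NCC = 0.95 the grade peaks at 0.855 in the gaze direction and falls off towards neighbouring angles, so an unreliable gaze lowers the whole curve and weakens its influence on the integrated output.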


MF for Passage Direction µ p
In order to move the wheelchair in the passage direction, a triangular MF µ p is created, as depicted in Figure 8, with the MF vertex placed along the direction of the center of the passage, ϕ p . The base of the triangle was set to 120° through preliminary verification. The vertex grade is 1. To avoid a sudden start, its value gradually increases from the beginning of movement.


MF for Obstacles µ o
To avoid obstacles, MF µ o is created.As shown in Figure 9, this is an MF for avoiding obstacles, and it is created by a geometrical relationship with an obstacle.
The concave-type MF µ o suppresses the grade in the contact direction with the obstacle, ϕ o . This makes it possible to move while maintaining the observation point and distance d th . The grade a, which is the suppressed portion, is calculated using Equation (7) according to the distance between the wheelchair and the obstacle, l i , and the distance required for avoidance, l th . The l th parameter suppresses the grade when the obstacle is within close range, but not when it is far.
The margin with obstacles is d th = 0.3 m. As l th increases, safety improves; however, obstacles that the passenger intends to approach are then avoided unnecessarily. For this reason, we set l th = 4.0 m in this study.

First, the MFs µ g , µ p , and µ o are integrated to obtain the combined MF µ mix , as shown in Equation (8). Second, based on the grade of the largest value of µ mix , the moving direction ϕ out and the speed v out of the wheelchair are calculated from Equations (9) and (10).

ϕ out = arg max(µ mix (ϕ)) (9)

v out = µ mix (ϕ out ) · v max (10)

v max is the maximum speed of the wheelchair, and in this study we set v max = 0.5 m/s. Figure 10 shows the results of the integration process. The gaze and passage directions are set as the target direction, and the wall is used as an obstacle to suppress the grade in that direction; finally, the moving direction ϕ out is determined.

Outline of Experiment
Experiments using a powered wheelchair were performed in a complicated environment with multiple passages on the same side, as shown in Figure 1, to verify the effectiveness of the proposed system. Three volunteers (two men and one woman, mean age 23.3 ± 0.47 years) were recruited as subjects. They were instructed to gaze towards the direction in which they wanted to turn. We confirmed that they could select either the front or the back passage on the right side and turn right. Section 3.2 describes the details of these experiments.

Also, a long distance movement experiment, as shown in Figure 11, was carried out. One volunteer (a 23-year-old man) was recruited as a subject for this study. He was instructed to gaze towards the direction in which he wanted to turn. We confirmed that he could turn left once, turn right once, and move a long distance. Section 3.3 describes the details of this experiment. The maximum speed of the wheelchair was set to 0.50 m/s. In addition, the control cycle for obtaining the gaze direction is about 75 ms, and the control cycle of the LRF is about 25 ms.

Verification of the Experiments in the Complicated Environment
Figure 12 depicts the operating state of the experiment for one subject-wheelchair trajectory, time history of gaze direction ϕ g , direction of movement ϕ out , and speed v out -when the subject performed a right turn at the front and back passages. Figures 13 and 14 demonstrate the gaze detection results, the state of the experiment, and the result obtained from integration of the MFs based on environmental recognition and the fuzzy set theory when the subject performed a right turn at the front and back passages. Figure 15 shows the details of Figure 13(d-3,e-3,d-4). Figure 16 shows the details of Figure 14(d-3,e-3,d-4,e-4). Videos S1-1 and S2-1 show the state of the experiment, S1-2 and S2-2 show the environmental recognition, and S1-3 and S2-3 show the MFs when the subject performed a right turn at the front and back passages.
We confirmed that the gaze and passage were able to be detected at each time in the real environment. In section A in Figure 13a, the subject is gazing at the front passage; as a result, the direction of movement changes to the right, and he turned right at the front passage. Figure 13a,b show that the subject is gazing near the direction ϕ p5 , which is the front passage, but because the wall is approaching, the output direction ϕ out is determined to the left of the direction ϕ p5 . Figure 13c,d show that the subject is gazing near the direction ϕ p4 , which is the front passage, and since there is no wall in that direction, the output direction ϕ out is determined near the direction ϕ p4 .

In section B in Figure 13b, the subject is gazing at the back passage; as a result, the direction of movement goes in a straight line, and he did not turn right at the front passage. Figure 15a,b show that the subject is gazing near the direction ϕ p3 , which is the back passage, but because the wall is approaching, the output direction ϕ out is determined to the left of the direction ϕ p3 . In section C, the subject is gazing at the back passage; as a result, the direction of movement changes to the right, and he turned right at the back passage. Figure 15c,d show that the subject is gazing near the direction ϕ p1 , which is the back passage, and since there is no wall in that direction, the output direction ϕ out is determined near the direction ϕ p1 .
These results confirm the ability of a passenger to operate the wheelchair by merely gazing towards the direction in which he or she wishes to move in an unknown environment.
Simultaneously, the effectiveness of the proposed system during operation in a complex environment is also confirmed. Furthermore, it was confirmed by multiple subjects.

Verification of the Long Distance Movement Experiment
Figure 17 shows the operating state of the experiment for the subject-wheelchair trajectory, time history of gaze direction ϕ g , direction of movement ϕ out , and speed v out -when the subject performed a long distance movement. Figure 18 demonstrates the gaze detection results, the state of the experiment, and the result obtained from integration of the MFs based on environmental recognition and the fuzzy set theory when the subject performed a long distance movement. Video S3-1 shows the state of the experiment, S3-2 shows the environmental recognition, and S3-3 shows the MFs when the subject performed the long distance movement.
We confirmed that the gaze and passage were able to be detected at each time in the real environment. First, in sections (1) to (4) in Figures 17a and 18a,b, the subject is gazing in the left-turn direction; as a result, the direction of movement changes to the left in Figure 17b, and he turned left. Second, in sections (6) to (9) in Figures 17a and 18a,b, the subject is gazing in the right-turn direction; as a result, the direction of movement changes to the right in Figure 17b, and he turned right. Finally, in sections (10) and (11) in Figures 17a and 18a,b, the subject is gazing in the straight direction; as a result, the direction of movement changes to the straight direction in Figure 17b, and he went straight.
This result confirms the ability of a passenger to operate the wheelchair by just gazing towards the direction in which he wants to move for a long distance.

Conclusions
The study focuses on the design of a wheelchair motion-control system operated by a passenger gazing towards the direction in which he or she wishes to move in an unknown environment. Implementation of such an operating method facilitates easy and accurate movement of the wheelchair even in complicated environments comprising passages on the same side, facilitating selection of and movement along front and back passages. The proposed system employs an eye tracker and an RGB camera for detecting the passenger's gaze, and an LRF for environment recognition. The aggregate sensor information is integrated in real time using the fuzzy set theory. In the fuzzy set theory, we achieve movement toward the passage that the passenger gazes at, among multiple passages, by integrating the passage and gaze information. Moreover, we achieve obstacle avoidance by integrating the obstacle information. As a result, it is possible to detect the gaze direction of a passenger along with the obstacles and passages that exist in the actual environment. Subsequently, the correct speed and direction of motion for avoiding these obstacles are determined while moving along the passenger's gaze direction. The effectiveness of the proposed method was verified by performing an experiment involving an actual HCI in a real operating environment.
In future endeavors, the authors intend to perform experiments involving various age groups and examine whether there exists a difference in gaze movement depending on the subject's age. In addition, they intend to verify the system's effectiveness in dynamic real-life environments involving other people moving in the scene.


Figure 2 .
Figure 2. (a) Sensor configuration; (b) The range of sensors; (c) RGB camera image obtained by RGB camera; (d) Environment recognition obtained by LRF; (e) World camera image obtained by Eye tracker; (f) Eye camera image obtained by Eye tracker.

highest similarity. Here, determination of the similarity utilizes the Normalized Cross-Correlation (NCC), which is calculated using Equation (1). T(i, j) represents the luminance value of the template image, while I(i, j) represents that of the whole image. The closer the NCC value is to 1, the higher the similarity.
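The similarity measure described here can be sketched as follows. This is a minimal illustration of one common form of NCC (the zero-mean variant) between a template T and an equally sized image patch I; the exact form of Equation (1) is not reproduced in this excerpt, and the function name and arguments are illustrative.

```python
import numpy as np

def ncc(template, patch):
    """Return the NCC score between a template and an equally sized patch.
    Values close to 1 indicate high similarity."""
    # Subtract each image's mean luminance before correlating (zero-mean NCC).
    t = template.astype(float) - template.mean()
    p = patch.astype(float) - patch.mean()
    denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
    if denom == 0:
        return 0.0  # flat (zero-variance) images carry no similarity signal
    return float((t * p).sum() / denom)
```

In template matching, this score is evaluated at every candidate position in the whole image, and the position with the highest score is taken as the match; as noted later in the text, only matches above a threshold (0.9) are kept to obtain reliable gaze information.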

Figure 4 .
Figure 4. Example of determination of gaze-point coordinates: (a) RGB camera image (whole image); (b) Gaze point in the world camera coordinate system; (c) Template image; (d) Checking similar patterns; (e) Result obtained from template-matching exercise; (f) Gaze point in the RGB camera coordinate system.


Figure 4d-f show examples of the result obtained via template matching and the gaze point in the RGB camera image at the crossroad. Through use of only the matching results with S NCC (x c NCC , y c NCC ) > 0.9, we obtained more reliable gaze information. This value S NCC (x c NCC , y c NCC ) is the other of the two values affecting the gaze detection accuracy.

Figure 5 .
Figure 5. Determination of gaze direction ϕ g .

Figure 6 .
Figure 6.Passage detection by laser range finder (LRF): (a) Result of passage detection; (b) Result of interpolation of the passage.


Figure 11 .
Figure 11.The long distance movement environment.


Figure 12 .
Figure 12. Results obtained when subjects executed a right turn on the front (1) and back (2) passages: (a) wheelchair trajectory; (b) gaze direction ϕ g ; (c) direction of movement ϕ out ; (d) Speed v out .

Figure 17 .
Figure 17. Results obtained when the subject executed a long distance movement: (a) wheelchair trajectory; (b) gaze direction ϕ g ; (c) direction of movement ϕ out ; (d) Speed v out .
depicts examples of the RGB camera image, gaze point captured in the world camera image, and the template image in the crossroad.Template matching is performed using the above images.The pixel coordinate when the