Search Results (10)

Search Parameters:
Keywords = wearable gaze-tracker

12 pages, 2183 KiB  
Article
Evaluation of an Eye-Tracking-Based Method for Assessing the Visual Performance with Progressive Lens Designs
by Pablo Concepcion-Grande, Eva Chamorro, José Miguel Cleva, José Alonso and Jose A. Gómez-Pedrero
Appl. Sci. 2023, 13(8), 5059; https://doi.org/10.3390/app13085059 - 18 Apr 2023
Cited by 3 | Viewed by 3492
Abstract
Due to the lack of sensitivity of visual acuity (VA) measurement to quantify differences in visual performance between progressive power lenses (PPLs), in this study, we propose and evaluate an eye-tracking-based method to assess visual performance when wearing PPLs. A wearable eye-tracker system (Tobii-Pro Glasses 3) recorded the pupil position of 27 PPL users at near and distance vision during a VA test while wearing three PPL designs: a PPL for general use (PPL-Balance), a PPL optimized for near vision (PPL-Near), and a PPL optimized for distance vision (PPL-Distance). The participants were asked to recognize eye charts at both near and distance vision using centered and oblique gaze directions with each PPL design. The results showed no statistically significant differences between PPLs for VA. However, significant differences in eye-tracking parameters were observed between PPLs. Furthermore, PPL-Distance had a lower test duration, complete fixation time, and number of fixations at the distance evaluation. PPL-Near had a lower test duration, complete fixation time, and number of fixations for near vision. In conclusion, the quality of vision with PPLs can be better characterized by incorporating eye movement parameters than by the traditional evaluation method.
(This article belongs to the Special Issue Eye-Tracking Technologies: Theory, Methods and Applications)
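The fixation metrics named in this abstract (number of fixations, complete fixation time) are commonly derived with a dispersion-threshold (I-DT) detector; a minimal sketch follows, with illustrative thresholds and synthetic gaze data rather than values from the study:

```python
# Dispersion-threshold (I-DT) fixation detection: grow a window while the
# combined x/y dispersion stays under a threshold, and keep windows that
# last at least min_dur seconds. Thresholds here are illustrative.
def idt_fixations(t, x, y, disp_thresh=1.0, min_dur=0.1):
    fixes, i = [], 0
    while i < len(t):
        j = i
        while j + 1 < len(t) and (max(x[i:j+2]) - min(x[i:j+2])
                                  + max(y[i:j+2]) - min(y[i:j+2])) <= disp_thresh:
            j += 1
        if t[j] - t[i] >= min_dur:
            fixes.append((t[i], t[j]))   # (start, end) of one fixation
            i = j + 1
        else:
            i += 1
    return fixes

t = [i / 100.0 for i in range(60)]   # 100 Hz samples over 0.6 s
x = [0.0] * 30 + [5.0] * 30          # one gaze shift at 0.3 s
y = [0.0] * 60
fixes = idt_fixations(t, x, y)
n_fix = len(fixes)                   # number of fixations
total = sum(b - a for a, b in fixes) # complete fixation time (s)
```

From these two quantities, per-lens comparisons of the kind reported in the abstract can be made.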

23 pages, 7699 KiB  
Article
Testing Road Vehicle User Interfaces Concerning the Driver’s Cognitive Load
by Viktor Nagy, Gábor Kovács, Péter Földesi, Dmytro Kurhan, Mykola Sysyn, Szabolcs Szalai and Szabolcs Fischer
Infrastructures 2023, 8(3), 49; https://doi.org/10.3390/infrastructures8030049 - 9 Mar 2023
Cited by 18 | Viewed by 5014
Abstract
This paper investigates the usability of touch screens used in mass-production road vehicles. Our goal is to provide a detailed comparison of conventional physical buttons and capacitive touch screens, taking the human factor into account. The pilot test focuses on a specific Non-driving Related Task (NDRT): the control of the on-board climate system using a touch screen panel versus rotating knobs and push buttons. Psychological parameters, functionality, usability, and the ergonomics of In-Vehicle Information Systems (IVIS) were evaluated using a specific questionnaire, a system usability scale (SUS), workload assessment (NASA-TLX), and a physiological sensor system. The measurements are based on a wearable eye-tracker that provides fixation points of the driver’s gaze in order to detect distraction. The closed road used for the naturalistic driving study was provided by the ZalaZONE Test Track, Zalaegerszeg, Hungary. Objective and subjective results of the pilot study indicate that the control of touch screen panels causes higher visual, manual, and cognitive distraction than the use of physical buttons. The statistical analysis demonstrated that conventional techniques need to be complemented in order to better represent human behavior differences.
(This article belongs to the Special Issue Land Transport, Vehicle and Railway Engineering)
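The NASA-TLX workload score mentioned in this abstract is conventionally a weighted average of six subscale ratings, with weights obtained from 15 pairwise comparisons. A minimal sketch; the subscale names follow the standard TLX procedure, and the numbers are illustrative, not values from the study:

```python
# NASA-TLX: six subscales rated 0-100; weights (0-5) come from 15
# pairwise comparisons, so the weights always sum to 15.
def nasa_tlx(ratings, weights):
    assert set(ratings) == set(weights)
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in ratings) / 15.0

ratings = {"mental": 70, "physical": 30, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 45}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
score = nasa_tlx(ratings, weights)  # overall workload on a 0-100 scale
```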

16 pages, 806 KiB  
Article
Eye Movement Alterations in Post-COVID-19 Condition: A Proof-of-Concept Study
by Cecilia García Cena, Mariana Campos Costa, Roque Saltarén Pazmiño, Cristina Peixoto Santos, David Gómez-Andrés and Julián Benito-León
Sensors 2022, 22(4), 1481; https://doi.org/10.3390/s22041481 - 14 Feb 2022
Cited by 17 | Viewed by 3923
Abstract
There is much evidence pointing out eye movement alterations in several neurological diseases. To the best of our knowledge, this is the first video-oculography study describing potential alterations of eye movements in the post-COVID-19 condition. Visually guided saccades, memory-guided saccades, and antisaccades in the horizontal axis were measured. In all visual tests, the stimulus was deployed with a gap condition. The duration of the test was between 5 and 7 min per participant. A group of n=9 patients with the post-COVID-19 condition was included in this study. Values were compared with a group (n=9) of healthy volunteers whom the SARS-CoV-2 virus had not infected. Features such as centripetal and centrifugal latencies, success rates in memory saccades, antisaccades, and blinks were computed. We found that patients with the post-COVID-19 condition had eye movement alterations, mainly in centripetal latency in visually guided saccades, the success rate in the memory-guided saccade test, and latency in antisaccades and its standard deviation, which suggests the involvement of frontoparietal networks. Further work is required to understand these eye movement alterations and their functional consequences.
(This article belongs to the Special Issue Wearable Technologies and Applications for Eye Tracking)
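Saccade latency, the central measure in this abstract, is conventionally the delay between stimulus onset and the first gaze sample whose velocity crosses a threshold. A hedged sketch; the 30 deg/s threshold and the sample trace are illustrative, not taken from the paper:

```python
def saccade_latency(t, x, onset, vel_thresh=30.0):
    """Return latency (s) from stimulus onset to the first sample whose
    point-to-point velocity (deg/s) exceeds vel_thresh, or None."""
    for i in range(1, len(t)):
        if t[i] <= onset:
            continue
        v = abs(x[i] - x[i - 1]) / (t[i] - t[i - 1])
        if v > vel_thresh:
            return t[i] - onset
    return None

# 1 kHz samples: fixation at 0 deg, then a 300 deg/s saccade ramp
t = [i / 1000.0 for i in range(400)]
x = [0.0 if ti < 0.28 else (ti - 0.28) * 300.0 for ti in t]
lat = saccade_latency(t, x, onset=0.1)  # latency relative to onset at 0.1 s
```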

19 pages, 821 KiB  
Article
Hybrid FPGA–CPU-Based Architecture for Object Recognition in Visual Servoing of Arm Prosthesis
by Attila Fejér, Zoltán Nagy, Jenny Benois-Pineau, Péter Szolgay, Aymar de Rugy and Jean-Philippe Domenger
J. Imaging 2022, 8(2), 44; https://doi.org/10.3390/jimaging8020044 - 12 Feb 2022
Cited by 3 | Viewed by 3836
Abstract
The present paper proposes an implementation of a hybrid hardware–software system for the visual servoing of prosthetic arms. We focus on the most critical vision analysis part of the system. The prosthetic system comprises a glass-worn eye tracker and a video camera, and the task is to recognize the object to grasp. The lightweight architecture for gaze-driven object recognition has to be implemented as a wearable device with low power consumption (less than 5.6 W). The algorithmic chain comprises gaze fixations estimation and filtering, generation of candidates, and recognition, with two backbone convolutional neural networks (CNN). The time-consuming parts of the system, such as SIFT (Scale Invariant Feature Transform) detector and the backbone CNN feature extractor, are implemented in FPGA, and a new reduction layer is introduced in the object-recognition CNN to reduce the computational burden. The proposed implementation is compatible with the real-time control of the prosthetic arm.
(This article belongs to the Special Issue Image Processing Using FPGAs 2021)
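The "generation of candidates" step described in this abstract amounts to cropping windows around filtered gaze fixations for the recognition CNN. A minimal sketch of that idea; the window size and edge-clamping policy are assumptions, not details from the paper:

```python
# Gaze-driven candidate generation: crop a fixed-size window centered on
# the filtered fixation point, clamped so it stays inside the frame.
def candidate_window(fix_x, fix_y, img_w, img_h, size=224):
    half = size // 2
    x0 = min(max(fix_x - half, 0), img_w - size)
    y0 = min(max(fix_y - half, 0), img_h - size)
    return (x0, y0, size, size)  # (left, top, width, height) in pixels

# Fixation near the top edge of a 1280x720 frame: the crop is clamped.
win = candidate_window(600, 50, img_w=1280, img_h=720)
```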

14 pages, 2620 KiB  
Article
Experimental Analysis of Driver Visual Characteristics in Urban Tunnels
by Song Fang and Jianxiao Ma
Appl. Sci. 2021, 11(9), 4274; https://doi.org/10.3390/app11094274 - 8 May 2021
Cited by 19 | Viewed by 2594
Abstract
Through an urban tunnel-driving experiment, this paper studies the changing trend of drivers’ visual characteristics in tunnels. A Tobii Pro Glasses 2 wearable eye tracker was used to measure pupil diameter, scanning time, and fixation point distribution of the driver during driving. A two-step clustering algorithm and the data-fitting method were used to analyze the experimental data. The results show that univariate clustering analysis of drivers’ pupil diameter change rate has poor discrimination, because the change rate during “dark adaptation” is large, while the change rate during “bright adaptation” is relatively smooth. The univariate and bivariate clustering results of drivers’ pupil diameters were all placed into three categories, with reasonable distribution and suitable differentiation. The clustering results accurately corresponded to different locations in the tunnel. The clustering method proposed in this paper can identify similar behaviors of drivers at different locations: the transition section at the tunnel entrance, the inner section, and the area outside the tunnel. Through data-fitting of drivers’ visual characteristic parameters in different tunnels, it was found that a short tunnel, with a length of less than 1 km, has little influence on visual characteristics: the maximum pupil diameter is small, and the percentage of saccades is relatively low. An urban tunnel with a length between 1 and 2 km has a significant influence on visual characteristics. In this range, with the increase in tunnel length, the maximum pupil diameter increases significantly, and the percentage of saccades increases rapidly. When the tunnel length exceeds 2 km, the maximum pupil diameter does not continue to increase. The longer the urban tunnel, the more discrete the distribution of drivers’ gaze points. The research results should provide a scientific basis for the design of urban tunnel traffic safety facilities and traffic organization.
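The three-category clustering of pupil diameter change rates described in this abstract can be illustrated with a simple one-dimensional clustering pass. The paper used a two-step clustering algorithm; plain k-means is substituted here as a stand-in, and the change-rate values are synthetic:

```python
# Illustrative 1-D k-means (k=3) over pupil-diameter change rates.
def kmeans_1d(xs, k=3, iters=50):
    cents = sorted(xs[::max(1, len(xs) // k)][:k])  # spread-out seeds
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - cents[j]))].append(x)
        cents = [sum(g) / len(g) if g else cents[j] for j, g in enumerate(groups)]
    return cents

# Synthetic change rates: slow (outside), moderate (transition), fast (entrance)
rates = [0.02, 0.03, 0.05, 0.31, 0.35, 0.33, 0.71, 0.68, 0.74]
centers = sorted(kmeans_1d(rates, k=3))
```

Each center then corresponds to one tunnel zone, mirroring how the paper maps clusters to locations.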

21 pages, 5037 KiB  
Article
Deep-Learning-Based Pupil Center Detection and Tracking Technology for Visible-Light Wearable Gaze Tracking Devices
by Wei-Liang Ou, Tzu-Ling Kuo, Chin-Chieh Chang and Chih-Peng Fan
Appl. Sci. 2021, 11(2), 851; https://doi.org/10.3390/app11020851 - 18 Jan 2021
Cited by 28 | Viewed by 8642
Abstract
In this study, for the application of visible-light wearable eye trackers, a pupil tracking methodology based on deep-learning technology is developed. By applying deep-learning object detection technology based on the You Only Look Once (YOLO) model, the proposed pupil tracking method can effectively estimate and predict the center of the pupil in the visible-light mode. Using the developed YOLOv3-tiny-based model to test the pupil tracking performance, the detection accuracy is as high as 80%, and the recall rate is close to 83%. In addition, the average visible-light pupil tracking errors of the proposed YOLO-based deep-learning design are smaller than 2 pixels for the training mode and 5 pixels for the cross-person test, which are much smaller than those of the previous ellipse-fitting design without deep-learning technology under the same visible-light conditions. After incorporating the calibration process, the average gaze tracking errors of the proposed YOLOv3-tiny-based pupil tracking models are smaller than 2.9 and 3.5 degrees in the training and testing modes, respectively, and the proposed visible-light wearable gaze tracking system performs at up to 20 frames per second (FPS) on the GPU-based software embedded platform.
(This article belongs to the Special Issue Applications of Cognitive Infocommunications (CogInfoCom))
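Framing pupil tracking as object detection, as this abstract does, means the pupil center falls out of the detected bounding box, and the reported pixel error is the distance to a ground-truth center. A sketch under that framing; the `(x, y, w, h)` box format is a common YOLO-style convention assumed here, not the paper's exact interface:

```python
import math

# Hypothetical detection format: (x, y, w, h) in pixels, top-left origin.
def pupil_center(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def pixel_error(pred_box, true_center):
    """Euclidean distance (pixels) between predicted and true centers."""
    cx, cy = pupil_center(pred_box)
    return math.hypot(cx - true_center[0], cy - true_center[1])

err = pixel_error((100, 80, 20, 18), true_center=(111.0, 90.0))
```

An error of this kind, averaged over frames, gives the sub-2-pixel training-mode figure quoted above.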

2 pages, 39 KiB  
Article
From Lab-Based Studies to Eye-Tracking in Virtual and Real Worlds: Conceptual and Methodological Problems and Solutions. Symposium 4 at the 20th European Conference on Eye Movement Research (Ecem) in Alicante, 20.8.2019
by Ignace T. C. Hooge, Roy S. Hessels, Diederick C. Niehorster, Gabriel J. Diaz, Andrew T. Duchowski and Jeff B. Pelz
J. Eye Mov. Res. 2019, 12(7), 1-2; https://doi.org/10.16910/jemr.12.7.8 - 25 Nov 2019
Cited by 5 | Viewed by 127
Abstract
Wearable mobile eye trackers have great potential as they allow the measurement of eye movements during daily activities such as driving, navigating the world, and doing groceries. Although mobile eye trackers have been around for some time, developing and operating these eye trackers was generally a highly technical affair. As such, mobile eye-tracking research was not feasible for most labs. Nowadays, many mobile eye trackers are available from eye-tracking manufacturers (e.g., Tobii, Pupil Labs, SMI, Ergoneers), and various implementations in virtual/augmented reality have recently been released. The wide availability has caused the number of publications using a mobile eye tracker to increase quickly. Mobile eye tracking is now applied in vision science, educational science, developmental psychology, marketing research (using virtual and real supermarkets), clinical psychology, usability, architecture, medicine, and more. Yet, transitioning from lab-based studies where eye trackers are fixed to the world to studies where eye trackers are fixed to the head presents researchers with a number of problems. These problems range from the conceptual frameworks used in world-fixed and head-fixed eye tracking and how they relate to each other, to the lack of data quality comparisons and field tests of the different mobile eye trackers, and how the gaze signal can be classified or mapped to the visual stimulus. Such problems need to be addressed in order to understand how world-fixed and head-fixed eye-tracking research can be compared and to understand the full potential and limits of what mobile eye tracking can deliver. In this symposium, we bring together presenting researchers from five different institutions (Lund University, Utrecht University, Clemson University, Birkbeck University of London, and Rochester Institute of Technology) addressing problems and innovative solutions across the entire breadth of mobile eye-tracking research. Hooge, presenting Hessels et al.’s paper, focuses on the definitions of fixations and saccades held by researchers in the eye-movement field and argues how they need to be clarified in order to allow comparisons between world-fixed and head-fixed eye-tracking research. Diaz et al. introduce machine-learning techniques for classifying the gaze signal in mobile eye-tracking contexts where head and body are unrestrained. Niehorster et al. compare data quality of mobile eye trackers during natural behavior and discuss the application range of these eye trackers. Duchowski et al. introduce a method for automatically mapping gaze to faces using computer vision techniques. Pelz et al. employ state-of-the-art techniques to map fixations to objects of interest in the scene video and align grasp and eye-movement data in the same reference frame to investigate the guidance of eye movements during manual interaction.
20 pages, 7042 KiB  
Article
FaceLooks: A Smart Headband for Signaling Face-to-Face Behavior
by Taku Hachisu, Yadong Pan, Soichiro Matsuda, Baptiste Bourreau and Kenji Suzuki
Sensors 2018, 18(7), 2066; https://doi.org/10.3390/s18072066 - 28 Jun 2018
Cited by 13 | Viewed by 7118
Abstract
Eye-to-eye contact and facial expressions are key communicators, yet little has been done to evaluate the basic properties of face-to-face (mutual head orientation) behavior. This may be because there is no practical device available to measure the behavior. This paper presents a novel headband-type wearable device called FaceLooks, used for measuring the time spent in the face-to-face state together with the identity of the partner, using an infrared emitter and receiver. It can also be used for behavioral healthcare applications, such as for children with developmental disorders who exhibit difficulties with the behavior, by providing awareness through visual feedback from the partner’s device. Two laboratory experiments showed the device’s detection range and response time, tested with a pair of dummy heads. Another laboratory experiment, done with human participants wearing gaze trackers, showed the device’s substantial agreement with a human observer. We then conducted two field studies involving children with intellectual disabilities and/or autism spectrum disorders. The first study showed that the devices could be used in the school setting, observing that the children did not remove the devices. The second study showed that the durations of children’s face-to-face behavior could be increased under a visual feedback condition. The device shows its potential to be used in therapeutic and experimental fields because of its wearability and its ability to quantify and shape face-to-face behavior.
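The core measurement in this abstract, time in the mutual face-to-face state with partner identity, reduces to checking that each device's IR receiver currently sees the other's ID. A minimal sketch; the fixed sampling tick and the set-of-IDs message format are assumptions, not the FaceLooks protocol:

```python
# Two headbands each report which partner IDs they currently receive over
# IR; a pair counts as face-to-face only when detection is mutual.
def face_to_face_time(samples, id_a, id_b, dt=0.1):
    """Total time (s) devices A and B mutually detect each other.
    samples: list of (seen_by_a, seen_by_b) ID sets, one per dt tick."""
    return sum(dt for seen_a, seen_b in samples
               if id_b in seen_a and id_a in seen_b)

# Four ticks: mutual, one-sided, one-sided, mutual -> 2 mutual ticks
samples = [({"B"}, {"A"}), ({"B"}, set()), (set(), {"A"}), ({"B"}, {"A"})]
mutual = face_to_face_time(samples, id_a="A", id_b="B")
```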

22 pages, 7488 KiB  
Article
Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement
by Weiyuan Pan, Dongwook Jung, Hyo Sik Yoon, Dong Eun Lee, Rizwan Ali Naqvi, Kwan Woo Lee and Kang Ryoung Park
Sensors 2016, 16(9), 1396; https://doi.org/10.3390/s16091396 - 31 Aug 2016
Cited by 7 | Viewed by 6694
Abstract
Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users’ head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience, and interest.
(This article belongs to the Section Physical Sensors)
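The link this abstract draws between head-movement range and camera viewing angle can be made concrete with back-of-envelope geometry: the lens must cover the lateral head-movement range at the working distance. This is only an illustration of the relationship; the paper derives its design from measured head-movement amounts and velocities, and the numbers below are hypothetical:

```python
import math

# Required horizontal viewing angle so the eye stays in frame given a
# lateral head-movement range (mm) at a camera working distance (mm).
def required_fov_deg(movement_range_mm, distance_mm, margin_mm=20):
    half = movement_range_mm / 2.0 + margin_mm   # half-range plus margin
    return 2.0 * math.degrees(math.atan(half / distance_mm))

# e.g. 200 mm of lateral head movement observed at a 600 mm distance
fov = required_fov_deg(movement_range_mm=200, distance_mm=600)
```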

19 pages, 1778 KiB  
Article
Enhanced Perception of User Intention by Combining EEG and Gaze-Tracking for Brain-Computer Interfaces (BCIs)
by Jong-Suk Choi, Jae Won Bang, Kang Ryoung Park and Mincheol Whang
Sensors 2013, 13(3), 3454-3472; https://doi.org/10.3390/s130303454 - 13 Mar 2013
Cited by 22 | Viewed by 7636
Abstract
Speller UI systems tend to be less accurate because of individual variation and the noise of EEG signals. Therefore, we propose a new method that combines EEG signals and gaze-tracking. This research is novel in the following four aspects. First, two wearable devices are combined to simultaneously measure both the EEG signal and the gaze position. Second, the speller UI system usually has a 6 × 6 matrix of alphanumeric characters, which has the disadvantage that the number of characters is limited to 36. Thus, a 12 × 12 matrix that includes 144 characters is used. Third, in order to reduce the highlighting time of each of the 12 × 12 rows and columns, only three rows and three columns (which are determined on the basis of the 3 × 3 area centered on the user’s gaze position) are highlighted. Fourth, by analyzing the P300 EEG signal that is obtained only when each of the 3 × 3 rows and columns is highlighted, the accuracy of selecting the correct character is enhanced. The experimental results showed that the accuracy of the proposed method was higher than that of the other methods.
(This article belongs to the Section Physical Sensors)
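The gaze-gated highlighting in the third point above, restricting flashes to the 3 × 3 block around the gaze position in a 12 × 12 matrix, can be sketched in a few lines. The pixel-to-cell mapping and cell size are assumptions for illustration; only the 3 × 3 selection rule comes from the abstract:

```python
# Map a gaze point to a grid cell, then pick the three rows and three
# columns of the 3x3 block centered on it (clamped to the 12x12 grid).
def gaze_cell(gx, gy, cell=40):
    """Map a gaze point (pixels) to a (row, col) cell index."""
    return int(gy // cell), int(gx // cell)

def highlight_sets(row, col, n=12):
    r0 = min(max(row - 1, 0), n - 3)
    c0 = min(max(col - 1, 0), n - 3)
    return list(range(r0, r0 + 3)), list(range(c0, c0 + 3))

r, c = gaze_cell(200, 130)
rows, cols = highlight_sets(r, c)  # 6 highlights instead of all 24
```

Only these six rows/columns are flashed, which is what shortens the highlighting cycle relative to flashing all 24.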
