Communication

Cognitive Vergence Recorded with a Webcam-Based Eye-Tracker during an Oddball Task in an Elderly Population

1 Department of Cognition, Development and Educational Psychology, University of Barcelona, 08035 Barcelona, Spain
2 Bioinformatics and Biomedical Signals Laboratory, Polytechnical University of Catalonia, 08028 Barcelona, Spain
3 Assessment Unit of Cognition and Attention in Learning, Psychology Clinic, 08035 Barcelona, Spain
4 Braingaze S.L., 08302 Mataró, Spain
5 Institute of Neurosciences (UBNeuro), University of Barcelona, 08035 Barcelona, Spain
6 Institut de Recerca Sant Joan de Déu (IRSJD), 08950 Barcelona, Spain
7 Catalan Institution for Research and Advanced Studies (ICREA), 08010 Barcelona, Spain
* Author to whom correspondence should be addressed.
Sensors 2024, 24(3), 888; https://doi.org/10.3390/s24030888
Submission received: 17 November 2023 / Revised: 24 January 2024 / Accepted: 25 January 2024 / Published: 30 January 2024
(This article belongs to the Section Biomedical Sensors)

Abstract

(1) Background: Our previous research provides evidence that vergence eye movements may significantly influence cognitive processing and could serve as a reliable measure of cognitive issues. The rise of consumer-grade eye tracking technology, which uses sophisticated imaging techniques in the visible light spectrum to determine gaze position, is noteworthy. In our study, we explored the feasibility of using webcam-based eye tracking to monitor the vergence eye movements of patients with Mild Cognitive Impairment (MCI) during a visual oddball paradigm. (2) Methods: While participants performed the task, we recorded their eyes with a webcam and simultaneously recorded eye positions using a remote infrared-based pupil eye tracker. (3) Results: Both tracking methods effectively captured vergence eye movements and demonstrated robust cognitive vergence responses, with participants exhibiting larger vergence eye movement amplitudes in response to targets than to distractors. (4) Conclusions: In summary, the use of a consumer-grade webcam to record cognitive vergence shows potential. This method could lay the groundwork for future research aimed at creating an affordable screening tool for mental health care.

1. Introduction

Our eyes, constantly in motion, play a pivotal role in visual information processing. Even when our gaze is steady, tiny eye movements, known as fixational eye movements or microsaccades, are crucial. These movements not only prevent the loss of conscious vision [1], but also aid in attention shifts [2,3], enhance visual sensitivity [4,5], and improve visual acuity [6,7].
Vergence, another form of eye movement [8,9,10,11,12], involves the eyes moving in opposite directions to achieve and maintain single binocular vision. Our research has identified a new role for vergence eye movements in cognitive processing. We observed that the eyes briefly converge following the presentation of a visual stimulus [13]. These vergence responses are more pronounced when the stimulus is attended, perceived, or retained in memory (e.g., see [13,14,15]), indicating a potential role of vergence in attention. Additional evidence comes from observations that individuals with attentional difficulties exhibit poor vergence responses during an attentional task [16]. We refer to this phenomenon as cognitive vergence. However, some authors suggest that vergence estimates from infrared eye trackers reflect both rotation of the eye and pupil dynamics [17].
Cognitive vergence responses appear early and grow as stimulus processing deepens, to the point where their strength correlates with behavioral performance. This suggests that vergence responses could predict the extent to which a stimulus is processed, so measuring cognitive vergence could potentially serve as an objective marker for detecting cognitive problems. Indeed, AI classifier models using cognitive vergence responses as input have successfully identified patients with ADHD [16] and Mild Cognitive Impairment (MCI) [18]. In patients with MCI, attended stimuli are accompanied by only a weak enhancement, whereas patients with Alzheimer’s disease show no difference in vergence responses to attended and unattended stimuli. Such models can even predict the risk of patients with MCI developing Alzheimer’s disease [19].
Eye gaze tracking is typically performed using specially designed devices that employ infrared light to detect the size and center of the pupil and estimate gaze position. This technology often involves relatively expensive devices, limiting its widespread adoption and primarily confining its use to research and assistive applications. However, more applications are being developed, such as those for artistic purposes [20] and human–robot interfaces [21,22]. Additionally, new methods are emerging [23], some of which utilize advanced imaging techniques in the visible light spectrum to estimate gaze position from the iris of the eye [24]. This advancement paves the way for consumer-grade eye tracking technology that could potentially be used to detect mental health conditions by measuring cognitive vergence. In this study, we explored the feasibility of such a technique by testing patients with MCI. Participants performed a brief computerized visual oddball paradigm while cognitive vergence eye movements were measured from images recorded by a webcam. Eye positions were also recorded simultaneously with a remote infrared-based pupil eye tracker.
Our results indicate that a differential vergence response to the oddball task stimuli (targets and distractors) can be measured with both a webcam-based iris tracker and an infrared-based pupil tracker. This signifies that vergence estimates captured by both tracker types reflect eye rotation to a greater extent than a misinterpretation of pupil diameter changes. Although the absolute magnitude of the vergence angle varied between trackers, the modulation pattern and index of the vergence responses were similar for both. The findings imply that employing a consumer-grade webcam could be a viable method for capturing cognitive vergence, which holds promise for future research aimed at creating an affordable screening instrument for mental health care.

2. Materials and Methods

2.1. Subjects

We conducted our study with participants recruited from a private day care center for the elderly in Barcelona. The clinical professional at the care center extended invitations to volunteers, and a total of 28 subjects (9 men and 19 women; mean (SD) age: 70.3 (6.8) years) willingly participated in the study. The Montreal Cognitive Assessment (MoCA) was administered to all participants to evaluate their cognitive abilities. The inclusion criterion was a MoCA score between 18 and 28 out of a possible 30 points.
The exclusion criteria were as follows: (1) history of neurological disease with clinically relevant impact on cognition (e.g., cerebrovascular disease); (2) severe psychiatric disorder; (3) presence of relevant visual problems; and (4) difficulty understanding spoken or written Spanish.

2.2. Ethical Statement

Participants and their relatives received detailed instructions for the experiments. Prior to enrollment, patients or family members signed written informed consent for participation in accordance with the Declaration of Helsinki. This study was approved by the ethics committee of the University of Barcelona.

2.3. Apparatus

We used the BGaze software (version 1.17.2; Braingaze SL, Mataró, Spain) on a laptop (MSI CX62 6QD) to present the visual stimuli and record eye position data. The faces of the participants were recorded with the integrated webcam (HD type, 30 fps, 720 p) while they performed the task. We chose a webcam of standard quality based on our pilot testing, which demonstrated satisfactory results. The screen (15.6″ HD) had a resolution of 1366 × 768 pixels, and the remote eye tracker (ET) was an X2-30 (30 Hz; Tobii Technology AB, Stockholm, Sweden).

2.4. Experimental Procedure

The task was performed in the living room of each patient’s home to approximate an operational setting with sufficient ambient light and no reflective light sources. However, due to variations in lighting between rooms, standard conditions could not be established (Figure 1B). The laptop was positioned on a table with the screen slightly inclined so that the entire face was captured by the webcam. The subjects were seated approximately 50 cm from the screen on which the stimuli were presented. No chin rest was used, so patients could move their heads freely, and they were allowed to wear corrective lenses.

2.5. Paradigm

The experiment employed a visual oddball paradigm comprising a sequence of 100 trials. Each trial began with a gray screen (mask) displayed for 2000 ms, followed by a centrally presented visual stimulus of equal duration (Figure 1). This stimulus consisted of an 11-character string of letters, randomly selected and varying in case. To avoid bias, these strings did not form acronyms or meaningful words. The strings were identical except for their color: in 80% of the trials, all characters were blue, while in the remaining 20%, they were red. Participants were instructed to focus on the screen and press a key only when the characters were red. Thus, red character strings served as targets, and blue ones as distractors. The stimuli were presented in random order. During the task, which lasted approximately 6 min, pupil positions were recorded with the remote eye tracker and the participant’s face was captured with the webcam.
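For concreteness, this trial structure can be sketched in a few lines of Python. The string length, durations, and 80/20 target ratio come from the description above; the function names and shuffling scheme are illustrative choices of ours, not the BGaze implementation, and the screening of strings for acronyms and words is omitted.

```python
import random
import string

N_TRIALS = 100
TARGET_RATIO = 0.20   # 20% red (target) trials, 80% blue (distractor) trials
STIM_LEN = 11         # 11-character letter string
MASK_MS = 2000        # gray mask duration
STIM_MS = 2000        # stimulus duration (equal to the mask)

def make_stimulus(n=STIM_LEN):
    """Random letter string of mixed case (word/acronym screening omitted)."""
    return "".join(random.choice(string.ascii_letters) for _ in range(n))

def make_trial_list():
    """Build 100 trials with a shuffled 20/80 mix of target and distractor colors."""
    n_targets = int(N_TRIALS * TARGET_RATIO)
    colors = ["red"] * n_targets + ["blue"] * (N_TRIALS - n_targets)
    random.shuffle(colors)  # random presentation order
    return [{"text": make_stimulus(), "color": c,
             "mask_ms": MASK_MS, "stim_ms": STIM_MS} for c in colors]

trials = make_trial_list()
```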

2.6. Webcam-Based Eye Tracking (WC)

To obtain cognitive vergence measurements from the webcam images, we used the model described in [25], which captures 3D head poses, facial expression deformations, and 3D eye gaze states using a single RGB camera. The system consists of several components. First, important facial features are automatically detected and tracked, and the optical flow of each pixel in the face region is computed. Then, a data-driven 3D facial reconstruction technique reconstructs the 3D head pose and large-scale expression deformations using multi-linear expression deformation models. Next, a DCNN-based segmentation method performs a frame-by-frame pixel extraction of the iris, including the pupil region, within the eye region bounded by the detected facial landmarks: the convolutional neural network predicts the probability that each pixel belongs to the iris. Additionally, the outer contour of the iris (i.e., the limbus) is extracted to further improve the robustness and accuracy of the gaze tracker. Finally, to track the gaze states in the video sequences, the geometric shape and 3D position of the eyeballs and the radius of the iris region, together with the limbus, are estimated.
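As a loose illustration of the segmentation step only (not the actual system of [25]), a per-pixel iris-probability map such as the network’s output can be reduced to an iris-center estimate by thresholding and taking the centroid. The threshold value, array sizes, and function name below are hypothetical.

```python
import numpy as np

def iris_center_from_probmap(prob, threshold=0.5):
    """Estimate the iris center (row, col) from a per-pixel iris-probability
    map, e.g., the output of a DCNN segmentation applied to the eye region.
    Pixels above `threshold` are treated as iris (including the pupil)."""
    mask = prob >= threshold
    if not mask.any():
        return None                      # no iris detected in this frame
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()      # centroid of the iris pixels

# Toy example: a synthetic probability map with a bright disc at (30, 40)
yy, xx = np.mgrid[0:64, 0:64]
prob = np.exp(-(((yy - 30) ** 2 + (xx - 40) ** 2) / 50.0))
print(iris_center_from_probmap(prob))    # ≈ (30.0, 40.0)
```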

2.7. Cognitive Vergence Calculation

Data points from the infrared eye tracker that did not correspond to valid pupil detections (i.e., whenever the validity score given by the eye tracker software had a non-zero value) were marked as invalid. Trials with too many invalid data points (15 points or more) were discarded, yielding an exclusion rate of 33%. Finally, interpolation was used to create sequences of evenly spaced points. In the case of the webcam eye tracker, all trials were included in the data analysis.
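A minimal sketch of this preprocessing, assuming per-trial arrays of sample times, vergence values, and the tracker’s validity codes (the variable names and the 60-point output grid, roughly 2 s at 30 Hz, are our assumptions):

```python
import numpy as np

MAX_INVALID = 15  # trials with 15 or more invalid samples are discarded

def preprocess_trial(t, v, validity, n_out=60):
    """t: sample times (ms); v: vergence samples; validity: tracker codes
    (0 = valid pupil detection). Returns an evenly spaced sequence,
    or None if the trial must be excluded."""
    invalid = validity != 0
    if invalid.sum() >= MAX_INVALID:
        return None                               # discard the trial
    t_valid, v_valid = t[~invalid], v[~invalid]
    t_even = np.linspace(t.min(), t.max(), n_out)
    return np.interp(t_even, t_valid, v_valid)    # linear interpolation
```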
To calculate vergence changes, we transformed the coordinates of the left and right eye provided by the eye tracker into angular values. Rather than the vergence angle γ itself, we focus on the relative vergence modulation V(t) = (γ(t) − γ0)/max|γ(t) − γ0|, where γ0 is the γ value at stimulus onset and the maximum is taken over all absolute values of the difference γ(t) − γ0 within the examined time window of each trial. Subtracting the onset value from each response yields relative changes. Subsequently, all V(t) sequences from trials in the same condition (target, distractor) were averaged to obtain ‘mean V(t)’ curves.
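In code, the relative vergence modulation V(t) and the condition-wise averaging might look as follows (a sketch; the conversion from eye tracker screen coordinates to angular values depends on the viewing geometry and is omitted here, and the function names are ours):

```python
import numpy as np

def vergence_modulation(gamma, onset_idx=0):
    """Relative vergence modulation V(t) = (γ(t) − γ0) / max|γ(t) − γ0|,
    where γ0 is the vergence angle at stimulus onset."""
    d = gamma - gamma[onset_idx]
    peak = np.abs(d).max()
    return d / peak if peak > 0 else d   # guard against division by zero

def mean_condition_curve(trials):
    """Average the V(t) curves over all preprocessed trials of one
    condition (target or distractor) to obtain a 'mean V(t)' curve."""
    return np.mean([vergence_modulation(g) for g in trials], axis=0)
```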

2.8. Data and Statistical Analysis

The peak vergence response was evaluated as the mean in the 400–433 ms window. Delayed responses were calculated as the average response strength over the window 600–1250 ms after stimulus onset. Modulation indices were calculated as mi = (T − D)/(T + D), where T (respectively, D) is the mean of the window-averaged vergence responses over all target (distractor) trials; for both tracker methods, the window limits were 300–600 ms. For statistical analysis, we performed a series of comparisons based on the two-tailed t-test across all accepted trials or subjects.
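The window averaging and modulation index can be sketched as follows, assuming mean V(t) curves sampled on a known time axis; scipy’s standard two-sided t-test stands in for the comparisons reported in the Results, and the function names are ours:

```python
import numpy as np
from scipy import stats

def window_mean(t_ms, v, lo, hi):
    """Mean of the response v over the time window [lo, hi] ms."""
    sel = (t_ms >= lo) & (t_ms <= hi)
    return v[sel].mean()

def modulation_index(t_ms, v_target, v_distractor, lo=300, hi=600):
    """mi = (T − D) / (T + D), with T and D the 300–600 ms
    window-averaged responses to targets and distractors."""
    T = window_mean(t_ms, v_target, lo, hi)
    D = window_mean(t_ms, v_distractor, lo, hi)
    return (T - D) / (T + D)

# Two-tailed t-test across accepted trials (or subjects), as in the text:
# t, p = stats.ttest_ind(target_values, distractor_values)
```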

3. Results

MoCA scores ranged from 11 to 25 (mean ± std: 16.8 ± 3.9) out of a possible 30, indicating that subjects had cognitive impairment. Three subjects were excluded from further analysis of the infrared eye tracker data because they did not provide valid pupil recordings, leaving a total of 1512 distractor trials and 350 target trials. Removing the same three participants from the webcam data did not significantly change the results; we therefore included all 28 subjects in the analysis of the webcam data. For the webcam, the total number of distractor trials was 2240 and the number of target trials was 560, of which 2218 and 551, respectively, were correctly recorded.

3.1. Cognitive Vergence Responses

Iris positions were extracted from the webcam images to calculate vergence responses separately for the target and distractor conditions. The average target response of all participants across trials shows a clear increase in vergence angle starting about 300 ms after stimulus onset and peaking at about 450 ms, followed by a delay response (Figure 2). The average peak response to targets (mean ± std: 0.036 ± 0.107) was numerically stronger than the initial (0–200 ms) averaged vergence response (mean ± std: −0.002 ± 0.060), although the difference did not reach significance (t = 1.60, p = 0.12). The average target delay response (mean ± std: 0.021 ± 0.083) was similar in strength to the initial response (t = 1.19, p = 0.24). Vergence eye movements during distractor trials showed neither a peak response (mean ± std: −0.013 ± 0.062) nor a clear delay response (mean ± std: 0.001 ± 0.056), although a small but significant response increase of 3.3 × 10−5 deg/ms was visible. See also Figure 3 and Table 1.
These findings suggest that webcam-based eye tracking can be used to assess cognitive function in individuals with MCI by measuring vergence eye movements during a visual oddball paradigm.

3.2. Infrared Eye Tracker (ET)

Simultaneously with the webcam recording, we recorded vergence responses with a remote infrared-based eye tracker, allowing us to compare both methods. The average vergence response to targets recorded with the infrared eye tracker showed a peak response around 500 ms followed by a delay response (Figure 2B). Of the subjects, 52% had a stronger peak response to targets than to distractors (Figure 4). The average peak response to targets (mean ± std: 0.076 ± 0.534) was significantly stronger (t = −2.24, p = 0.03) than the average response to distractors (mean ± std: 0.005 ± 0.530). The delay response in the distractor condition showed a strong increase starting at about 600 ms, reaching a maximum at 1100 ms.

3.3. Comparison of Webcam-Based versus Infrared Eye Tracking

The average curves for the initial and peak responses do not differ significantly between webcam-based and infrared-based eye tracking (t = 0.620, p = 0.535 and t = −0.311, p = 0.756, respectively), but the delayed responses differ significantly (t = 3.798, p ≈ 1.4 × 10−4).
To compare the differential vergence response recorded by the webcam-based eye tracker with that of the infrared-based eye tracker, we plotted the modulation indices per subject, using the 300–600 ms window. The responses to targets were stronger than those to distractors (i.e., the modulation index was positive) in 75.0% of subjects with the infrared-based eye tracker and in 66.6% with the webcam-based eye tracker (Figure 5). However, 28.4% of the participants (N = 6) showed a positive modulation with the webcam eye tracker but a negative modulation with the infrared eye tracker; seven participants showed the opposite pattern, and nine participants showed positive modulation with both trackers. The average modulation index across subjects was miET = 1.06 ± 2.69 for infrared-based eye tracking and miWC = 0.66 ± 4.51 for webcam-based eye tracking. This difference was not statistically significant (t = 0.40, p = 0.70), suggesting that both methods can effectively capture the differential vergence response.

4. Discussion

In this study, we compared cognitive vergence responses recorded with a consumer-grade webcam-based eye tracker to those captured simultaneously with a remote infrared-based pupil eye tracker during an oddball task. Both tracking methods effectively captured vergence eye movements and demonstrated robust cognitive vergence responses, with participants exhibiting larger vergence eye movement amplitudes in response to targets than to distractors. Although both trackers exhibited a similar temporal pattern of vergence responses, the absolute response amplitudes were smaller when recorded with the webcam-based tracker, particularly during the delay period, possibly due to differences in recording and computation methods. Additionally, the standard deviation of the responses from the webcam eye tracker was larger than that of the infrared eye tracker, indicating less precise responses. This aligns with previous studies demonstrating that while webcam eye tracking can serve as an alternative to infrared eye tracking, its spatial resolution remains inferior [26].
Yet, both the webcam-based and the infrared-based eye tracker produced stronger responses to targets than to distractors, in agreement with our previous study showing a differential vergence response in an elderly population [18]. This differential response is typically present in cognitively healthy subjects but is reduced or absent in those with cognitive impairment. Given that all participants in our current study had a history of cognitive impairment, as indicated by their MoCA scores, this could explain why some did not exhibit a differential vergence response. We conducted this study in an uncontrolled environment without the use of a chin rest, which may have introduced additional noise into the eye tracking data; the sensitivity of the trackers to this noise is unlikely to be identical, given their different signal detection methods. Despite differences in absolute magnitude, both tracking methods yielded similar temporal patterns of cognitive vergence responses and captured a differential response. This indicates that webcam eye tracking technology may be capable of detecting cognitive decline.
Early intervention in MCI with pharmaceutical treatment, cognitive therapy, or the adoption of a healthy lifestyle may help to prevent or delay the onset of Alzheimer’s disease. However, available biomarker tools are expensive and invasive, and more accessible solutions are needed. In line with previous reports, the assessment of vergence responses could be a candidate for further clinical research toward an objective, low-cost tool for consumers to monitor their cognitive health. In our earlier studies [27], we demonstrated that cognitive vergence is disrupted in individuals with neurodevelopmental disorders such as ADHD and ASD. Consequently, our technology holds promise for potential applications, including the detection of these disorders.

4.1. Cognitive Vergence and Pupil Responses?

The neural mechanisms that govern vergence and pupil size share some overlap, leading to a situation where a vergence eye movement can elicit a pupil response [28]. This interplay results in a complex behavioral relationship (see [13]). Infrared-based eye trackers estimate gaze position using pupil size, center, and corneal reflectance. Some suggest that these metrics may introduce errors in estimating eye movement amplitude [17,29,30,31]. They argue that cognitive vergence may represent pupil dynamics in addition to actual vergence movement [17]. However, other studies indicate that infrared-based tracking is comparable to the search coil method for measuring small fixational eye movements [32]. They also suggest that pupil-related errors may become negligible at viewing distances beyond 50 cm [27], which aligns with the distance we used in our study.
Our study employed a webcam-based eye tracker that estimates gaze by detecting the iris area and limbus. These measurements are independent of pupil detection and remain unaffected by changes in pupil size. We obtained a clear differential vergence response with the webcam-based eye tracker. This lends further support to the notion that cognitive vergence results from rotation of the eyeball, and that artifacts or errors in measurements of pupil size or corneal reflection may not be as influential in determining vergence estimates as previously suggested.

5. Conclusions

In conclusion, our findings suggest that a consumer-grade webcam holds promise as a potential tool for recording cognitive vergence. To establish it as an affordable screening aid, further research is required to validate its clinical effectiveness and demonstrate its applicability in the realm of mental health care. Potential future applications could involve the development of a consumer-oriented tool for regularly monitoring mental health conditions.

6. Patents

The intellectual property of the method used to detect cognitive disorders is protected by a patent.

Author Contributions

Conceptualization, H.S.; methodology, O.L.; software, O.L.; validation, A.R., O.L. and M.S.P.; formal analysis, A.R.; investigation, M.S.P.; resources, H.S.; data curation, A.R.; writing—original draft preparation, H.S.; writing—review and editing, A.R.; visualization, A.R. and O.L.; supervision, H.S.; project administration, H.S.; funding acquisition, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by AGAUR, Spain, grant number 2018 DI 75 and by a grant from the Spanish Ministry of Science (PGC2018-096074-B-I00). This research was also funded by PDC2022-133054-I00, financed by MCIN/AEI/10.13039/501100011033 and by the EU NextGenerationEU/PRTR.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of University of Barcelona (protocol code IRB00003099, 14 November 2019) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

Data within this study can be requested from H.S. via email.

Conflicts of Interest

Hans Supèr is the co-founder of Braingaze S.L., Spain. Oleksii Leonovych was an employee of Braingaze S.L., Spain. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Correction Statement

This article has been republished with a minor correction to the Funding statement. This change does not affect the scientific content of the article.

References

  1. Martinez-Conde, S.; Macknik, S.L.; Troncoso, X.G.; Dyar, T. Microsaccades counteract visual fading during fixation. Neuron 2006, 49, 297–305. [Google Scholar] [CrossRef]
  2. Engbert, R.; Kliegl, R. Microsaccades uncover the orientation of covert attention. Vis. Res. 2003, 43, 1035–1045. [Google Scholar] [CrossRef]
  3. Hafed, Z.M.; Clark, J.J. Microsaccades as an overt measure of covert attention shifts. Vis. Res. 2002, 42, 2533–2545. [Google Scholar] [CrossRef] [PubMed]
  4. Bonneh, Y.S.; Adini, Y.; Polat, U. Contrast sensitivity revealed by microsaccades. J. Vis. 2015, 15, 11. [Google Scholar] [CrossRef] [PubMed]
  5. Scholes, C.; McGraw, P.V.; Nyström, M.; Roach, N.W. Fixational eye movements predict visual sensitivity. Proc. R. Soc. B Biol. Sci. 2015, 282, 20151568. [Google Scholar] [CrossRef] [PubMed]
  6. Ko, H.-K.; Poletti, M.; Rucci, M. Microsaccades precisely relocate gaze in a high visual acuity task. Nat. Neurosci. 2010, 13, 1549–1553. [Google Scholar] [CrossRef] [PubMed]
  7. Rucci, M.; Desbordes, G. Contributions of fixational eye movements to the discrimination of briefly presented stimuli. J. Vis. 2003, 19, 852–864. [Google Scholar] [CrossRef] [PubMed]
  8. Collewijn, H.; Erkelens, C.J.; Steinman, R.M. Trajectories of the human binocular fixation point during Conjugate and non-conjugate gaze-shifts. Vis. Res. 1997, 37, 1049–1069. [Google Scholar] [CrossRef] [PubMed]
  9. Mon-Williams, M.; Tresilian, J.R.; Roberts, A. Vergence provides veridical depth perception from horizontal retinal image disparities. Exp. Brain Res. 2000, 133, 407–413. [Google Scholar] [CrossRef] [PubMed]
  10. Richard, W.; Miller, J.F. Convergence as a cue to depth. Percept. Psychophys. 1969, 5, 317–320. [Google Scholar] [CrossRef]
  11. Ritter, M. Effect of disparity and viewing distance on perceived depth. Percept. Psychophys. 1977, 22, 400–407. [Google Scholar] [CrossRef]
  12. Viguier, A.; Clément, G.; Trotter, Y. Distance perception within near visual space. Perception 2001, 30, 115–124. [Google Scholar] [CrossRef] [PubMed]
  13. Solé Puig, M.; Pérez Zapata, L.; Aznar-Casanova, J.A.; Supèr, H. A role of eye vergence in covert attention. PLoS ONE 2013, 8, e52955. [Google Scholar] [CrossRef] [PubMed]
  14. Solé Puig, M.; Pallarés, J.M.; Perez Zapata, L.; Puigcerver, L.; Cañete Crespillo, J.; Supèr, H. Attentional selection accompanied by eye vergence as revealed by event-related brain potentials. PLoS ONE 2016, 11, e0167646. [Google Scholar] [CrossRef] [PubMed]
  15. Solé Puig, M.; Romeo, A.; Cañete Crespillo, J.; Supèr, H. Eye vergence responses during a visual memory task. Neuroreport 2017, 28, 123–127. [Google Scholar] [CrossRef]
  16. Varela Casal, P.; Esposito, F.L.; Morata Martínez, I.; Capdevila, A.; Solé Puig, M.; de la Osa, N.; Ezpeleta, L.; Perera i Lluna, A.; Faraone, S.V.; Ramos-Quiroga, J.A.; et al. Clinical Validation of Eye Vergence as an Objective Marker for Diagnosis of ADHD in Children. J. Atten. Disord. 2018, 23, 599–614. [Google Scholar] [CrossRef]
  17. Hooge, I.T.; Hessels, R.S.; Nyström, M. Do pupil-based binocular video eye trackers reliably measure vergence? Vis. Res. 2019, 156, 1–9. [Google Scholar] [CrossRef]
  18. Jiménez, E.C.; Sierra-Marcos, A.; Romeo, A.; Hashemi, A.; Leonovych, O.; Bustos Valenzuela, P.; Solé Puig, M.; Supèr, H. Altered Vergence Eye Movements and Pupil Response of Patients with Alzheimer’s Disease and Mild Cognitive Impairment During an Oddball Task. J. Alzheimer’s Dis. 2021, 82, 421–433. [Google Scholar] [CrossRef]
  19. Hashemi, A.; Leonovych, O.; Jiménez, E.C.; Sierra-Marcos, A.; Romeo, A.; Bustos Valenzuela, P.; Solé Puig, M.; Moliner, J.L.; Tubau, E.; Supèr, H. Classification of MCI patients using vergence eye movements and pupil responses obtained during a visual oddball test. Aging Health Res. 2023, 3, 100121. [Google Scholar] [CrossRef]
  20. Scalera, L.; Seriani, S.; Gallina, P.; Lentini, M.; Gasparetto, A. Human–Robot Interaction through Eye Tracking for Artistic Drawing. Robotics 2021, 10, 54. [Google Scholar] [CrossRef]
  21. Sharma, V.K.; Biswas, P. Gaze Controlled Safe HRI for Users with SSMI. In Proceedings of the International Conference on Advanced Robotics (ICAR), Ljubljana, Slovenia, 6–10 December 2021; pp. 913–918. [Google Scholar] [CrossRef]
  22. Sharma, V.K.; Murthy, L.R.D.; Biswas, P. Comparing Two Safe Distance Maintenance Algorithms for a Gaze-Controlled HRI Involving Users with SSMI. ACM Trans. Access. Comput. 2022, 15, 1–23. [Google Scholar] [CrossRef]
  23. Shi, Y.; Yang, P.; Lei, R.; Liu, Z.; Dong, X.; Tao, X.; Chu, X.; Wang, Z.L.; Chen, X. Eye tracking and eye expression decoding based on transparent, flexible and ultra-persistent electrostatic interface. Nat. Commun. 2023, 14, 3315. [Google Scholar] [CrossRef] [PubMed]
  24. Valenti, R.; Staiano, J.; Sebe, N.; Gevers, T. Webcam-Based Visual Gaze Estimation. In Proceedings of the 15th International Conference on Image Analysis and Processing—ICIAP 2009, Vietri sul Mare, Italy, 8–11 September 2009; Volume 5716. [Google Scholar] [CrossRef]
  25. Wang, Z.; Chai, J.; Xia, S. Realtime and Accurate 3D Eye Gaze Capture with DCNN-based Iris and Pupil Segmentation. IEEE Trans. Vis. Comput. Graph. 2019, 27, 190–203. [Google Scholar] [CrossRef] [PubMed]
  26. Vos, M.; Minor, S.; Ramchand, G.C. Comparing infrared and webcam eye tracking in the Visual World Paradigm. Glossa Psycholinguist. 2022, 1, 1–37. [Google Scholar] [CrossRef]
  27. Jaschinski, W. Pupil size affects measures of eye position in video eye tracking: Implications for recording vergence accuracy. J. Eye Mov. Res. 2016, 9, 1–14. [Google Scholar]
  28. Feil, M.; Moser, B.; Abegg, M. The interaction of pupil response with the vergence system. Graefe’s Arch. Clin. Exp. Ophthalmol. 2017, 255, 2247–2253. [Google Scholar] [CrossRef] [PubMed]
  29. Drewes, J.; Zhu, W.; Hu, Y.; Hu, X. Smaller is better: Drift in gaze measurements due to pupil dynamics. PLoS ONE 2014, 9, e111197. [Google Scholar] [CrossRef] [PubMed]
  30. Holmqvist, K.; Blignaut, P. Small eye movements cannot be reliably measured by video-based P-CR eye-trackers. Behav. Res. Methods 2020, 52, 2098–2121. [Google Scholar] [CrossRef]
  31. Hooge, I.T.C.; Holmqvist, K.; Nyström, M. Are video-based pupil-CR eye trackers suitable for studying detailed dynamics of eye movements? Vis. Res. 2016, 128, 6–18. [Google Scholar] [CrossRef]
  32. McCamy, M.B.; Otero-Millan, J.; Leigh, R.J.; King, S.A.; Schneider, R.M.; Macknik, S.L.; Martinez-Conde, S. Simultaneous recordings of human microsaccades and drifts with a contemporary video eye tracker and the search coil technique. PLoS ONE 2015, 10, e0128428. [Google Scholar] [CrossRef]
Figure 1. Schematic representation of the oddball task (A) and an example of the set-up (B). A series of letters is presented for two seconds. In 80% of the trials, the letters were in blue color, and in the remaining 20% (oddball), the letters were in red color.
Figure 2. Average vergence eye responses to target and distractor stimuli of an oddball paradigm recorded with the webcam (WC, (A)) and IR (ET, (B)) eye tracker. Responses to targets are depicted in magenta and red traces and to distractors in (light) blue traces.
Figure 3. Scatter plot showing peak responses to targets versus peak responses to distractors, from webcam data. X axis: averaged vergence response to distractors; y axis: averaged vergence response to targets.
Figure 4. Scatter plot of peak responses, recorded with the remote infrared eye tracker, to targets versus distractors (x axis: averaged vergence response to distractors; y axis: averaged vergence response to targets).
Figure 5. Scatter plot showing the modulation indices of vergence responses recorded by the webcam-based eye tracker (miWC) compared to those recorded by the infrared-based eye tracker (miET), per subject.
Table 1. Summary of the results obtained with the webcam eye tracker (mean ± std).

              Initial Response     Peak Response       Delay Response
Target        −0.002 ± 0.060       0.036 ± 0.107       0.021 ± 0.083
Distractor    −0.004 ± 0.044       −0.013 ± 0.062      0.001 ± 0.056