Search Results (23)

Search Parameters:
Keywords = visual learning
Page = 2

23 pages, 605 KB  
Article
Integrating Cognitive Factors and Eye Movement Data in Reading Predictive Models for Children with Dyslexia and ADHD-I
by Norberto Pereira, Maria Armanda Costa and Manuela Guerreiro
J. Eye Mov. Res. 2023, 16(4), 1-23; https://doi.org/10.16910/jemr.16.4.6 - 21 Mar 2024
Cited by 3 | Viewed by 1686
Abstract
This study reports on several specific neurocognitive processes and eye-tracking predictors of reading outcomes for a sample of children with Developmental Dyslexia (DD) and Attention-Deficit/Hyperactivity Disorder – inattentive subtype (ADHD-I) compared to typical readers. Participants included 19 typical readers, 21 children diagnosed with ADHD-I and 19 children with DD. All participants were attending 4th grade and had a mean age of 9.08 years. The psycholinguistic profile of each group was assessed using a battery of neuropsychological and linguistic tests. Participants completed a silent reading task with lexical manipulation of the text. Multinomial logistic regression was conducted to evaluate the capability of the following measures to predict the development of dyslexia or ADHD-I: (a) a linguistic model that included measures of phonological awareness, rapid naming, and reading fluency and accuracy; (b) a cognitive neuropsychological model that included measures of memory, attention, visual processes, and cognitive or intellectual functioning; and (c) an additive model of lexical word properties with manipulation of word-frequency and word-length effects through eye-tracking. The additive model, in conjunction with the neuropsychological model, improved the prediction of who develops dyslexia or ADHD-I, with typical readers as the baseline. Several of the neuropsychological and eye-tracking variables have power to predict the degree of reading outcomes in children with learning disabilities. Full article
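As an illustration of the multinomial setup described in this abstract (not the authors' code), the sketch below fits a plain softmax regression with gradient descent. The feature names and data are hypothetical stand-ins, not the study's measures.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial(X, y, n_classes, lr=0.1, steps=500):
    """Gradient-descent multinomial logistic regression (cross-entropy loss)."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]  # one-hot targets
    for _ in range(steps):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / n  # gradient step on the average loss
    return W

rng = np.random.default_rng(0)
# Hypothetical predictors standing in for the linguistic measures
# (phonological awareness, rapid naming, reading fluency); classes:
# 0 = typical reader, 1 = ADHD-I, 2 = dyslexia.
X = np.c_[np.ones(60), rng.normal(size=(60, 3))]  # intercept + 3 features
y = rng.integers(0, 3, size=60)
W = fit_multinomial(X, y, n_classes=3)
probs = softmax(X[:1] @ W)  # class-membership probabilities for one child
```

With typical readers coded as class 0, comparing the fitted probabilities across feature sets mirrors the abstract's model comparison.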
20 pages, 7618 KB  
Article
Design and Evaluation of an Asynchronous VR Exploration System for Architectural Design Discussion Content
by Hsuan-Ming Chang, Ting-Wei Hsu, Ming-Han Tsai, Sabarish V. Babu and Jung-Hong Chuang
Virtual Worlds 2024, 3(1), 1-20; https://doi.org/10.3390/virtualworlds3010001 - 27 Dec 2023
Cited by 1 | Viewed by 2460
Abstract
Design discussion is crucial in the architectural design process. To enhance the spatial understanding of 3D space and discussion effectiveness, recently, some systems have been proposed to support design discussion interactively in an immersive virtual environment. The entire design discussion can be archived and potentially become course materials for future learners. In this paper, we propose an asynchronous VR exploration system that aims to help learners explore content effectively and efficiently anywhere and at any time. To improve effectiveness and efficiency, we also propose a summarization-to-detail approach with the application space by which students can observe the visualization of spatial summarization of actions and participants’ dwell time or the temporal distribution of dialogues and then locate the important or interesting region or dialogue for further exploration. To further explore the discussion content, students can call the preview to see the time-lapse animation of the object operation to understand the change in models or playback to view the discussion details. We conducted an exploratory user study with 10 participants to evaluate user experience, user impression, and effectiveness of learning the design discussion course content using our asynchronous VR design discussion content exploration system. The results indicate that the interactive VR exploration system presented can help learners study the design discussion content effectively. Participants also provided some positive feedback and confirmed the usefulness and value of the system presented. Our applications and lessons learned have implications for future asynchronous VR exploration systems, not only for architectural design discussion content, but also for other applications, such as industrial visual inspections and educational visualizations of design discussions. Full article
17 pages, 6588 KB  
Article
Autoencoder-Based Visual Anomaly Localization for Manufacturing Quality Control
by Devang Mehta and Noah Klarmann
Mach. Learn. Knowl. Extr. 2024, 6(1), 1-17; https://doi.org/10.3390/make6010001 - 21 Dec 2023
Cited by 22 | Viewed by 7011
Abstract
Manufacturing industries require the efficient and voluminous production of high-quality finished goods. In the context of Industry 4.0, visual anomaly detection is a promising approach to automatically controlling product quality with high precision. In general, automation based on computer vision can prevent bottlenecks at the product quality checkpoint. We considered recent advancements in machine learning to improve visual defect localization, but challenges persist in obtaining a balanced feature set and a database covering the wide variety of defects occurring in the production line. Hence, this paper proposes a defect-localizing autoencoder with unsupervised class selection, obtained by clustering the features extracted from a pretrained VGG16 network with k-means. Moreover, the selected classes of defects are augmented with natural wild textures to simulate artificial defects. The study demonstrates the effectiveness of the defect-localizing autoencoder with unsupervised class selection for improving defect detection in manufacturing industries. The proposed methodology shows promising results, with precise and accurate localization of quality defects on melamine-faced boards for the furniture industry. Incorporating artificial defects into the training data shows significant potential for practical implementation in real-world quality control scenarios. Full article
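The unsupervised class-selection step, clustering pretrained-network features with k-means, can be sketched as below. This is not the paper's pipeline: the "features" here are synthetic Gaussian vectors standing in for pooled VGG16 activations, and the k-means is a minimal Lloyd's-algorithm implementation.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means over feature vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each feature vector to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# Hypothetical 128-D feature vectors standing in for pretrained-CNN
# activations; two synthetic groups imitate two visual classes of patches.
features = np.vstack([rng.normal(m, 0.3, size=(40, 128)) for m in (0.0, 2.0)])
labels, _ = kmeans(features, k=2)
```

The resulting cluster labels would then drive which defect classes are selected and augmented for autoencoder training.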
16 pages, 970 KB  
Article
Behind the Scenes: Impact of Virtual Backgrounds in Educational Videos on Visual Processing and Learning Outcomes
by Leen Catrysse, Andrienne Kerckhoffs and Halszka Jarodzka
J. Eye Mov. Res. 2023, 16(3), 1-16; https://doi.org/10.16910/jemr.16.3.4 - 19 Oct 2023
Cited by 7 | Viewed by 2358
Abstract
The increasing use of instructional videos in educational settings has emphasized the need for a deeper understanding of their design requirements. This study investigates the impact of virtual backgrounds in educational videos on students' visual information processing and learning outcomes. Participants aged 14–17 (N = 47) were randomly assigned to one of three conditions: a video with a neutral, authentic, or off-topic background. Their prior knowledge and working memory capacity (WMC) were measured before watching the video, and eye tracking data was collected during the viewing. Learning outcomes and student experiences were assessed after viewing. The eye tracking data revealed that a neutral background was the least distracting, allowing students to pay better attention to relevant parts of the video. Students found the off-topic background most distracting, but the negative effect on learning outcomes was not statistically significant. In contrast to expectations, no positive effect was observed for the authentic background. Furthermore, WMC had a significant impact on visual information processing and learning outcomes. These findings suggest that educators should consider using neutral backgrounds in educational videos, particularly for learners with lower WMC. Consequently, this research underscores the significance of careful design considerations in the creation of instructional videos. Full article
14 pages, 1548 KB  
Article
An Investigation of Feed-Forward and Feedback Eye Movement Training in Immersive Virtual Reality
by David J. Harris, Mark R. Wilson, Martin I. Jones, Toby de Burgh, Daisy Mundy, Tom Arthur, Mayowa Olonilua and Samuel J. Vine
J. Eye Mov. Res. 2022, 15(3), 1-14; https://doi.org/10.16910/jemr.15.3.7 - 12 Jun 2023
Cited by 3 | Viewed by 723
Abstract
The control of eye gaze is critical to the execution of many skills. The observation that task experts in many domains exhibit more efficient control of eye gaze than novices has led to the development of gaze training interventions that teach these behaviours. We aimed to extend this literature by i) examining the relative benefits of feed-forward (observing an expert’s eye movements) versus feed-back (observing your own eye movements) training, and ii) automating this training within virtual reality. Serving personnel from the British Army and Royal Navy were randomised to either feed-forward or feed-back training within a virtual reality simulation of a room search and clearance task. Eye movement metrics – including visual search, saccade direction, and entropy – were recorded to quantify the efficiency of visual search behaviours. Feed-forward and feed-back eye movement training produced distinct learning benefits, but both accelerated the development of efficient gaze behaviours. However, we found no evidence that these more efficient search behaviours transferred to better decision making in the room clearance task. Our results suggest integrating eye movement training principles within virtual reality training simulations may be effective, but further work is needed to understand the learning mechanisms. Full article
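The abstract lists entropy among its eye-movement metrics without defining it; one common definition is the Shannon entropy of the distribution of fixated areas, sketched below on made-up fixation sequences. The AOI names are hypothetical.

```python
import math
from collections import Counter

def gaze_entropy(aoi_sequence):
    """Shannon entropy (bits) of the distribution of fixated AOIs; higher
    values indicate a more dispersed, less focused visual search."""
    counts = Counter(aoi_sequence)
    n = len(aoi_sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical fixation sequences in a room-search scene.
focused = ["door", "door", "corner", "door", "door", "corner"]
scattered = ["door", "window", "corner", "floor", "ceiling", "wall"]
print(gaze_entropy(scattered) > gaze_entropy(focused))  # True
```

A trainee whose entropy drops over sessions is, by this measure, converging on a more expert-like, selective search pattern.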
18 pages, 1984 KB  
Article
Influence of Eye Movements on Academic Performance: A Bibliometric and Citation Network Analysis
by Adrián Salgado-Fernández, Ana Vázquez-Amor, Cristina Alvarez-Peregrin, Clara Martinez-Perez, Cesar Villa-Collar and Miguel Ángel Sánchez-Tena
J. Eye Mov. Res. 2022, 15(4), 1-18; https://doi.org/10.16910/jemr.15.4.4 - 7 Sep 2022
Cited by 7 | Viewed by 779
Abstract
Background: How eye movements influence reading and learning ability has been studied for many years. The objective of this study is to determine the relationships between the different publications and authors, as well as to identify the different areas of research on ocular movement. Methods: Web of Science was the database used to search for publications from 1900 to May 2021, using the terms “Eye movement” AND “Academic achiev*”. The analysis of the publications was performed using the CitNetExplorer, VOSviewer and CiteSpace software. Results: 4391 publications and 11033 citation networks were found. The year with the most publications was 2018, with a total of 318 publications and 10 citation networks. The most cited publication was “Saccade target selection and object recognition: evidence for a common attentional mechanism”, published by Deubel et al. in 1999, with a citation index of 214. Using the clustering function, nine groups were found that cover the main research areas in this field: neurological, age, perceptual attention, visual disturbances, sports, driving, sleep, vision therapy and academic performance. Conclusion: Although this is a multidisciplinary field of study, the topic with the most publications to date is the visual search procedure at the neurological level. Full article
11 pages, 464 KB  
Brief Report
Virtual Baby: 3D Model of the Anatomy and Physiology of Sucking and Swallowing in Infants as an Educational Tool
by Flávia Rebelo Puccini, Marina Gatti, Antônio de Castro Rodrigues, Silmara Rondon-Melo, Chao Lung Wen, Roberta Lopes de Castro Martinelli and Giédre Berretin-Felix
Int. J. Orofac. Myol. Myofunct. Ther. 2022, 48(1), 1-11; https://doi.org/10.52010/ijom.2022.48.1.4 - 15 Jul 2022
Cited by 3 | Viewed by 5211
Abstract
Objective: This project aimed to develop and update a dynamic three-dimensional (3D) graphic video learning object demonstrating a current knowledge of the anatomy and physiology of sucking and swallowing in newborns during breastfeeding. Method: To build and update the 3D computer graphics iconographies of the “Virtual Baby”, we defined objectives for the learning object, created a literature review-based script, and organized a guide for structural (static) and functional (dynamic) graphical modeling for the designer. Results: Using 3D computer graphics, we produced a video with static images (anatomical structural) and dynamic sequences (most significant physiological and functional aspects and application of transparency to visualize the anatomical correlations between both). The video showed the anatomy and physiology of sucking and swallowing during breastfeeding. Its updates reflected additional scientific evidence as studies were published. Conclusion: Creation of the Virtual Baby provides a learning tool for visualizing the anatomy and physiology of sucking and swallowing in full-term newborns. The tool addresses the significant morphofunctional aspects of the breastfeeding process, supported by scientific literature, and can be used for student or professional training and in primary health care. Full article
13 pages, 3392 KB  
Article
Object-Gaze Distance: Quantifying Near-Peripheral Gaze Behavior in Real-World Applications
by Felix S. Wang, Julian Wolf, Mazda Farshad, Mirko Meboldt and Quentin Lohmeyer
J. Eye Mov. Res. 2021, 14(1), 1-13; https://doi.org/10.16910/jemr.14.1.5 - 19 May 2021
Cited by 8 | Viewed by 623
Abstract
Eye tracking (ET) has been shown to reveal the wearer’s cognitive processes through measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to take into account the wearer’s use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area of interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time series. Based on an evaluation of two AOIs in a real surgical procedure, the results show that a considerable increase in interpretable fixation data, from 23.8% to 78.3% for the AOI screw and from 4.5% to 67.2% for the AOI screwdriver, was achieved when incorporating the near-peripheral field of vision. Additionally, the evaluation of a multi-OGD time-series representation has shown the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments. Full article
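The core OGD quantity, the minimal 2D Euclidean pixel distance from the gaze point to an AOI, can be sketched as follows. The AOI mask and frame size here are hypothetical; in the paper, AOIs come from a machine-learning detector.

```python
import numpy as np

def object_gaze_distance(gaze, aoi_mask):
    """Minimal 2D Euclidean pixel distance from a gaze point (x, y) to an
    AOI given as a boolean pixel mask; 0 when the gaze lies inside the AOI."""
    ys, xs = np.nonzero(aoi_mask)
    return np.hypot(xs - gaze[0], ys - gaze[1]).min()

# Hypothetical 100x100 frame with a rectangular AOI (e.g., a detected screw).
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
print(object_gaze_distance((50, 50), mask))  # gaze inside the AOI -> 0.0
print(object_gaze_distance((50, 30), mask))  # 10 px above the AOI -> 10.0
```

Evaluating this per video frame yields the continuous gaze-based time series the abstract describes, in which small nonzero distances capture near-peripheral viewing.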
11 pages, 2539 KB  
Article
Developing Expert Gaze Pattern in Laparoscopic Surgery Requires More than Behavioral Training
by Sicong Liu, Rachel Donaldson, Ashwin Subramaniam, Hannah Palmer, Cosette D. Champion, Morgan L. Cox and L. Gregory Appelbaum
J. Eye Mov. Res. 2021, 14(2), 1-11; https://doi.org/10.16910/jemr.14.2.2 - 10 Mar 2021
Cited by 11 | Viewed by 746
Abstract
Expertise in laparoscopic surgery is realized through both manual dexterity and efficient eye movement patterns, creating opportunities to use gaze information in the educational process. To better understand how expert gaze behaviors are acquired through deliberate practice of technical skills, three surgeons were assessed and five novices were trained and assessed in a 5-visit protocol on the Fundamentals of Laparoscopic Surgery peg transfer task. The task was adjusted to have a fixed action sequence to allow recordings of dwell durations based on pre-defined areas of interest (AOIs). Trained novices were shown to reach more than 98% (M = 98.62%, SD = 1.06%) of their behavioral learning plateaus, leading to equivalent behavioral performance to that of surgeons. Despite this equivalence in behavioral performance, surgeons continued to show significantly shorter dwell durations at visual targets of current actions and longer dwell durations at future steps in the action sequence than trained novices (ps ≤ .03, Cohen’s ds > 2). This study demonstrates that, while novices can train to match surgeons on behavioral performance, their gaze pattern is still less efficient than that of surgeons, motivating surgical training programs to involve eye tracking technology in their design and evaluation. Full article
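Aggregating dwell durations over predefined AOIs, the measure compared between surgeons and trained novices above, reduces to summing fixation durations per label. The sketch below uses made-up labels and durations, not the study's data.

```python
from collections import defaultdict

def dwell_durations(fixations):
    """Sum fixation durations (ms) per predefined AOI. `fixations` is a list
    of (aoi_label, duration_ms) pairs, a simplified stand-in for classified
    eye-tracking output."""
    totals = defaultdict(float)
    for aoi, dur in fixations:
        totals[aoi] += dur
    return dict(totals)

# Hypothetical recording; the labels echo the fixed-action-sequence design,
# which lets each fixation be attributed to a step in the task.
fix = [("current_target", 250), ("next_target", 120), ("current_target", 300)]
result = dwell_durations(fix)
print(result)  # {'current_target': 550.0, 'next_target': 120.0}
```

In the study's terms, experts would show a smaller `current_target` total and a larger `next_target` total than trained novices.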
19 pages, 999 KB  
Article
A Quantitative Analysis of the Taxonomy of Artistic Styles
by Viviane Clay, Johannes Schrumpf, Yannick Tessenow, Helmut Leder, Ulrich Ansorge and Peter König
J. Eye Mov. Res. 2020, 13(2), 1-19; https://doi.org/10.16910/jemr.13.2.5 - 20 Jun 2020
Cited by 7 | Viewed by 958
Abstract
Classifying artists and their work as distinct art styles has been an important task of scholars in the field of art history. Due to its subjectivity, scholars often contradict one another. Our project investigated differences in aesthetic qualities of seven art styles through quantitative means. This was achieved with state-of-the-art deep-learning paradigms to generate new images resembling the style of an artist or entire era. We conducted psychological experiments to measure the behavior of subjects when viewing these new art images. Two different experiments were used: In an eye-tracking study, subjects viewed art-style-specific generated images. Eye movements were recorded and then compared between art styles. In a visual singleton search study, subjects had to locate a style-outlier image among three images of an alternative style. Reaction time and accuracy were measured and analyzed. These experiments show that there are measurable differences in behavior when viewing images of varying art styles. From these differences, we constructed hierarchical clusterings relating art styles based on the different behaviors of subjects viewing the samples. Our study reveals a novel perspective on the classification of artworks into stylistic eras and motivates future research in the domain of empirical aesthetics through quantitative means. Full article
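Building a hierarchical clustering from pairwise behavioral differences, as the abstract describes, can be illustrated with a greedy single-linkage agglomeration. The style names and distance matrix below are invented for illustration, and the exact linkage the authors used is not stated in the abstract.

```python
import numpy as np

def single_linkage_merges(D, labels):
    """Greedy single-linkage agglomeration over a symmetric distance matrix,
    returning the sequence of cluster merges (closest pair first)."""
    clusters = [{i} for i in range(len(labels))]
    merges = []
    while len(clusters) > 1:
        # pick the pair of clusters with the smallest member-to-member distance
        a, b = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: min(D[p, q] for p in clusters[ij[0]]
                                      for q in clusters[ij[1]]))
        merges.append((sorted(labels[i] for i in clusters[a]),
                       sorted(labels[i] for i in clusters[b])))
        clusters[a] |= clusters[b]
        del clusters[b]
    return merges

# Hypothetical pairwise behavioral distances between four art styles.
styles = ["Baroque", "Rococo", "Cubism", "Dada"]
D = np.array([[0, 1, 5, 6],
              [1, 0, 5, 6],
              [5, 5, 0, 2],
              [6, 6, 2, 0]], dtype=float)
merges = single_linkage_merges(D, styles)
```

The merge order (here, Baroque with Rococo first) is what a dendrogram of style relationships would display.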
2 pages, 39 KB  
Article
From Lab-Based Studies to Eye-Tracking in Virtual and Real Worlds: Conceptual and Methodological Problems and Solutions. Symposium 4 at the 20th European Conference on Eye Movement Research (ECEM) in Alicante, 20.8.2019
by Ignace T. C. Hooge, Roy S. Hessels, Diederick C. Niehorster, Gabriel J. Diaz, Andrew T. Duchowski and Jeff B. Pelz
J. Eye Mov. Res. 2019, 12(7), 1-2; https://doi.org/10.16910/jemr.12.7.8 - 25 Nov 2019
Cited by 7 | Viewed by 376
Abstract
Wearable mobile eye trackers have great potential, as they allow the measurement of eye movements during daily activities such as driving, navigating the world and doing groceries. Although mobile eye trackers have been around for some time, developing and operating these eye trackers was generally a highly technical affair. As such, mobile eye-tracking research was not feasible for most labs. Nowadays, many mobile eye trackers are available from eye-tracking manufacturers (e.g., Tobii, Pupil Labs, SMI, Ergoneers) and various implementations in virtual/augmented reality have recently been released. The wide availability has caused the number of publications using a mobile eye tracker to increase quickly. Mobile eye tracking is now applied in vision science, educational science, developmental psychology, marketing research (using virtual and real supermarkets), clinical psychology, usability, architecture, medicine, and more. Yet, transitioning from lab-based studies where eye trackers are fixed to the world to studies where eye trackers are fixed to the head presents researchers with a number of problems. These problems range from the conceptual frameworks used in world-fixed and head-fixed eye tracking and how they relate to each other, to the lack of data quality comparisons and field tests of the different mobile eye trackers, and how the gaze signal can be classified or mapped to the visual stimulus. Such problems need to be addressed in order to understand how world-fixed and head-fixed eye-tracking research can be compared and to understand the full potential and limits of what mobile eye tracking can deliver. In this symposium, we bring together presenting researchers from five different institutions (Lund University, Utrecht University, Clemson University, Birkbeck University of London and Rochester Institute of Technology) addressing problems and innovative solutions across the entire breadth of mobile eye-tracking research. Hooge, presenting the paper by Hessels et al., focuses on the definitions of fixations and saccades held by researchers in the eye-movement field and argues how they need to be clarified in order to allow comparisons between world-fixed and head-fixed eye-tracking research. Diaz et al. introduce machine-learning techniques for classifying the gaze signal in mobile eye-tracking contexts where head and body are unrestrained. Niehorster et al. compare data quality of mobile eye trackers during natural behavior and discuss the application range of these eye trackers. Duchowski et al. introduce a method for automatically mapping gaze to faces using computer vision techniques. Pelz et al. employ state-of-the-art techniques to map fixations to objects of interest in the scene video and align grasp and eye-movement data in the same reference frame to investigate the guidance of eye movements during manual interaction. Full article
22 pages, 26061 KB  
Article
Effects of Individuality, Education, and Image on Visual Attention: Analyzing Eye-Tracking Data using Machine Learning
by Sangwon Lee, Yongha Hwang, Yan Jin, Sihyeong Ahn and Jaewan Park
J. Eye Mov. Res. 2019, 12(2), 1-22; https://doi.org/10.16910/jemr.12.2.4 - 16 Jul 2019
Cited by 7 | Viewed by 408
Abstract
Machine learning, particularly classification algorithms, constructs mathematical models from labeled data that can predict labels for new data. Using its capability to identify distinguishing patterns among multi-dimensional data, we investigated the impact of three factors on the observation of architectural scenes: individuality, education, and image stimuli. An analysis of the eye-tracking data revealed that (1) a velocity histogram was unique to individuals, (2) students of architecture and other disciplines could be distinguished via endogenous parameters, but (3) they were more distinct in terms of seeking structural versus symbolic elements. Because of the reverse nature of the classification algorithms that automatically learn from data, we could identify relevant parameters and distinguishing eye-tracking patterns that have not been reported in previous studies. Full article
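The velocity histogram found to be unique to individuals is, in its simplest form, a normalized histogram of gaze-point speeds. The sketch below computes one from a synthetic random-walk gaze trace; the bin count, speed cap, and 60 Hz sampling rate are assumptions for illustration, not values from the paper.

```python
import numpy as np

def velocity_histogram(gaze_xy, dt, bins=20, vmax=500.0):
    """Normalized histogram of gaze-point speeds (px/s), the kind of
    per-person feature a classifier can use to identify individuals."""
    v = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) / dt
    hist, _ = np.histogram(np.clip(v, 0, vmax), bins=bins, range=(0, vmax))
    return hist / hist.sum()  # normalize so recordings of any length compare

rng = np.random.default_rng(0)
trace = np.cumsum(rng.normal(0, 2, size=(300, 2)), axis=0)  # synthetic trace
h = velocity_histogram(trace, dt=1 / 60)
```

Each participant's recordings yield one such fixed-length vector, which is what makes the histogram usable as input to a classification algorithm.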
28 pages, 2610 KB  
Article
The Function of “Looking-at-Nothing” for Sequential Sensorimotor Tasks: Eye Movements to Remembered Action-Target Locations
by Rebecca M. Foerster
J. Eye Mov. Res. 2019, 12(2), 1-28; https://doi.org/10.16910/jemr.12.2.2 - 27 Jun 2019
Cited by 3 | Viewed by 337
Abstract
When performing manual actions, eye movements precede hand movements to target locations: Before we grasp an object, we look at it. Eye-hand guidance is even preserved when visual targets are unavailable, e.g., grasping behind an occlusion. This “looking-at-nothing” behavior might be functional, e.g., as a “deictic pointer” for manual control or as a memory-retrieval cue, or it might be a by-product of automatization. Here, we study whether looking at empty locations before acting on them benefits sensorimotor performance. In five experiments, participants completed a click sequence on eight visual targets for 0–100 trials while they had either to fixate on the screen center or could move their eyes freely. During 50–100 consecutive trials, participants clicked the same sequence on a blank screen with free or fixed gaze. During both phases, participants looked at target locations when gaze shifts were allowed. With visual targets, target fixations led to faster, more precise clicking, fewer errors, and sparser cursor paths than central fixation. Without visual information, a tiny free-gaze benefit could sometimes be observed and was rather a memory benefit than a motor-calculation benefit. Interestingly, central fixation during learning forced early explicit encoding, causing a strong benefit for acting on remembered targets later, independent of whether the eyes moved then. Full article
20 pages, 4498 KB  
Article
Digital Sketch Maps and Eye Tracking Statistics as Instruments to Obtain Insights Into Spatial Cognition
by Merve Keskin, Kristien Ooms, Ahmet Ozgur Dogru and Philippe De Maeyer
J. Eye Mov. Res. 2018, 11(3), 1-20; https://doi.org/10.16910/jemr.11.3.4 - 15 Jun 2018
Cited by 18 | Viewed by 559
Abstract
This paper explores map users' cognitive processes in learning, acquiring and remembering information presented via screen maps. In this context, we conducted a mixed-methods user experiment employing digital sketch maps and eye tracking. On the one hand, the performance of the participants was assessed based on the order in which the objects were drawn and the influence of visual variables (e.g., presence & location, size, shape, color). On the other hand, trial durations and eye-tracking statistics, such as average duration of fixations and number of fixations per second, were compared. Moreover, selected AOIs (Areas of Interest) were explored to gain a deeper insight into the visual behavior of map users. Depending on the normality of the data, we used either a two-way ANOVA or the Mann-Whitney U test to inspect the significance of the results. Based on the evaluation of the drawing order, we observed that experts and males drew roads first, whereas novices and females focused more on hydrographic objects. According to the assessment of the drawn elements, no significant differences emerged between experts and novices, or between females and males, for the retrieval of spatial information presented on 2D maps with a simple design and content. The differences in trial durations between novices and experts were not statistically significant for either studying or drawing. Similarly, no significant difference occurred between female and male participants for either studying or drawing. Eye-tracking metrics also supported these findings. For average duration of fixation, no significant difference was found between experts and novices, or between females and males. Similarly, no significant differences were found for the mean number of fixations. Full article
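The nonparametric fallback mentioned in this abstract, the Mann-Whitney U test, can be sketched from scratch; real analyses would use a standard statistics package, and the trial durations below are invented. This sketch uses the normal approximation for the p-value (without a tie correction), which is only reasonable for larger samples.

```python
import math

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic with a two-sided normal-approximation
    p-value; used instead of ANOVA when normality does not hold."""
    n1, n2 = len(a), len(b)
    # U counts the (a_i, b_j) pairs with a_i > b_j, with 0.5 credit for ties.
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p

# Hypothetical trial durations (seconds) for two groups of participants.
experts = [12.1, 11.4, 13.0, 12.7, 11.9]
novices = [12.5, 13.1, 11.8, 12.9, 13.4]
u, p = mann_whitney_u(experts, novices)
```

A p-value above the chosen threshold would correspond to the "no significant difference" outcomes the abstract reports.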
12 pages, 8214 KB  
Article
Visualizing the Reading Activity of People Learning to Read
by Oleg Špakov, Harri Siirtola, Howell Istance and Kari-Jouko Räihä
J. Eye Mov. Res. 2017, 10(5), 1-12; https://doi.org/10.16910/jemr.10.5.5 - 15 Nov 2017
Cited by 19 | Viewed by 503
Abstract
Several popular visualizations of gaze data, such as scanpaths and heatmaps, can be used independently of the viewing task. For a specific task, such as reading, more informative visualizations can be created. We have developed several such techniques, some dynamic and some static, to communicate the reading activity of children to primary school teachers. The goal of the visualizations was to highlight the reading skills to a teacher with no background in the theory of eye movements or eye tracking technology. Evaluations of the techniques indicate that, as intended, they serve different purposes and were appreciated by the school teachers differently. Dynamic visualizations help to give the teachers a good understanding of how the individual students read. Static visualizations help in getting a simple overview of how the children read as a group and of their active vocabulary. Full article
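Of the task-independent visualizations this abstract mentions, the heatmap is the simplest to sketch: a 2D histogram of fixation positions. The grid size and normalized coordinates below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fixation_heatmap(fix_xy, shape=(24, 32)):
    """Static heatmap of fixation positions as a normalized 2D histogram.
    `fix_xy` holds (x, y) coordinates normalized to [0, 1]."""
    heat, _, _ = np.histogram2d(fix_xy[:, 1], fix_xy[:, 0],
                                bins=shape, range=[[0, 1], [0, 1]])
    return heat / max(heat.max(), 1)  # scale to [0, 1] for display

rng = np.random.default_rng(0)
fix = rng.uniform(size=(200, 2))  # hypothetical normalized fixation points
hm = fixation_heatmap(fix)
```

Task-specific reading visualizations, as the paper argues, would go further by exploiting line structure and word boundaries rather than raw position density.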