Search Results (11)

Search Parameters:
Keywords = outdoor mobile eye-tracking

18 pages, 2527 KB  
Article
Advancing Mobile Neuroscience: A Novel Wearable Backpack for Multi-Sensor Research in Urban Environments
by João Amaro, Rafael Ramusga, Ana Bonifácio, André Almeida, João Frazão, Bruno F. Cruz, Andrew Erskine, Filipe Carvalho, Gonçalo Lopes, Ata Chokhachian, Daniele Santucci, Paulo Morgado and Bruno Miranda
Sensors 2025, 25(23), 7163; https://doi.org/10.3390/s25237163 - 24 Nov 2025
Viewed by 1288
Abstract
Rapid global urbanization has intensified the demand for sensing solutions that can capture the complex interactions between urban environments and human physical and mental health. Conventional laboratory-based approaches, while offering high experimental control, often lack ecological validity and fail to represent real-world exposures. To address this gap, we present the eMOTIONAL Cities Walker, a portable multimodal sensing platform designed as a wearable backpack unit for the synchronous collection of multimodal data in indoor and outdoor settings. The system integrates a suite of environmental sensors (covering microclimate, air pollution and acoustic monitoring) with physiological sensing technologies, including electroencephalography (EEG), mobile eye-tracking and wrist-based physiological monitoring. This configuration enables real-time acquisition of environmental and physiological signals in dynamic, naturalistic settings. Here, we describe the system’s technical architecture, sensor specifications, and field deployment across selected Lisbon locations, demonstrating its feasibility and robustness in urban environments. By bridging controlled laboratory paradigms with ecologically valid real-world sensing, this platform provides a novel tool to advance translational research at the intersection of sensor technology, human experience, and urban health.
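
As a rough illustration of what synchronous multimodal acquisition implies downstream, the sketch below aligns two asynchronously sampled streams (gaze and microclimate readings) onto a common timeline with a nearest-neighbour time join; the file names, column names, and sampling rates are assumptions for illustration, not part of the eMOTIONAL Cities Walker software.

```python
# Minimal alignment sketch (assumed file/column names, not the authors' pipeline):
# attach, to every gaze sample, the most recent environmental reading that is
# at most one second old.
import pandas as pd

gaze = pd.read_csv("gaze.csv", parse_dates=["timestamp"])             # e.g. 50-200 Hz
climate = pd.read_csv("microclimate.csv", parse_dates=["timestamp"])  # e.g. 1 Hz

merged = pd.merge_asof(
    gaze.sort_values("timestamp"),
    climate.sort_values("timestamp"),
    on="timestamp",
    direction="backward",
    tolerance=pd.Timedelta("1s"),
)
print(merged.head())
```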

25 pages, 1278 KB  
Review
Eye-Tracking Advancements in Architecture: A Review of Recent Studies
by Mário Bruno Cruz, Francisco Rebelo and Jorge Cruz Pinto
Buildings 2025, 15(19), 3496; https://doi.org/10.3390/buildings15193496 - 28 Sep 2025
Viewed by 2429
Abstract
This Scoping Review (ScR) synthesizes advances in architectural eye-tracking (ET) research published between 2010 and 2024. Drawing on 75 peer-reviewed studies that met clear inclusion criteria, it traces the field’s rapid expansion from only 20 experiments before 2018 to more than 45 new investigations in the three years thereafter, situating these developments within the longer historical evolution of ET hardware and analytical paradigms. The review maps 13 recurrent areas of application, focusing on design evaluation, wayfinding and spatial navigation, end-user experience, and architectural education. Across these domains, ET reliably reveals where occupants focus, for how long, and in what sequence, providing objective evidence that complements designer intuition and conventional post-occupancy surveys. Experts and novices might display distinct gaze signatures; for example, architects spend longer fixating on contextual and structural cues, whereas lay users dwell on decorative details, highlighting possible pedagogical opportunities. Despite these benefits, persistent challenges include data loss in dynamic or outdoor settings, calibration drift, single-user hardware constraints, and the need to triangulate gaze metrics with cognitive or affective measures. Future research directions emphasize integrating ET with virtual or augmented reality (VR/AR) to validate designs interactively, improving mobile tracking accuracy, and establishing shared datasets to enable replication and meta-analysis. Overall, the study demonstrates that ET is maturing into an indispensable, evidence-based lens for creating more intuitive, legible, and human-centered architecture.
(This article belongs to the Special Issue Emerging Trends in Architecture, Urbanization, and Design)

22 pages, 44857 KB  
Article
Quantifying Dwell Time With Location-based Augmented Reality: Dynamic AOI Analysis on Mobile Eye Tracking Data With Vision Transformer
by Julien Mercier, Olivier Ertz and Erwan Bocher
J. Eye Mov. Res. 2024, 17(3), 1-22; https://doi.org/10.16910/jemr.17.3.3 - 29 Apr 2024
Cited by 10 | Viewed by 2599
Abstract
Mobile eye tracking captures egocentric vision and is well-suited for naturalistic studies. However, its data is noisy, especially when acquired outdoors with multiple participants over several sessions. Area-of-interest analysis on moving targets is difficult because (A) the camera and objects move nonlinearly and may disappear from or reappear in the scene; and (B) off-the-shelf analysis tools are limited to linearly moving objects. As a result, researchers resort to time-consuming manual annotation, which limits the use of mobile eye tracking in naturalistic studies. We introduce a method based on a fine-tuned Vision Transformer (ViT) model for classifying frames with overlaid gaze markers. After fine-tuning a model for three epochs on a manually labelled training set comprising 1.98% (7845 frames) of our entire data, our model reached 99.34% accuracy as evaluated on hold-out data. We used the method to quantify participants’ dwell time on a tablet during the outdoor user test of a mobile augmented reality application for biodiversity education. We discuss the benefits and limitations of our approach and its potential to be applied in other contexts.
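
The core of the method is a binary frame classifier. The sketch below fine-tunes a torchvision ViT for three epochs on frames (with the gaze marker already drawn in) sorted into two class folders; the directory layout, checkpoint, and hyperparameters are illustrative assumptions rather than the authors' released code.

```python
# Fine-tuning sketch (assumed layout frames/train/{tablet,other}/*.jpg and
# assumed hyperparameters; this is not the authors' released code).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("frames/train", transform=tf)
loader = DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.vit_b_16(weights="IMAGENET1K_V1")
model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # binary head

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # the abstract reports fine-tuning for three epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Dwell time on the tablet = (frames classified as "tablet") / scene-camera fps.
```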

21 pages, 10099 KB  
Article
MYFix: Automated Fixation Annotation of Eye-Tracking Videos
by Negar Alinaghi, Samuel Hollendonner and Ioannis Giannopoulos
Sensors 2024, 24(9), 2666; https://doi.org/10.3390/s24092666 - 23 Apr 2024
Cited by 6 | Viewed by 3065
Abstract
In mobile eye-tracking research, the automatic annotation of fixation points is an important yet difficult task, especially in varied and dynamic environments such as outdoor urban landscapes. This complexity is increased by the constant movement and dynamic nature of both the observer and their environment in urban spaces. This paper presents a novel approach that integrates the capabilities of two foundation models, YOLOv8 and Mask2Former, as a pipeline to automatically annotate fixation points without requiring additional training or fine-tuning. Our pipeline leverages YOLO’s extensive training on the MS COCO dataset for object detection and Mask2Former’s training on the Cityscapes dataset for semantic segmentation. This integration not only streamlines the annotation process but also improves accuracy and consistency, ensuring reliable annotations even in complex scenes with multiple objects side by side or at different depths. Validation through two experiments showcases its efficiency, achieving 89.05% accuracy in a controlled data-collection setting and 81.50% accuracy in a real-world outdoor wayfinding scenario. With an average runtime per frame of 1.61 ± 0.35 s, our approach stands as a robust solution for automatic fixation annotation.
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)
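
A minimal sketch of the two-stage idea described in the abstract is shown below: a fixation point is labelled with the class of a YOLOv8 (MS COCO) detection that contains it, and otherwise with the Mask2Former (Cityscapes) semantic class at that pixel. The specific checkpoints and the helper function are assumptions, not the MYFix code.

```python
# Illustrative two-stage fixation labelling (assumed checkpoints and thresholds).
import torch
from PIL import Image
from ultralytics import YOLO
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

detector = YOLO("yolov8x.pt")  # MS COCO object detector
processor = AutoImageProcessor.from_pretrained(
    "facebook/mask2former-swin-small-cityscapes-semantic")
segmenter = Mask2FormerForUniversalSegmentation.from_pretrained(
    "facebook/mask2former-swin-small-cityscapes-semantic")

def annotate_fixation(frame_path, fx, fy):
    image = Image.open(frame_path).convert("RGB")

    # Stage 1: does any detected object box contain the fixation point?
    det = detector(image, verbose=False)[0]
    for box, cls in zip(det.boxes.xyxy.tolist(), det.boxes.cls.tolist()):
        x1, y1, x2, y2 = box
        if x1 <= fx <= x2 and y1 <= fy <= y2:
            return det.names[int(cls)]

    # Stage 2: fall back to the semantic-segmentation class at the pixel.
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = segmenter(**inputs)
    seg = processor.post_process_semantic_segmentation(
        outputs, target_sizes=[image.size[::-1]])[0]  # (H, W) class ids
    return segmenter.config.id2label[int(seg[int(fy), int(fx)])]

print(annotate_fixation("frame_000123.png", fx=512, fy=300))
```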

15 pages, 3253 KB  
Article
Usability Evaluation with Eye Tracking: The Case of a Mobile Augmented Reality Application with Historical Images for Urban Cultural Heritage
by Diana Szekely, Silviu Vert, Oana Rotaru and Diana Andone
Heritage 2023, 6(3), 3256-3270; https://doi.org/10.3390/heritage6030172 - 21 Mar 2023
Cited by 9 | Viewed by 6336
Abstract
Eye-tracking technologies have matured significantly in recent years and have become more affordable and easier to use. We investigated how eye-tracking technology can be applied to evaluate the usability of mobile augmented reality applications with historical images for urban cultural heritage. The experiment involved a series of complex user evaluation sessions, combining semi-structured interviews, observations, a think-aloud protocol, a SUS questionnaire, and product reaction cards, complemented by eye tracking, to gather insights on the Spotlight Timisoara AR mobile application, part of a digital storytelling multiplatform for the city of Timisoara (Romania), soon to be European Capital of Culture in 2023. The results indicate strong and weak aspects of the application, both as expressed by the participants and as derived from analyzing the eye-tracking data. The paper also lists the main challenges we identified in using eye-tracking equipment to evaluate the usability of such mobile augmented reality applications for urban outdoor heritage.
(This article belongs to the Special Issue Mixed Reality in Culture and Heritage)
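
For reference, the SUS questionnaire mentioned in the abstract is scored with the standard formula (odd items contribute the response minus 1, even items 5 minus the response, and the sum is scaled by 2.5). The sketch below is generic and not taken from the paper; the example responses are hypothetical.

```python
# Standard SUS scoring (generic, not from the paper). Responses are 1-5 Likert values.
def sus_score(responses):
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# One participant's ten responses (hypothetical values).
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```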

25 pages, 21829 KB  
Article
BiodivAR: A Cartographic Authoring Tool for the Visualization of Geolocated Media in Augmented Reality
by Julien Mercier, Nicolas Chabloz, Gregory Dozot, Olivier Ertz, Erwan Bocher and Daniel Rappo
ISPRS Int. J. Geo-Inf. 2023, 12(2), 61; https://doi.org/10.3390/ijgi12020061 - 9 Feb 2023
Cited by 12 | Viewed by 4775
Abstract
Location-based augmented reality technology for real-world, outdoor experiences is rapidly gaining popularity in a variety of fields such as engineering, education, and gaming. By anchoring media to geographic coordinates, it is possible to design immersive experiences remotely, without requiring an in-depth knowledge of the context. However, the creation of such experiences typically requires complex programming tools that are beyond the reach of mainstream users. We introduce BiodivAR, a web cartographic tool for the authoring of location-based AR experiences. Developed using a user-centered design methodology and open-source interoperable web technologies, it is the second iteration of an effort that started in 2016. It is designed to meet needs defined through use cases co-designed with end users and enables the creation of custom geolocated points of interest. This approach enabled substantial progress over the previous iteration. Its reliance on geolocation data to anchor augmented objects relative to the user’s position poses a set of challenges: on mobile devices, GNSS accuracy typically lies between 1 m and 30 m. Because of its impact on anchoring, this lack of accuracy can have deleterious effects on usability. We conducted a comparative user test using the application in combination with two different geolocation data types (GNSS versus RTK). While the test’s results are undergoing analysis, we hereby present a methodology for the assessment of our system’s usability based on the use of eye-tracking devices, geolocated traces and events, and usability questionnaires.
(This article belongs to the Special Issue Cartography and Geomedia)
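
Since the usability question hinges on GNSS versus RTK positioning accuracy, one simple way to quantify horizontal anchoring error is the great-circle distance between a logged fix and the surveyed coordinate of a point of interest. The sketch below is an illustrative assumption, not the authors' analysis code, and the coordinates are hypothetical.

```python
# Horizontal anchoring error as a haversine distance (illustrative only).
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    R = 6371000.0
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical logged fix vs. surveyed POI position.
print(round(haversine_m(46.7790, 6.6590, 46.77905, 6.65912), 2))
```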

19 pages, 11413 KB  
Article
3D Gaze Estimation Using RGB-IR Cameras
by Moayad Mokatren, Tsvi Kuflik and Ilan Shimshoni
Sensors 2023, 23(1), 381; https://doi.org/10.3390/s23010381 - 29 Dec 2022
Cited by 11 | Viewed by 5100
Abstract
In this paper, we present a framework for 3D gaze estimation intended to identify the user’s focus of attention in a corneal imaging system. The framework uses a headset that consists of three cameras: a scene camera and two eye cameras, an IR camera and an RGB camera. The IR camera is used to continuously and reliably track the pupil, and the RGB camera is used to acquire corneal images of the same eye. Deep learning algorithms are trained to detect the pupil in IR and RGB images and to compute a per-user 3D model of the eye in real time. Once the 3D model is built, the 3D gaze direction is computed starting from the eyeball center and passing through the pupil center to the outside world. This model can also be used to transform the pupil position detected in the IR image into its corresponding position in the RGB image and to detect the gaze direction in the corneal image. This technique circumvents the problem of pupil detection in RGB images, which is especially difficult and unreliable when the scene is reflected in the corneal images. In our approach, the auto-calibration process is transparent and unobtrusive. Users do not have to be instructed to look at specific objects to calibrate the eye tracker; they need only act and gaze normally. The framework was evaluated in a user study in realistic settings and the results are promising. It achieved a very low 3D gaze error (2.12°) and very high accuracy in acquiring corneal images (intersection over union, IoU = 0.71). The framework may be used in a variety of real-world mobile scenarios (indoors, indoors near windows, and outdoors) with high accuracy.
(This article belongs to the Special Issue Computer Vision in Human Analysis: From Face and Body to Clothes)
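
The gaze-direction step described in the abstract reduces to simple geometry once the per-user eye model is available: the gaze direction is the unit vector from the eyeball centre through the pupil centre. The coordinates in the sketch below are made up for illustration and are not from the paper.

```python
# Geometry sketch only (made-up coordinates, in millimetres).
import numpy as np

eyeball_center = np.array([0.0, 0.0, 0.0])    # eye-model origin
pupil_center = np.array([2.1, -0.8, 11.5])    # reconstructed pupil centre

gaze_dir = pupil_center - eyeball_center
gaze_dir /= np.linalg.norm(gaze_dir)

# Angular error between estimated and reference directions, i.e. how a
# per-sample value contributing to a mean error like 2.12 deg is computed.
reference = np.array([0.17, -0.07, 0.98])
reference /= np.linalg.norm(reference)
error_deg = np.degrees(np.arccos(np.clip(gaze_dir @ reference, -1.0, 1.0)))
print(gaze_dir, error_deg)
```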

25 pages, 11541 KB  
Article
CorrNet: Fine-Grained Emotion Recognition for Video Watching Using Wearable Physiological Sensors
by Tianyi Zhang, Abdallah El Ali, Chen Wang, Alan Hanjalic and Pablo Cesar
Sensors 2021, 21(1), 52; https://doi.org/10.3390/s21010052 - 24 Dec 2020
Cited by 52 | Viewed by 10178
Abstract
Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (fine-grained segment of signals) using only wearable, physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA) which we collected using a smart wristband and a wearable eye-tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that: (1) instance segment lengths between 1–4 s result in the highest recognition accuracies; (2) accuracies between laboratory-grade and wearable sensors are comparable, even under low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.
(This article belongs to the Special Issue Sensor Based Multi-Modal Emotion Recognition)
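
A much-simplified illustration of the correlation-based feature idea is sketched below: each fixed-length instance of a physiological signal is correlated with every other instance from the same stimulus. The segment length, sampling rate, and synthetic signal are assumptions, not the CorrNet implementation.

```python
# Correlation-based features between instances (simplified, synthetic data).
import numpy as np

rng = np.random.default_rng(0)
fs, seg_s = 64, 2                       # 64 Hz wearable signal, 2-s instances
eda = rng.standard_normal(fs * 60)      # one minute of (synthetic) EDA

instances = eda.reshape(-1, fs * seg_s)  # (n_instances, samples_per_instance)
corr = np.corrcoef(instances)            # pairwise Pearson correlation matrix

# Correlation-based feature vector for instance i: its correlations with all
# other instances from the same stimulus.
i = 0
features_i = np.delete(corr[i], i)
print(features_i.shape)
```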

14 pages, 646 KB  
Article
Comparing Written and Photo-Based Indoor Wayfinding Instructions Through Eye Fixation Measures and User Ratings as Mental Effort Assessments
by Laure De Cock, Pepijn Viaene, Kristien Ooms, Nico Van de Weghe, Ralph Michels, Alain De Wulf, Nina Vanhaeren and Philippe De Maeyer
J. Eye Mov. Res. 2019, 12(1), 1-14; https://doi.org/10.16910/jemr.12.1.1 - 9 Jan 2019
Cited by 11 | Viewed by 364
Abstract
The use of mobile pedestrian wayfinding applications is gaining importance indoors. However, compared to outdoors, much less research has been conducted on the most adequate ways to convey indoor wayfinding information to a user. An explorative study was conducted to compare two pedestrian indoor wayfinding applications, one text-based (SoleWay) and one image-based (Eyedog), in terms of mental effort. To do this, eye-tracking data and mental effort ratings were collected from 29 participants during two routes in an indoor environment. The results show that both textual instructions and photographs can enable a navigator to find his/her way with little or no cognitive effort or difficulty. However, these instructions must be in line with a user’s expectations of the route, which are based on his/her interpretation of the indoor environment at decision points. In this case, textual instructions offer the advantage that specific information can be explicitly and concisely shared with the user. Furthermore, the study drew attention to potential usability issues of the wayfinding aids (e.g., the incentive to swipe) and, as such, demonstrated the value of eye tracking and mental effort assessments in usability research.
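
A typical first pass over such data is to compare fixation-duration statistics between the two aids. The sketch below assumes a hypothetical CSV of fixation records with made-up column names and is not taken from the study.

```python
# Hypothetical aggregation: mean fixation duration per aid and route.
import pandas as pd

fix = pd.read_csv("fixations.csv")  # assumed columns: participant, aid, route, duration_ms
summary = (fix.groupby(["aid", "route"])["duration_ms"]
              .agg(["mean", "std", "count"])
              .round(1))
print(summary)
```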

13 pages, 1625 KB  
Article
The Central Bias in Day-to-Day Viewing
by Flora Ioannidou, Frouke Hermens and Timothy L. Hodgson
J. Eye Mov. Res. 2016, 9(6), 1-13; https://doi.org/10.16910/jemr.9.6.6 - 30 Sep 2016
Cited by 14 | Viewed by 785
Abstract
Eye tracking studies have suggested that, when viewing images centrally presented on a computer screen, observers tend to fixate the middle of the image. This so-called ‘central bias’ was later also observed in mobile eye tracking during outdoor navigation, where observers were found to fixate the middle of the head-centered video image. It is unclear, however, whether the extension of the central bias to mobile eye tracking in outdoor navigation may have been due to the relatively long viewing distances towards objects in this task and the constant turning of the body in the direction of motion, both of which may have reduced the need for large-amplitude eye movements. To examine whether the central bias in day-to-day viewing is related to the viewing distances involved, we here compare eye movements in three tasks (indoor navigation, tea making, and card sorting), each associated with interactions with objects at different viewing distances. Analysis of gaze positions showed a central bias for all three tasks that was independent of the task performed. These results confirm earlier observations of the central bias in mobile eye-tracking data, and suggest that differences in the typical viewing distance during different tasks have little effect on the bias. The results could have interesting technological applications, in which the bias is used to estimate the direction of gaze from head-centered video images, such as those obtained from wearable technology.
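
The kind of analysis the abstract implies can be sketched as the distribution of gaze offsets from the centre of the head-centred video frame, grouped by task. The frame size and column names below are assumptions, not the study's data format.

```python
# Central-bias sketch: distance of each gaze sample from the frame centre.
import numpy as np
import pandas as pd

W, H = 1280, 960                          # assumed scene-camera resolution
gaze = pd.read_csv("gaze_positions.csv")  # assumed columns: x_px, y_px, task

gaze["offset_px"] = np.hypot(gaze["x_px"] - W / 2, gaze["y_px"] - H / 2)
print(gaze.groupby("task")["offset_px"].describe())
```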

19 pages, 20338 KB  
Article
Collecting and Analyzing Eye-Tracking Data in Outdoor Environments
by Karen M. Evans, Robert A. Jacobs, John A. Tarduno and Jeff B. Pelz
J. Eye Mov. Res. 2012, 5(2), 1-19; https://doi.org/10.16910/jemr.5.2.6 - 15 Dec 2012
Cited by 42 | Viewed by 543
Abstract
Natural outdoor conditions pose unique obstacles for researchers, above and beyond those inherent to all mobile eye-tracking research. During analyses of a large set of eye-tracking data collected from geologists examining outdoor scenes, we have found that the nature of calibration, pupil identification, fixation detection, and gaze analysis all require procedures different from those typically used for indoor studies. Here, we discuss each of these challenges and present solutions, which together define a general method useful for investigations relying on outdoor eye-tracking data. We also discuss recommendations for improving the tools that are available, to further increase the accuracy and utility of outdoor eye-tracking data.
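
As context for the fixation-detection challenge the authors raise, the sketch below implements a generic dispersion-threshold (I-DT) detector of the kind that noisy outdoor data often breaks; the thresholds and synthetic demo data are assumptions, and this is not the authors' method.

```python
# Generic I-DT fixation detector (baseline illustration, assumed thresholds).
import numpy as np

def idt_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """x, y in degrees of visual angle, t in seconds.
    Returns (start_index, end_index) pairs of detected fixations (inclusive)."""
    fixations = []
    i, n = 0, len(t)
    while i < n:
        # Grow an initial window spanning at least min_duration.
        j = i
        while j < n - 1 and t[j] - t[i] < min_duration:
            j += 1
        if t[j] - t[i] < min_duration:
            break  # not enough data left for a fixation
        if np.ptp(x[i:j + 1]) + np.ptp(y[i:j + 1]) <= max_dispersion:
            # Extend the window while the dispersion stays under threshold.
            while j < n - 1 and np.ptp(x[i:j + 2]) + np.ptp(y[i:j + 2]) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1
    return fixations

# Synthetic 60 Hz demo: ~1 s of stable gaze followed by a smooth pursuit.
rng = np.random.default_rng(1)
t = np.arange(120) / 60.0
x = np.concatenate([np.full(60, 5.0), np.linspace(5.0, 15.0, 60)])
x = x + 0.05 * rng.standard_normal(120)
y = np.zeros(120)
print(idt_fixations(x, y, t))
```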