Search Results (8)

Search Parameters:
Keywords = object based map
Page = 2

34 pages, 30882 KB  
Article
Intelligent Evaluation Method for Design Education and Comparison Research Between Visualizing Heat-Maps of Class Activation and Eye-Movement
by Jiayi Jia, Tianjiao Zhao, Junyu Yang and Qian Wang
J. Eye Mov. Res. 2024, 17(2), 1-34; https://doi.org/10.16910/jemr.17.2.1 - 10 Oct 2024
Cited by 1 | Viewed by 1731
Abstract
The evaluation of design results plays a crucial role in the development of design. This study presents a design work evaluation system for design education that assists design instructors in conducting objective evaluations. An automatic design evaluation model based on convolutional neural networks (CNNs) has been established, which enables intelligent evaluation of student design works; during the evaluation process, class activation maps (CAMs) are obtained. Simultaneously, an eye-tracking experiment was designed to collect gaze data and generate eye-tracking heat maps. By comparing the heat maps with the CAMs, the study explores the correlation between the evaluators' focus of attention in human design evaluation and the CNN's intelligent evaluation. The experimental results indicate a certain correlation between humans and the CNN in the key points they focus on when conducting an evaluation; however, there are significant differences in background observation. The research results demonstrate that the CNN-based intelligent evaluation model can automatically evaluate product design works and effectively classify and predict design product images, and the comparison shows a correlation between artificial intelligence and subjective human evaluation in evaluation strategy. Introducing artificial intelligence into the field of design evaluation for education has strong potential to promote the development of design education.
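The heat-map comparison described above can be sketched as a pixel-wise Pearson correlation between the CAM and the eye-tracking heat map. This is an illustrative assumption about the comparison metric, not the authors' published code; the function name `heatmap_correlation` is hypothetical.

```python
import numpy as np

def heatmap_correlation(cam: np.ndarray, gaze_map: np.ndarray) -> float:
    """Pearson correlation between two same-sized heat maps.

    Both maps are flattened and compared pixel-wise; values near 1
    suggest the CNN and the human evaluators attend to similar regions.
    """
    if cam.shape != gaze_map.shape:
        raise ValueError("heat maps must have the same shape")
    a = cam.ravel().astype(float)
    b = gaze_map.ravel().astype(float)
    # np.corrcoef returns the 2x2 correlation matrix; take the off-diagonal.
    return float(np.corrcoef(a, b)[0, 1])
```

In practice both maps would first be resampled to a common resolution and normalized before comparison.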

14 pages, 19325 KB  
Article
Investigating Non-Visual Eye Movements Non-Intrusively: Comparing Manual and Automatic Annotation Styles
by Jeremias Stüber, Lina Junctorius and Annette Hohenberger
J. Eye Mov. Res. 2022, 15(2), 1-14; https://doi.org/10.16910/jemr.15.2.1 - 22 Apr 2022
Cited by 1 | Viewed by 358
Abstract
Non-visual eye-movements (NVEMs) are eye movements that do not serve the provision of visual information. As of yet, their cognitive origins and meaning remain under-explored in eye-movement research. The first problem presenting itself in pursuit of their study is one of annotation: by virtue of being non-visual, they are not necessarily bound to a specific surface or object of interest, rendering conventional eye trackers non-ideal for their study. This, however, makes it potentially viable to investigate them without requiring high-resolution data. In this report, we present two approaches to annotating NVEM data: one grid-based, involving manual annotation in ELAN (Max Planck Institute for Psycholinguistics: The Language Archive, 2019), the other Cartesian coordinate-based, derived algorithmically through OpenFace (Baltrušaitis et al., 2018). We evaluated (a) the two approaches in themselves, e.g., in terms of consistency, as well as (b) their compatibility, i.e., the possibilities of mapping one to the other. In the case of (a), we found good overall consistency in both approaches; in the case of (b), there is evidence for the eventual possibility of mapping the OpenFace gaze estimations onto the manual coding grid.
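A minimal sketch of the coordinate-to-grid mapping evaluated in (b), assuming normalized gaze estimates in [-1, 1] and a 3x3 coding grid. Both the coordinate range and the grid layout are illustrative assumptions, not the paper's ELAN coding scheme or OpenFace's actual output format.

```python
def gaze_to_grid(x: float, y: float, rows: int = 3, cols: int = 3) -> tuple:
    """Map a normalized gaze estimate (x, y in [-1, 1]) to a grid cell.

    Returns (row, col) indices into a rows x cols coding grid.
    """
    def to_index(v: float, n: int) -> int:
        v = min(max(v, -1.0), 1.0)     # clamp to the valid range
        u = (v + 1.0) / 2.0            # rescale [-1, 1] -> [0, 1]
        return min(int(u * n), n - 1)  # avoid index == n at v == 1
    return (to_index(y, rows), to_index(x, cols))
```

Centered gaze, `gaze_to_grid(0.0, 0.0)`, lands in the middle cell of the grid.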

2 pages, 39 KB  
Article
From Lab-Based Studies to Eye-Tracking in Virtual and Real Worlds: Conceptual and Methodological Problems and Solutions. Symposium 4 at the 20th European Conference on Eye Movement Research (ECEM) in Alicante, 20.8.2019
by Ignace T. C. Hooge, Roy S. Hessels, Diederick C. Niehorster, Gabriel J. Diaz, Andrew T. Duchowski and Jeff B. Pelz
J. Eye Mov. Res. 2019, 12(7), 1-2; https://doi.org/10.16910/jemr.12.7.8 - 25 Nov 2019
Cited by 6 | Viewed by 361
Abstract
Wearable mobile eye trackers have great potential, as they allow the measurement of eye movements during daily activities such as driving, navigating the world and doing groceries. Although mobile eye trackers have been around for some time, developing and operating these eye trackers was generally a highly technical affair, so mobile eye-tracking research was not feasible for most labs. Nowadays, many mobile eye trackers are available from eye-tracking manufacturers (e.g., Tobii, Pupil Labs, SMI, Ergoneers), and various implementations in virtual/augmented reality have recently been released. The wide availability has caused the number of publications using a mobile eye tracker to increase quickly. Mobile eye tracking is now applied in vision science, educational science, developmental psychology, marketing research (using virtual and real supermarkets), clinical psychology, usability, architecture, medicine, and more. Yet, transitioning from lab-based studies where eye trackers are fixed to the world to studies where eye trackers are fixed to the head presents researchers with a number of problems. These problems range from the conceptual frameworks used in world-fixed and head-fixed eye tracking and how they relate to each other, to the lack of data-quality comparisons and field tests of the different mobile eye trackers, and how the gaze signal can be classified or mapped to the visual stimulus. Such problems need to be addressed in order to understand how world-fixed and head-fixed eye-tracking research can be compared, and to understand the full potential and limits of what mobile eye tracking can deliver. In this symposium, we bring together researchers from five different institutions (Lund University, Utrecht University, Clemson University, Birkbeck University of London and Rochester Institute of Technology) addressing problems and innovative solutions across the entire breadth of mobile eye-tracking research. Hooge, presenting Hessels et al.'s paper, focuses on the definitions of fixations and saccades held by researchers in the eye-movement field and argues how they need to be clarified in order to allow comparisons between world-fixed and head-fixed eye-tracking research. Diaz et al. introduce machine-learning techniques for classifying the gaze signal in mobile eye-tracking contexts where head and body are unrestrained. Niehorster et al. compare data quality of mobile eye trackers during natural behavior and discuss the application range of these eye trackers. Duchowski et al. introduce a method for automatically mapping gaze to faces using computer vision techniques. Pelz et al. employ state-of-the-art techniques to map fixations to objects of interest in the scene video and align grasp and eye-movement data in the same reference frame to investigate the guidance of eye movements during manual interaction.
11 pages, 764 KB  
Article
Automating Areas of Interest Analysis in Mobile Eye Tracking Experiments Based on Machine Learning
by Julian Wolf, Stephan Hess, David Bachmann, Quentin Lohmeyer and Mirko Meboldt
J. Eye Mov. Res. 2018, 11(6), 1-11; https://doi.org/10.16910/jemr.11.6.6 - 10 Dec 2018
Cited by 42 | Viewed by 648
Abstract
For an in-depth, AOI-based analysis of mobile eye tracking data, a preceding gaze-assignment step is inevitable. Current solutions such as manual gaze mapping or marker-based approaches are tedious and not suitable for applications manipulating tangible objects. This makes mobile eye tracking studies with several hours of recording difficult to analyse quantitatively. We introduce a new machine learning-based algorithm, computational Gaze-Object Mapping (cGOM), that automatically maps gaze data onto respective AOIs. cGOM extends state-of-the-art object detection and segmentation by Mask R-CNN with a gaze mapping feature. The new algorithm's performance is validated against a manual fixation-by-fixation mapping, which is considered ground truth, in terms of true positive rate (TPR), true negative rate (TNR) and efficiency. Using only 72 training images with 264 labelled object representations, cGOM is able to reach a TPR of approx. 80% and a TNR of 85% compared to the manual mapping. The break-even point is reached at 2 h of eye tracking recording for the total procedure, or 1 h when considering human working time only. Together with the real-time capability of the mapping process after completed training, even hours of eye tracking recording can be evaluated efficiently. (Code and video examples have been made available at: https://gitlab.ethz.ch/pdz/cgom.git)
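The core gaze-mapping step, checking whether a fixation falls inside a per-object segmentation mask, can be sketched as follows. This is an illustrative simplification of the idea, not the cGOM implementation from the linked repository; the function name and data layout are assumptions.

```python
import numpy as np

def map_fixation_to_aoi(fix_xy, masks):
    """Assign a fixation to the first AOI whose binary mask contains it.

    `masks` maps AOI labels to boolean arrays of shape (H, W), e.g. as
    produced by an instance-segmentation network; `fix_xy` is an (x, y)
    pixel coordinate in the scene-camera frame. Returns the matching
    label, or None when the fixation lands on background.
    """
    x, y = fix_xy
    for label, mask in masks.items():
        h, w = mask.shape
        if 0 <= y < h and 0 <= x < w and mask[y, x]:
            return label
    return None
```

Running this per fixation over a recording yields the AOI sequence that a manual fixation-by-fixation mapping would otherwise have to produce by hand.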

20 pages, 4498 KB  
Article
Digital Sketch Maps and Eye Tracking Statistics as Instruments to Obtain Insights Into Spatial Cognition
by Merve Keskin, Kristien Ooms, Ahmet Ozgur Dogru and Philippe De Maeyer
J. Eye Mov. Res. 2018, 11(3), 1-20; https://doi.org/10.16910/jemr.11.3.4 - 15 Jun 2018
Cited by 17 | Viewed by 523
Abstract
This paper explores map users' cognitive processes in learning, acquiring and remembering information presented via screen maps. In this context, we conducted a mixed-methods user experiment employing digital sketch maps and eye tracking. On the one hand, the performance of the participants was assessed based on the order in which the objects were drawn and the influence of visual variables (e.g., presence and location, size, shape, color). On the other hand, trial durations and eye tracking statistics such as average duration of fixations and number of fixations per second were compared. Moreover, selected AOIs (Areas of Interest) were explored to gain deeper insight into the visual behavior of map users. Depending on the normality of the data, we used either a two-way ANOVA or a Mann-Whitney U test to inspect the significance of the results. Based on the evaluation of the drawing order, we observed that experts and males drew roads first, whereas novices and females focused more on hydrographic objects. According to the assessment of drawn elements, no significant differences emerged between experts and novices, or between females and males, for the retrieval of spatial information presented on 2D maps with a simple design and content. The differences in trial durations between novices and experts were not statistically significant for either studying or drawing. Similarly, no significant difference occurred between female and male participants for either studying or drawing. Eye tracking metrics supported these findings: no significant difference was found in the average duration of fixations, either between experts and novices or between females and males, and likewise none in the mean number of fixations.
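The Mann-Whitney U statistic used for the non-normal comparisons can be computed directly from pairwise comparisons between the two samples. A plain-Python sketch of the statistic only; in practice the p-value would come from a statistics library rather than this toy function.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples.

    Counts, over all (x, y) pairs with x from `a` and y from `b`, how
    often x exceeds y, with ties counting one half. U near the maximum
    len(a) * len(b) (or near 0) indicates strong separation between
    the groups.
    """
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

For two fully separated samples the statistic reaches its maximum, the product of the two sample sizes.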

18 pages, 3030 KB  
Article
GlobalLand30 Mapping Capacity of Land Surface Water in Thessaly, Greece
by Ioannis Manakos, Konstantinos Chatzopoulos-Vouzoglanis, Zisis I. Petrou, Lachezar Filchev and Antonis Apostolakis
Land 2015, 4(1), 1-18; https://doi.org/10.3390/land4010001 - 23 Dec 2014
Cited by 22 | Viewed by 12501
Abstract
The National Geomatics Center of China (NGCC) produced Global Land Cover (GlobalLand30) maps with 30 m spatial resolution for the years 2000 and 2009–2010, responding to the need for harmonized, accurate, and high-resolution global land cover data. This study aims to assess the mapping accuracy of the land surface water layer of GlobalLand30 for 2009–2010. A representative Mediterranean region, situated in Greece, is considered as the case study area, with 2009 as the reference year. The assessment is realized through an object-based comparison of the GlobalLand30 water layer with the ground truth and visually interpreted data from the Hellenic Cadastre fine spatial resolution (0.5 m) orthophoto map layer. GlobCover 2009, GlobCorine 2009, and GLCNMO 2008 corresponding thematic layers are utilized to show and quantify the progress brought along with the increment of the spatial resolution, from 500 m to 300 m and finally to 30 m with the newly produced GlobalLand30 maps. GlobalLand30 detected land surface water areas show a 91.9% overlap with the reference data, while the coarser resolution products are restricted to lower accuracies. Validation is extended to the drainage network elements, i.e., rivers and streams, where GlobalLand30 outperforms the other global map products, as well.
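The reported 91.9% overlap is, in essence, the share of reference water cells that the evaluated layer also flags as water. A toy sketch of that agreement score on flattened rasters; this illustrates the metric only, not the study's object-based GIS workflow, and the function name is an assumption.

```python
def overlap_percentage(detected, reference):
    """Percent of reference water cells also flagged in the detected map.

    Both arguments are equal-length sequences of booleans, e.g. a
    flattened binary water raster; `reference` marks ground-truth
    water cells.
    """
    # Detected values at the positions where the reference says "water".
    ref_true = [d for d, r in zip(detected, reference) if r]
    if not ref_true:
        raise ValueError("reference contains no water cells")
    return 100.0 * sum(ref_true) / len(ref_true)
```

A coarser-resolution product misses more reference cells and therefore scores lower on the same reference raster.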

26 pages, 455 KB  
Article
A Bottom-Up Approach for Automatically Grouping Sensor Data Layers by their Observed Property
by Ben Knoechel, Chih-Yuan Huang and Steve H.L. Liang
ISPRS Int. J. Geo-Inf. 2013, 2(1), 1-26; https://doi.org/10.3390/ijgi2010001 - 30 Jan 2013
Cited by 3 | Viewed by 6789
Abstract
The Sensor Web is a growing phenomenon where an increasing number of sensors are collecting data in the physical world, to be made available over the Internet. To help realize the Sensor Web, the Open Geospatial Consortium (OGC) has developed open standards to standardize the communication protocols for sharing sensor data. Spatial Data Infrastructures (SDIs) are systems that have been developed to access, process, and visualize geospatial data from heterogeneous sources, and SDIs can be designed specifically for the Sensor Web. However, there are problems with interoperability associated with a lack of standardized naming, even with data collected using the same open standard. The objective of this research is to automatically group similar sensor data layers. We propose a methodology to automatically group similar sensor data layers based on the phenomenon they measure. Our methodology is based on a unique bottom-up approach that uses text processing, approximate string matching, and semantic string matching of data layers. We use WordNet as a lexical database to compute word pair similarities and derive a set-based dissimilarity function using those scores. Two approaches are taken to group data layers: mapping is defined between all the data layers, and clustering is performed to group similar data layers. We evaluate the results of our methodology.
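One way to build a set-based dissimilarity from word-pair scores is to match each token of one layer name to its best counterpart in the other set and average symmetrically. The aggregation below and the injected `word_sim` callable are assumptions for illustration; the paper derives its word-pair scores from WordNet, which is replaced here by a toy exact-match similarity so the sketch stays self-contained.

```python
def layer_dissimilarity(words_a, words_b, word_sim):
    """Set-based dissimilarity between two layer-name token sets.

    `word_sim(w1, w2)` is any word-pair similarity in [0, 1]. Each word
    is matched to its best counterpart in the other set, the best-match
    scores are averaged in both directions, and dissimilarity is one
    minus the symmetric average. Returns 0.0 for identical token sets.
    """
    def best_avg(src, dst):
        return sum(max(word_sim(w, v) for v in dst) for w in src) / len(src)
    sim = 0.5 * (best_avg(words_a, words_b) + best_avg(words_b, words_a))
    return 1.0 - sim

# Toy word similarity: 1 for identical tokens, else 0.
exact = lambda a, b: 1.0 if a == b else 0.0
```

With such a dissimilarity in hand, standard clustering over the pairwise matrix groups layers that measure the same phenomenon despite inconsistent naming.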

11 pages, 203 KB  
Article
Objective Measures of Emotion During Virtual Walks through Urban Environments
by Moritz Geiser and Peter Walla
Appl. Sci. 2011, 1(1), 1-11; https://doi.org/10.3390/app1010001 - 1 Jul 2011
Cited by 42 | Viewed by 12159
Abstract
Previous studies were able to demonstrate different verbally stated affective responses to environments. In the present study we used objective measures of emotion. We examined startle reflex modulation as well as changes in heart rate and skin conductance while subjects virtually walked through six different areas of urban Paris using the Street View tool of Google Maps. Unknown to the subjects, these areas were selected based on their median real estate prices. First, we found that price correlated highly with subjective ratings of pleasantness. In addition, relative startle amplitude differed significantly between the areas with the lowest versus highest median real estate price, while no differences in heart rate and skin conductance were found across conditions. We conclude that interaction with environmental scenes does elicit emotional responses which can be objectively measured and quantified. Environments activate motivational and emotional brain circuits, which is in line with the notion of an evolutionarily developed system of environmental preference. Results are discussed in the framework of environmental psychology and aesthetics.