Open Access Article

A Visual Attentive Model for Discovering Patterns in Eye-Tracking Data—A Proposal in Cultural Heritage

1. Dipartimento di Ingegneria Civile, Edile e dell'Architettura, Università Politecnica delle Marche, 60131 Ancona, Italy
2. Dipartimento di Ingegneria dell'Informazione, Università Politecnica delle Marche, 60131 Ancona, Italy
* Author to whom correspondence should be addressed.
Sensors 2020, 20(7), 2101; https://doi.org/10.3390/s20072101
Received: 24 February 2020 / Revised: 27 March 2020 / Accepted: 5 April 2020 / Published: 8 April 2020
(This article belongs to the Special Issue Smart Sensors Data Processing and Visualization in Cultural Heritage)
In the Cultural Heritage (CH) context, art galleries and museums employ technological devices to enhance and personalise the museum visit experience. However, the most challenging aspect is determining what the visitor is interested in. In this work, a novel Visual Attentive Model (VAM) is proposed that is learned from eye-tracking data. In particular, eye-tracking data were collected from adults and children observing five paintings with similar characteristics. The images, selected by CH experts, are the three "Ideal Cities" (Urbino, Baltimore and Berlin), the Inlaid chest in the National Gallery of the Marche, and the wooden panel with a Marche view in the "Studiolo del Duca". These pictures were recognised by experts as having analogous features, thus providing coherent visual stimuli. The proposed method combines a new coordinate representation of eye sequences, obtained using Geometric Algebra, with a deep learning model for automated recognition (to identify, differentiate, or authenticate individuals) of people by the attention focus of their distinctive eye-movement patterns. Experiments comparing five Deep Convolutional Neural Networks (DCNNs) yield high accuracy (more than 80%), demonstrating the effectiveness and suitability of the proposed approach in distinguishing adults from children among museum visitors.
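The pipeline described above first converts each eye-tracking sequence into an image-like representation and then feeds it to a DCNN. As a minimal illustrative sketch of that first step (not the authors' Geometric Algebra encoding; the function name, grid size, and normalisation are hypothetical), a sequence of normalised gaze coordinates can be rasterised into a fixation-density map suitable as CNN input:

```python
import numpy as np

def scanpath_to_image(points, size=64):
    """Rasterise a sequence of normalised gaze points (x, y in [0, 1])
    into a size x size fixation-density map that a CNN could classify."""
    img = np.zeros((size, size), dtype=np.float32)
    for x, y in points:
        # Clamp to the last cell so x == 1.0 or y == 1.0 stays in bounds.
        col = min(int(x * size), size - 1)
        row = min(int(y * size), size - 1)
        img[row, col] += 1.0  # accumulate fixation counts per cell
    if img.max() > 0:
        img /= img.max()  # scale intensities into [0, 1]
    return img

# Example: a short synthetic gaze sequence (two nearby fixations, one revisited spot)
gaze = [(0.1, 0.1), (0.12, 0.11), (0.8, 0.5), (0.8, 0.5)]
img = scanpath_to_image(gaze)
```

In a setup like this, cells visited repeatedly receive higher intensity, so the resulting map encodes where attention concentrated; a batch of such maps could then be passed to any standard DCNN classifier.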
Keywords: eye-tracking; Digital Cultural Heritage; Deep Convolutional Neural Networks
Figure 1

MDPI and ACS Style

Pierdicca, R.; Paolanti, M.; Quattrini, R.; Mameli, M.; Frontoni, E. A Visual Attentive Model for Discovering Patterns in Eye-Tracking Data—A Proposal in Cultural Heritage. Sensors 2020, 20, 2101.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
