Open Access Article
ISPRS Int. J. Geo-Inf. 2018, 7(1), 36; doi:10.3390/ijgi7010036

Framework for Virtual Cognitive Experiment in Virtual Geographic Environments

Zhang, F. 1,2, Hu, M. 2,3, Che, W. 2,3, Lin, H. 2,3,4 and Fang, C. 5,*
1
Institute of Remote Sensing and Geographical Information System, Peking University, Beijing 100871, China
2
Institute of Space and Earth Information Science, the Chinese University of Hong Kong, Hong Kong, China
3
Shenzhen Research Institute, the Chinese University of Hong Kong, Shenzhen 518057, China
4
Department of Geography and Resource Management, the Chinese University of Hong Kong, Hong Kong, China
5
Key Laboratory of Poyang Lake Wetland and Watershed Research, Ministry of Education, Jiangxi Normal University, Nanchang 330022, China
*
Author to whom correspondence should be addressed.
Received: 31 October 2017 / Revised: 29 December 2017 / Accepted: 17 January 2018 / Published: 22 January 2018

Abstract

Virtual Geographic Environment Cognition is the attempt to understand human cognition of surface features, geographic processes, and human behaviour, as well as their relationships, in the real world. From the perspective of analysing and simulating human cognitive behaviour, previous work in Virtual Geographic Environments (VGEs) has focused mostly on representing and simulating the real world to create an 'interpretive' virtual world and improve an individual's active cognition. In terms of reactive cognition, building a user 'evaluative' environment in a complex virtual experiment is a necessary yet challenging task. This paper discusses the outlook of VGEs and proposes a framework for virtual cognitive experiments. The framework not only employs immersive virtual environment technology to create a realistic virtual world but also involves a responsive mechanism to record the user's cognitive activities during the experiment. Based on the framework, this paper presents two potential implementation methods: first, training a deep learning model with several hundred thousand street view images scored by online volunteers, with further analysis of which visual factors produce a sense of safety for the individual; and second, creating an immersive virtual environment and an Electroencephalogram (EEG)-based experimental paradigm to record and analyse the brain activity of a user and to explore what type of virtual environment is more suitable and comfortable. Finally, we present some preliminary findings based on the first method.
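The first method described above, learning which visual factors in street view imagery predict crowd-sourced safety scores, can be sketched in miniature. The sketch below is illustrative only and is not the authors' implementation: it assumes each image has already been reduced to a small vector of visual factors (e.g. pixel fractions of greenery, sky, road, and buildings from a scene-segmentation model, a common stand-in for the deep model's learned features), uses synthetic data in place of the volunteer-scored images, and fits a simple ridge regression to rank the factors by their contribution to perceived safety.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical visual factors per image; names and data are illustrative.
FEATURES = ["greenery", "sky", "road", "building"]
n_images = 1000

X = rng.random((n_images, len(FEATURES)))           # per-image factor values
true_w = np.array([2.0, 1.0, -0.5, -1.5])           # assumed ground-truth effect
scores = X @ true_w + rng.normal(0, 0.1, n_images)  # simulated volunteer scores

# Ridge regression, closed form: w = (X^T X + lam*I)^(-1) X^T y
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(len(FEATURES)), X.T @ scores)

# Rank visual factors by their learned contribution to perceived safety
for name, weight in sorted(zip(FEATURES, w), key=lambda t: -t[1]):
    print(f"{name}: {weight:+.2f}")
```

In the paper's actual setting, the feature extraction itself is learned end-to-end by a deep network trained on the scored images; the linear model here only stands in for the final interpretability step of attributing safety perception to visual factors.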
Keywords: virtual geographic environments; spatial cognition; brain-computer interface; street-level imagery; deep learning
Figures

Figure 1

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Zhang, F.; Hu, M.; Che, W.; Lin, H.; Fang, C. Framework for Virtual Cognitive Experiment in Virtual Geographic Environments. ISPRS Int. J. Geo-Inf. 2018, 7, 36.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

ISPRS Int. J. Geo-Inf. EISSN 2220-9964, published by MDPI AG, Basel, Switzerland.