Article

Exploring the Design of a Mixed-Reality 3D Minimap to Enhance Pedestrian Satisfaction in Urban Exploratory Navigation

Department of Computer Science and Engineering, Waseda University, Tokyo 169-8555, Japan
* Authors to whom correspondence should be addressed.
Future Internet 2022, 14(11), 325; https://doi.org/10.3390/fi14110325
Submission received: 24 October 2022 / Revised: 6 November 2022 / Accepted: 8 November 2022 / Published: 10 November 2022
(This article belongs to the Special Issue Extended Reality (XR) over Wireless Networks)

Abstract

The development of ubiquitous computing technology and the emergence of XR could provide pedestrian navigation with more options for user interfaces and interactions. In this work, we aim to investigate the role of a mixed-reality map interface in urban exploration to enhance pedestrians’ mental satisfaction. We propose a mixed-reality 3D minimap as a part of the navigation interface which pedestrians can refer to and interact with during urban exploration. To further explore the different levels of detail of the map interface, we conducted a user study (n = 28, two groups with two tasks). We designed two exploratory activities as experimental tasks with two map modes (a normal one and a simplified one) to discuss the detailed design of the minimap interface. The results indicated that participants showed a positive attitude toward our method. The simplified map mode could result in a lower perceived workload in both tasks while enhancing performance in specific navigation activities, such as wayfinding. However, we also found that pedestrians’ preference for the level of detail of the minimap interface is dynamic during navigation. Thus, we suggest discussing the different levels of detail further in specific scenarios. Finally, we also summarize some findings observed during the user study to inspire the study of virtual map interfaces for future mixed-reality navigation in urban exploration across various scenarios.

1. Introduction

The pedestrian navigation we use in daily life can provide accurate navigation instructions and support various exploratory activities. Well-designed navigation can encourage people to go outside for outdoor exploration [1] and help the local tourism industry and commerce flourish [2,3,4,5]. Nowadays, research on pedestrian navigation has become increasingly popular in different disciplines. With the development of technology, the user interface of pedestrian navigation is constantly evolving [6]. In particular, immersive technologies such as augmented and mixed reality can display virtual navigation information on a real-world interface [7,8,9,10]. These technologies have constantly changed how pedestrians learn and interact with location-based information, especially in urban exploratory activities. For instance, some studies and applications have applied augmented reality to location-based games and exploratory navigation for entertainment, tourism, and urban exploration [11,12,13,14,15,16,17] to enhance user experience.
However, although mobile augmented and mixed-reality technologies are gradually coming out of the laboratory and into our lives, they have not yet been widely embedded in practical applications. Moreover, as people have become accustomed to accessing information from the Internet via mobile devices, the interactions between humans and their digital devices have become more diverse and complicated. Therefore, the user-interface design and the navigation system’s interactions might affect pedestrians’ experience at different layers. Some research has focused on technology to resolve challenges at the physical-sense and physiological-safety layers, such as GPS technology, route-planning methods, and navigation-instruction visualization; in contrast, the study of mental satisfaction during navigation is not rich [18]. For example, an increasing number of studies on human–computer interaction (HCI) have started to pay attention to the fundamental usability and preferences of user-interface designs for augmented and mixed-reality navigation [19,20,21,22,23,24]. However, in [6], Cartwright et al. suggested that mobile maps not only need to be efficient but should also be affective and emotionally pleasing to produce a positive user experience.
In recent years, research on exploratory navigation has arisen [14,16,25,26,27]. These studies and technologies are applied in entertainment, tourism, and other location-based services, inspiring HCI research on navigation systems. Compared with the conventional interfaces of navigation applications, augmented reality navigation can reduce the cognitive deviation between the 2D map interface and locations in the physical world. It can also display point-of-interest information as virtual marks for pedestrians to find [28,29,30]. These advantages assist pedestrians not only in orienting and navigating but also in exploring the surrounding points of interest. However, although augmented reality technology provides a good visualization experience for navigation guidance, some challenges remain. For instance, off-screen POIs sometimes cannot be identified accurately, so pedestrians fail to grasp holistic spatial information. In addition, improper visualization of overloaded point-of-interest information might also damage the experience and satisfaction of pedestrians in exploratory navigation. Therefore, HCI researchers must investigate how to comprehensively enhance pedestrians’ mental satisfaction through a suitable and comfortable navigation interface.
Based on the above challenges, in this work, we propose a mixed-reality 3D minimap as part of the navigation interface delivered through a mobile head-mounted display to support urban exploratory navigation. We hope the proposed map interface will provide pedestrians with a holistic view of spatial and environmental information and increase interactivity with the virtual content rather than just displaying virtual marks through hand-held augmented reality navigation. We devoted ourselves to exploring the role of the virtual 3D map in future mixed-reality navigation, aiming to enhance pedestrians’ mental satisfaction in urban navigation. To further discuss the effect of the level of detail of the minimap interface on pedestrian satisfaction, we designed two levels of detail for the map modes: a normal one with complete spatial and environmental information and a simplified one that filters out irrelevant information.
According to the literature review of related works, we found that mental satisfaction in pedestrian navigation is affected by various attributes and should be measured comprehensively. In this work, we utilized three factors to measure the mental satisfaction of pedestrians in exploratory navigation: performance in exploratory navigation, NASA-TLX workload, and user experience. Thus, we investigated which of the map modes we designed is more suitable and can enhance pedestrians’ mental satisfaction. To better answer this question, we summarize the research questions as follows:
  • RQ1: Which map mode can result in a better performance in exploratory navigation?
  • RQ2: Which map mode can result in a lower perceived workload in exploratory navigation?
  • RQ3: Which map mode can result in a better user experience in exploratory navigation?
The contributions of this work can be summarized as follows:
(1) Based on the related works, we proposed a 3D minimap interface for mobile mixed-reality navigation to assist with exploratory navigation. We designed two map modes for the experiment to discuss the suitable level of detail of the interface. We found that participants showed a positive attitude toward our map interface. The simplified map mode could result in a lower perceived workload in exploratory navigation and enhance performance in the specific navigation task. However, we also found that, because of the complexity of pedestrians’ motivations and needs, the preference for the level of detail is dynamic in exploratory navigation.
(2) Based on our study, we suggest that researchers in this area investigate the different levels of detail of the map interface in more specific scenarios. In addition, the other factors affecting mental satisfaction should be explored and studied more comprehensively. We also summarized some findings for the future map interface of mobile mixed-reality navigation in urban exploration. In brief, we suggest further discussing the miniature size, levels of spatial-information detail, and map view of the 3D minimap interface in the future. Further research should also focus on the detailed interactions between pedestrians and the virtual information presented by the map interface.
The organization of this article is as follows: First, we introduce the background and intentions of this work. Then, we summarize two aspects of related work in this field to identify the present and future challenges and opportunities for study. Next, we describe our method and introduce the experiment. In the following sections, we present the quantitative and qualitative results from the user study and discuss the results, limitations, and future work. Finally, we summarize this work.

2. Related Work

2.1. Mental Satisfaction in Pedestrian Navigation

Research on pedestrian navigation has a long history. Pedestrians have always relied on map services to acquire information about their surroundings. In [18], Fang et al. categorized pedestrian needs into three layers based on the perspective of Maslow’s theory [31,32,33]: physical sense, physiological safety, and mental satisfaction. Some studies have contributed to the physical-sense layer, such as the visual display methods of navigation, and some works have focused on accurate route planning and security to provide reliable navigation from the perspective of the physiological-safety layer. However, the study of mental satisfaction has been ignored in some cases. Some research has focused on satisfaction during navigation; for instance, there are studies utilizing NASA-TLX [34] and user experience questionnaires [35] to measure perceived workload in pedestrian navigation tasks [36,37,38].
However, pedestrian navigation is a complex process; beyond the effect of environmental factors, the pedestrian is affected by many aspects of the navigation system. Among these aspects, the human–machine interface of navigation is one that should never be ignored [18]. From static paper maps to mobile digital maps, the navigation interface influences user behavior in practical cases. Therefore, to enhance pedestrian satisfaction, researchers should also pay attention to user-interface design in navigation. Mental satisfaction might be affected by the map interface in wayfinding tasks. For example, in some cases, the navigation performance of pedestrians will be affected by failure to reach a location or by missing some points of interest. Some studies on spatial knowledge and information acquisition [39,40,41,42] emphasize the importance of mobile map design. In [43], Montello summarized that the spatial knowledge of places develops in a sequence of three stages or elements: landmark knowledge, route knowledge, and survey knowledge. Survey knowledge means knowledge of two-dimensional layouts and includes the simultaneous interrelations of locations. Spatial knowledge acquired during navigation can help pedestrians establish a cognitive map to perform orientation and navigation confidently.
Allen [40,44] categorized wayfinding tasks into three general types based on their purposes: travel between two familiar places, labeled a commute; travel into unfamiliar territory to learn about the surrounding environment, labeled exploration; and travel from a familiar place of origin to an unfamiliar destination, labeled a quest. He suggested that different types of wayfinding tasks should be distinguished from each other and that the means used to accomplish wayfinding tasks should be analyzed in terms of their constituent cognitive abilities. In urban exploration, the three wayfinding tasks involve different motivations and needs. For example, if a person lives in an urban area and wants to explore a familiar or unfamiliar street in that area, he or she might perform all three navigation tasks mentioned above. However, mainstream navigation systems usually focus on providing the shortest route for pedestrians from one place to another. Accurate instructions of this kind displayed on the navigation interface might occupy pedestrians’ attention so that they cannot form a comprehensive understanding of the real world, which might damage the user experience as well. Moreover, pedestrians sometimes do not need accurate route planning and navigation, which may lead to losing enjoyment and even missing some serendipity along the way; they may prefer a rough direction with roaming and detours. Therefore, it is sometimes necessary to provide pedestrians with a holistic view of the spatial distribution of an urban exploration area and to provide a dynamic map interface accordingly to assist them in exploring more points of interest.

2.2. Novel Interface to Support Exploratory Navigation

As an interface between digital information and the physical world, the digital map always plays an important part in navigation systems and should be designed to be more adaptive to pedestrian needs. The information the map provides might shape and affect relationships between places and people and give pedestrians inspiration for searching and browsing on the way. Conventional navigation systems, such as Google Maps [45] and Apple Maps [46], have benefited daily navigation with efficient orientation and location. However, the main feature of these mainstream navigation applications is providing people with precise turn-by-turn navigation; exploratory features are not widely designed or integrated. Vaittinen and McGookin [14] proposed a radar navigation interface to visualize points of interest and provide general exploratory directional navigation for urban exploration. Sasaki and Yamamoto [13] proposed location-based and object-recognition augmented reality to enable users to efficiently obtain directions to sightseeing spots and nearby facilities within urban tourist areas, along with sightseeing spot information. Some studies have also contributed to designing interesting location-based game mechanisms to engage people in exploratory navigation in urban environments [11,17].
Augmented reality can enhance people’s perception of the real world by displaying additional virtual information. Recent studies on augmented reality navigation can be grouped into car navigation, pedestrian navigation, and indoor navigation according to their scenarios. Most studies on pedestrian augmented reality navigation focus on the visualization of point-of-interest information and turn-by-turn route instructions on hand-held devices [22,47]. In these studies, the map interface usually occupies a split part of the screen to assist recognition of virtual marks of points of interest and digital route instructions. Moreover, the map interface design in these studies is similar to the conventional map view of a navigation system. With the development of GIS data and modeling technology, the 3D digital map plays an important role in navigation, geographic data visualization, urban management, location-based services, etc. Some studies utilized mixed reality to augment the real world via virtual 3D content, such as 3D architecture and buildings, to support cultural heritage sightseeing [48,49]. Unlike simply overlaying virtual labels and signs on the real world through augmented reality, mixed reality is a broader concept that can combine virtual objects and the real world and anchor virtual elements to specific locations in the physical world, making the interactions between humans and virtual content more immersive. With the development of technology in the context of the smart-city environment, mobile mixed reality delivered through head-mounted displays might move outdoors and be applied in pedestrian exploratory navigation to provide a more immersive and interesting experience. However, because the user interface of a head-mounted display is far different from that of 2D mobile devices, the design of the map interface needs to be explored and discussed in detail. For instance, what role does the map interface play in mixed-reality navigation, and how should we integrate the map interface into mixed-reality navigation outdoors?
To our knowledge, the literature on virtual 3D map interface design in mixed-reality navigation is not sufficient. In [50], the authors proposed the concept of a hybrid map in a virtual reality environment and discussed the interaction method in a preliminary study, but the detailed design of such a map interface has not been discussed in depth. In [51], researchers focused on hybrid mobile tools to visualize high-quality and complex 3D geometry in the real environment, and the design of a 3D map interface was not discussed.
Thus, based on the above investigation of related works, we explored 3D map interface design in mixed-reality navigation for urban exploration to enhance mental satisfaction.

3. Mixed-Reality 3D Minimap

In this section, we introduce the design concept of the proposed 3D minimap interface and describe its implementation. The proposed map interface aims to assist pedestrians in exploring a specific destination area in an urban environment. It was designed as an interactable virtual 3D minimap (like a virtual 3D sandbox in one’s hands). Like a paper map held in the hand, the virtual 3D minimap, displayed by a mixed-reality head-mounted display, can be held and manipulated with both hands to locate objects, learn point-of-interest information, explore, and navigate at any time. In Figure 1, we illustrate different map interfaces on different devices: paper maps (Figure 1A), digital maps on smartphones (Figure 1B), and the mixed-reality 3D minimap (Figure 1C). The mixed-reality 3D minimap can be considered to combine the advantages of paper maps and digital maps on smartphones.

3.1. Interface Component

Our method is inspired by the minimap mechanism in video games, which aims to help players orient and locate themselves and grasp their environment and targets efficiently and comprehensively. Several studies have applied the minimap to navigation [52,53,54,55] to help users walk and explore. Thus, we expect that the minimap mechanism could also be applied in future exploratory navigation systems. Unlike traditional 2D orthographic minimaps in video games, our mixed-reality 3D minimap presents 3D spatial and environmental information in a bird’s-eye view, and pedestrians can manipulate the map by zooming, rotating, and transforming it.
As shown in Figure 2, the mixed-reality 3D minimap consists of three layers: a base map layer, a feature layer, and a control layer. In the base map layer, the spatial and environmental information of geographic features, such as continents, lakes, administrative boundaries, streets, cities, and place names, is presented to depict the essential components of a map base. The feature layer contains detailed information about groupings of similar geographic features, such as points of interest, landmarks, buildings, and roads. Finally, the control layer covers the interactions between the pedestrian and the map layers.
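To make the layer composition concrete, the following Python sketch shows one possible data model for the three layers. The class and field names are illustrative assumptions for this article, not the data structures used in our prototype.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PointOfInterest:
    name: str
    category: str                           # e.g., "restaurant", "landmark", "store"
    position: Tuple[float, float, float]    # local 3D coordinates on the minimap

@dataclass
class BaseMapLayer:
    # Essential components of the map base (streets, place names, boundaries, ...).
    streets: List[str] = field(default_factory=list)
    place_names: List[str] = field(default_factory=list)
    boundaries: List[str] = field(default_factory=list)

@dataclass
class FeatureLayer:
    # Detailed groupings of similar geographic features.
    pois: List[PointOfInterest] = field(default_factory=list)
    landmarks: List[PointOfInterest] = field(default_factory=list)

@dataclass
class ControlLayer:
    # State of the interactions between the pedestrian and the map layers.
    zoom: float = 1.0                        # map scale factor
    rotation_deg: float = 0.0                # rotation around the vertical axis
    offset: Tuple[float, float] = (0.0, 0.0)  # translation of the map origin

@dataclass
class Minimap:
    base: BaseMapLayer
    features: FeatureLayer
    controls: ControlLayer
```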
In the rest of this section, we introduce the user interface components.

3.1.1. Point of Interest and Landmark

The points of interest are marked in the feature layer of the minimap interface to support exploration. As in the conventional maps used daily, we set a series of virtual marks to illustrate the points of interest and landmarks in the 3D minimap interface. Three kinds of point-of-interest labels were designed to represent restaurants and food, attractions and landmarks, and stores for shopping. Pedestrians can refer to the point-of-interest information in the map interface to decide whether they would like to visit and explore a point of interest that interests them. Figure 3 shows the visualization of points of interest in the 3D minimap demo used in the user study.

3.1.2. Navigation

In this work, we hope pedestrians can, on some occasions, navigate by referring only to the 3D minimap without route instructions. We summarized the different needs during exploratory navigation and found that pedestrians may not always need precise navigation; sometimes they just follow a rough direction according to the map. Therefore, we hope to explore whether the 3D minimap can help pedestrians recognize the spatial distribution and navigate from an origin point to a destination point without turn-by-turn route instructions, to avoid being over-controlled by the navigation during exploration.
In our method, the navigation in the feature layer is split into two mechanisms. With the first mechanism, we hope pedestrians can explore by referring to the 3D minimap with the self-location component, like using a digital “where am I” map to orient and navigate. With the second mechanism, the pedestrian can follow route instructions displayed as a series of virtual arrows from the origin to the destination point. Figure 4 shows the navigation features in the map interface.

3.1.3. Layer of Control

In an ideal situation, the virtual 3D minimap is projected via a head-mounted display and can be operated with both hands, like using a virtual sandbox, which is different from the interactions with a conventional map interface on a mobile device. The control mechanism in this work focuses on zooming, transforming, and rotating to change the map’s scale, position, and orientation to support exploration. In detail, the pedestrian can refer to the map while walking and exploring, and the map was designed to transform along with the movement of the pedestrian’s location.
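As a rough illustration of how the map can follow the pedestrian, the sketch below recenters minimap-local coordinates on the user’s world position and applies the current zoom and rotation. The coordinate conventions and the function name are assumptions made for illustration, not our prototype code.

```python
import math
from typing import Tuple

def world_to_minimap(world_pos: Tuple[float, float],
                     user_pos: Tuple[float, float],
                     zoom: float,
                     rotation_deg: float) -> Tuple[float, float]:
    """Map a world-space point into minimap-local coordinates centered on the user."""
    dx, dy = world_pos[0] - user_pos[0], world_pos[1] - user_pos[1]
    # Rotate so the minimap heading matches the chosen orientation.
    theta = math.radians(rotation_deg)
    rx = dx * math.cos(theta) - dy * math.sin(theta)
    ry = dx * math.sin(theta) + dy * math.cos(theta)
    # Scale by the zoom factor so the miniature stays hand-sized.
    return rx * zoom, ry * zoom

# Example: a POI 30 m east of the user appears 0.3 map units away at zoom 0.01.
print(world_to_minimap((30.0, 0.0), (0.0, 0.0), zoom=0.01, rotation_deg=0.0))
```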

3.2. Different Levels of Detail of Map Interface

To explore and discuss the impacts resulting from different levels of detail of the map interface, we further propose a normal and a simplified 3D minimap mode. We aimed to use map modes with different levels of detail to investigate the suitable interface for exploratory navigation. Firstly, we conducted a preliminary study as the first phase of this work. Then, based on the literature and the preliminary interview results, we designed the simplified map mode in detail to further explore the impacts of maps with different levels of detail.

3.2.1. Preliminary Study

To investigate the pedestrian’s needs for the map interfaces with different levels of detail, we first conducted a field study with 5 participants (4 females and 1 male, average age = 27.2, SD = 2.54) as a preliminary study in a real street block of the urban environment.
We designed a free exploration task with augmented reality navigation in an assigned urban area. A series of landmarks was marked on the map to assist pedestrians with searching and browsing in the surrounding area. Like the Google Maps application used daily, we designed the default map interface with 2D and 3D default map modes and a 3D aerial map mode generated from global satellite imagery. For the augmented reality function, participants could use augmented reality features to find points of interest by following the virtual anchors in the augmented reality interface corresponding to GPS information. A map view window with the three map modes was added as part of the interface to assist participants in exploration. Participants could switch among the three map modes according to their preferences and needs. The system prototype was developed in Xcode [56], and the map and augmented reality features were supported by MapKit [57] and ARKit [58]. The application used in the experiment was installed on an iPhone 7 running iOS 13.5.
From the interview results, we found that the participants showed different preferences for the three map modes during the exploration task according to their different needs. Some participants mentioned that they preferred using the 3D aerial map to identify the appearances of landmarks, points of interest, and other buildings, which could help them orient and locate themselves in rough directions instead of using explicit navigation. However, some participants also mentioned deficiencies when referring to the 3D aerial map in some cases; for instance, sometimes they did not need such detailed, complicated information. Moreover, some participants said that the point-of-interest information was sometimes overloaded in the augmented reality interface, which affected the navigation experience. For instance, some participants mentioned that they would prefer seeing the more valuable points of interest for exploration rather than presenting all the points of interest on the map interface. Based on the preliminary study results, we learned that when pedestrians walk and explore in an urban area, they prefer following landmarks with distinctive appearances to navigate in a rough direction.

3.2.2. Design of Two Map Modes with Different Levels of Detail

Based on the preliminary study, in Figure 5, we illustrate the designs of the normal 3D map mode and the simplified 3D map mode with different levels of detail. As the figure shows, we simplified the map interface in the base map and feature layers to reduce the complexity resulting from the over-realistic information of the normal 3D map. The normal map consists of all the components of the base map and feature layers, which can be considered a mirrored map of the real world. For the simplified one, we simplify the complicated spatial and environmental information in the base map layer. In the feature layer of the simplified map mode, the valuable points of interest, for instance, restaurants and food, attractions and landmarks, and stores for shopping, are retained, while private buildings and points of interest not intended for visiting and exploration are filtered out. Note that the filtered information in the simplified map mode is represented in the 2D base map. We expect that the simplified 3D minimap can emphasize vital information, which can assist pedestrians in exploring and navigating.
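The sketch below illustrates one way this simplification rule could be expressed: valuable categories are kept as 3D features, and everything else falls back to the flat base map. The category names and the helper function are hypothetical, introduced here only to illustrate the rule, not taken from our prototype.

```python
# Categories considered valuable for exploration (illustrative labels).
VALUABLE_CATEGORIES = {"restaurant_food", "attraction_landmark", "store_shopping"}

def simplify_feature_layer(pois):
    """Split features into 3D-rendered POIs and items flattened into the 2D base map."""
    kept_3d, flattened_2d = [], []
    for poi in pois:
        if poi["category"] in VALUABLE_CATEGORIES:
            kept_3d.append(poi)           # retained as full 3D features
        else:
            flattened_2d.append(poi)      # e.g., private buildings shown only in the 2D base map
    return kept_3d, flattened_2d

# Example usage with two hypothetical entries.
pois = [{"name": "Cafe A", "category": "restaurant_food"},
        {"name": "Private house", "category": "residential"}]
print(simplify_feature_layer(pois))
```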
In the experiment section later, we introduce the design of these two modes during exploratory navigation to discuss the suitable level of detail of the 3D minimap for future usage.

4. Experiment

4.1. Experimental Design

We aimed to utilize virtual reality to simulate outdoor mixed-reality navigation through mobile head-mounted displays, such as HoloLens [59], in real-world scenarios. To better control irrelevant variables in the experiment and to avoid the risks of outdoor activity resulting from COVID-19, we used a virtual reality environment to simulate a mixed-reality scenario for the user study. Moreover, research on mixed-reality 3D minimap design for exploratory navigation is not rich in the literature, and we expect this study to become an inspiration for further study as well. The design of the 3D minimap in this work is still at a preliminary stage, so we made use of the advantages of virtual reality, such as lower cost, variable control, and easier manipulation, to conduct the user study for the mixed-reality application, and we plan to conduct a field study in the real world after the simulation studies. Utilizing virtual reality to simulate augmented reality and mixed-reality studies is common in research [60,61,62,63]. The advantage of this method is that it reduces the cost of outdoor field experiments, and since it is easier to recognize the problems that the design should overcome during testing, it is worthwhile to experience the augmented reality and mixed-reality prototype after virtual reality [64,65].
Figure 6 shows the design concept for participants in the virtual scene with a 3D minimap.
We implemented a virtual urban scene and used a virtual reality head-mounted display to simulate the mixed-reality features. To discuss exploratory navigation comprehensively, we designed two experimental tasks, described in Figure 7. We designed an exploration task and a navigation task to simulate the activities in urban exploratory navigation. After each task, the answers to the questionnaires and interviews were recorded. The virtual urban scene was implemented in Unity 3D [66]. We used open prefab models imported from the Unity Asset Store to implement the Sensō-ji area in Tokyo as a virtual urban street environment, as illustrated in Figure 8. We included detailed buildings, landmarks, streets, plants, facilities, and other environmental information. We used an Oculus Quest 2 [67] to simulate the outdoor mixed-reality scenario.
In the exploration task, we aimed to measure performance in terms of the collected points of interest, perceived workload, and user experience in the two map modes. Each participant was asked to explore an urban street area. We set a series of points of interest in the virtual scene and marked these points of interest in the minimap interface as well. The participants were asked to explore and arrive at these points of interest in the virtual scene according to the minimap. This task aimed to simulate exploration activities such as check-in behavior at a tourism destination. In this task, the participant needed to utilize the virtual minimap for orientation and navigation to collect stars within a session. We also implemented a scoring system to record the number of collected points of interest. The starting points and the series of marked point-of-interest locations in the two map modes were designed differently to avoid the former experiment influencing the latter one. We measured the number of collected points of interest as quantitative data, and we used the subjective NASA-TLX questionnaire [34] and the game user experience questionnaire [68] as qualitative data for later analysis and discussion.
In the navigation task, we aimed to measure navigation efficiency (navigation time in seconds), perceived workload, and user experience in the two map modes. To avoid the spatial memory of the first map mode task affecting the second one, the starting points and destinations in the two modes’ tasks were different, but the distance and the route complexity of the two modes’ navigation tasks were the same. The participant was asked to navigate from the starting point to the destination following the route instructions displayed on the minimap interface. This task was designed to simulate precise wayfinding navigation activity during exploration. We recorded the navigation time and asked for route descriptions from the participants in the two modes, and we used the subjective NASA-TLX questionnaire [34] and the user experience questionnaire [35] as qualitative data for analysis and discussion.

4.2. Participants

We recruited 28 participants (10 females, 18 males) and divided them into two groups for the two tasks (14 participants in each group). The average age was 24.5 years; the range was 22–29 years. Most of them (93%) liked exploring points of interest in urban streets in daily life. Only 21% of the participants had prior virtual reality experience with a headset. All the participants were given a detailed description of the research and a full explanation of the user study procedure. We also obtained their consent for the use of their information in this study.

4.3. Experimental Procedure

First, we gave each participant a detailed introduction to the whole study and experimental procedure. After obtaining personal information and informed consent, each participant practiced the operations with the Oculus Quest 2 in the virtual reality scene. They were taught to use the two controllers to move in the scene and operate the minimap. Then, when the participant was familiar with the motion controls in the virtual reality scene, we instructed them to start the tasks according to the experimental design in Figure 6. The behavioral and subjective measurements were recorded after each task. Moreover, we also asked each participant several questions about their preferences and attitudes toward minimap utilization in a semi-structured interview. All the quantitative data we obtained were analyzed through a non-parametric Wilcoxon signed-rank test [69] at the 0.05 significance level. Figure 9 shows some participants involved in the task, and Figure 10 shows some scenes with minimap interfaces during the experimental task.
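For reference, a paired Wilcoxon signed-rank test of this kind can be run with SciPy as sketched below; the scores are placeholder values for 14 participants, not our measured data.

```python
from scipy.stats import wilcoxon

# Hypothetical per-participant scores for the same measure under the two map modes.
normal_mode     = [55, 48, 62, 70, 51, 66, 59, 61, 47, 58, 64, 53, 60, 57]
simplified_mode = [41, 39, 50, 58, 44, 52, 47, 49, 40, 46, 51, 43, 48, 45]

# Paired, non-parametric comparison of the two conditions.
stat, p_value = wilcoxon(normal_mode, simplified_mode)
print(f"W = {stat}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```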

4.4. Results

4.4.1. Quantitative Results

  • Results of Exploration Task
Figure 11, Figure 12 and Figure 13 show the behavioral and subjective measurement results of the exploration task.
Figure 11 shows the number of points of interest collected in the exploration task. The number of collected points of interest was significantly (p < 0.005) higher in the simplified mode. This means that the participants could reach and collect many more points of interest via the simplified-mode virtual minimap. In the interviews, participants also mentioned that the simplified mode could help them navigate to the points of interest efficiently, so they could perform better in collecting points of interest in the simplified mode.
The NASA-TLX questionnaire was administered immediately after each exploration trial, and the overall score and every independent component score were calculated [34]. Figure 12 shows the mean values of the overall NASA-TLX scores. Participants rated their overall workload significantly (p < 0.005) lower in the simplified mode, which indicates that the simplified mode resulted in a lower overall workload score in the exploration task than the normal mode. However, as for the mean values of the independent component scores in the two map modes in Figure 13, participants only rated the mental demand score of the simplified mode statistically significantly (p < 0.05) lower than that of the normal mode. Although the mean values of all the other independent component scores were lower in the simplified mode than in the normal mode in Figure 13, the differences were not significant (p > 0.05). These results mean that the simplified map mode resulted in lower overall workload and mental demand in the exploration task.
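As a reminder of how such scores are typically derived, the sketch below computes an overall NASA-TLX score as the mean of the six 0–100 subscale ratings (the unweighted “raw TLX” variant); whether a weighted or raw scheme was used here is an assumption, and the ratings shown are hypothetical.

```python
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings: dict) -> float:
    """Overall raw NASA-TLX score: mean of the six 0-100 subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Hypothetical ratings from one exploration trial.
trial = {"mental": 60, "physical": 20, "temporal": 35,
         "performance": 30, "effort": 45, "frustration": 25}
print(raw_tlx(trial))  # -> 35.83...
```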
  • Results of Navigation Task
Figure 14, Figure 15 and Figure 16 show the behavioral and subjective measurement results of the navigation task.
As shown in Figure 14, in the navigation task, the navigation time in the normal mode was significantly (p < 0.01) higher than in the simplified mode, which means that the simplified mode could assist participants in finding destinations more efficiently; thus, the participants performed better in the simplified mode.
The NASA-TLX questionnaire was administered immediately after each navigation trial. Figure 15 shows that the mean value of the overall workload score was significantly (p < 0.05) lower in the simplified mode, and Figure 16 shows that participants rated the mental demand and effort components significantly (p < 0.05) lower for the simplified mode than for the normal mode. These results mean that the simplified map mode resulted in lower overall workload, mental demand, and effort in the navigation task.

4.4.2. Qualitative Results

The overall user experience preferences can be seen in Figure 17, Figure 18 and Figure 19. In the exploration task, we used the game user experience questionnaire (GUEQ) [68] to measure the participants’ preferences because we considered exploratory navigation to be similar to an exploratory game. Therefore, we selected four 5-point-scale questions related to positive exploration experience and four related to negative experience. All the questions are illustrated in Figure 17 and Figure 18. In brief, the figures show that each positive game-user-experience question was rated higher in the simplified mode than in the normal mode.
As for the negative game user experience items, the participants gave lower scores for the simplified mode, meaning the negative experience during exploration was milder than in the normal mode.
We used four questions from the user experience questionnaire (UEQ) [35] to evaluate the participants’ preferences in the navigation task. The mean value of each item rated on the 7-point scale is illustrated in Figure 19. The overall preference of users in the navigation task was for the simplified mode.
In the semi-structured interview of the user preference questionnaire, we asked every participant when and where they preferred to use each mode of the minimap. The answers are summarized in Figure 20.
Based on the user experience questionnaire and interview results, most participants showed a preference for and positive attitudes toward the simplified 3D minimap, which could assist them in completing the tasks. For example, participant 4 said he sometimes preferred to use the simplified mode to assist with locating places in a complex urban context without using turn-by-turn navigation instructions. Additionally, participant 6 gave a scenario in which she preferred using the simplified 3D minimap to find a specific destination with a rough direction, without navigation instructions, because she did not want to miss any points of interest by being over-controlled by the navigation system.

5. Discussion

In this section, we present a general discussion based on the results.

5.1. Performance in Exploratory Navigation

In this work, we used point-of-interest exploration and route navigation to simulate different activities in exploratory navigation. Therefore, we designed two experimental tasks and utilized behavioral measurements to evaluate performance in them. In the exploration task, we hoped to assist pedestrians in exploring more points of interest via our mixed-reality 3D minimap. The results show that the mean value of the collected points of interest was higher in the simplified map mode session, and the number of collected points of interest was significantly (p < 0.005) higher via the simplified map. However, in the interviews after the tasks, some participants mentioned that the difference between the normal mode and the simplified mode for browsing the valuable points of interest was not large, and that the significant difference in collected numbers probably resulted from the higher wayfinding efficiency in the exploration task via the simplified map mode. We consider this to be because the mixed-reality 3D minimap could give participants a precise distribution of points of interest from a bird’s-eye view, making it easier to browse many points of interest and improving the wayfinding efficiency in the exploration task. This is also consistent with the results of the navigation task below.
In the navigation task, unlike the free walking and self-navigation in the exploration task, we set route navigation instructions between two locations. The results showed that the navigation time via the simplified mode indicated more efficient navigation to a statistically significant degree (p < 0.01). We suppose that the simplified 3D minimap mode provided clearer route information related to landmarks, buildings, and other points of interest. From the interviews, we also found that the participants thought the simplified 3D map mode was more efficient for orienting and locating, which helped them accomplish precise route navigation.
Based on the results of the two tasks, for Research Question 1, though the results in both tasks were significant, we cannot conclude that the simplified mixed-reality 3D minimap mode results in better performance in exploratory navigation in general. We can suppose that, in this work, the simplified mixed-reality 3D minimap mode resulted in better efficiency in the specific navigation task. Therefore, we suggest that the simplification method of the proposed 3D minimap be studied further in more scenarios. For instance, we only discussed the difference between valuable and irrelevant points of interest in this work; we believe the results would differ with other simplification criteria in different scenarios. In addition, different navigation scenarios should be considered as well. For example, in our work, the simplified minimap improved navigation efficiency in both the exploration (self-location without turn-by-turn wayfinding instructions) and navigation (assigned turn-by-turn navigation) tasks. Thus, researchers can investigate interfaces that are suitable for different navigation methods in the future.
Based on our work, we suppose that the mixed-reality minimap has the potential to assist pedestrians in specific scenarios, such as understanding the holistic spatial distribution and point-of-interest information from a bird’s-eye view. However, whether these benefits are required all the time in all urban exploratory navigation scenarios still needs to be discussed. For instance, a dynamic map interface might satisfy the different needs of pedestrians during urban exploration.

5.2. Perceived Workload

For the NASA-TLX questionnaire results on perceived workload, we found that in both tasks, the overall workload score via the simplified 3D landmark-based minimap was significantly (p < 0.05) lower than that of the normal 3D minimap. However, for the independent component in the exploration task, only the p-value of the mental demand scores of the two modes was significant (p < 0.05). In the navigation task, the mental demand (p < 0.05) and effort (p < 0.05) differences between the two mode maps were statistically significant. We think this is because the self-location and wayfinding in both tasks require higher mental workloads for thinking, deciding, remembering, looking, and searching. The navigation task might require more effort than the free walking in the exploration task. This point was also verified in the interview session.
We also considered why the score differences between the two modes were not significant for the other independent components. We suppose that our experimental tasks in the virtual reality environment were not very difficult compared to a real-world experiment. Therefore, the independent components of physical demand, temporal demand, performance, and frustration showed no significant difference between the two modes. Consequently, we plan to optimize our method and experiment in a more immersive way to simulate a mixed-reality environment and will conduct an outdoor field experiment to explore every independent component of the NASA-TLX workload.
Based on the above discussion, we can answer Research Question 2 and conclude that the simplified 3D minimap mode decreases the overall workload and the mental demand during exploratory navigation.

5.3. User Experience

Overall, the participants gave higher scores on the positive items for the simplified mode, but not all the user experience scores in the GUEQ and UEQ were statistically significantly different.
From the interview results, we found that the better user experience also benefited from the better performance in navigation. The simplified minimap could help many pedestrians complete the tasks efficiently. However, for question two (Q2) of the GUEQ in the exploration task, the participants gave a higher score to the normal map mode. In other words, most participants considered that they could explore more things with the normal map mode. Some participants mentioned that the task with the simplified map mode clearly showed the target distribution and route information, so sometimes they only followed the route to arrive at the target. On the contrary, in the task with the normal map mode, they needed to recognize more complicated spatial information, so they paid more attention to every point of interest surrounding them. Another possible reason we found in the interviews was that one participant mentioned that he preferred the complete information for exploration and the simplified mode for precise navigation: the simplified map looks like a tool, whereas the mirrored normal mode looks like a map in a treasure-hunting game.
Therefore, for Research Question 3, we cannot conclude that the simplified 3D minimap mode results in a better user experience in exploratory navigation. The user experience during exploratory navigation is also affected by dynamic pedestrian needs and motivations. Thus, the suitable map interface should be chosen based on the navigation needs during urban exploration.

5.4. Other Findings

We also obtained other findings from the experiment. For instance, we found it interesting that none of the participants referred to the minimap all the time during the navigation tasks in either mode. During the interviews, we asked them about this, and they explained that the minimap looks like a sandbox from a bird’s-eye view, which could give them a brief perspective of the surrounding environment, so they could establish spatial cognition relatively comprehensively in a short time. We plan to modify our minimap interface so that it can be opened and hidden according to participants’ preferences. Another interesting finding is that, when asked about possible uses of this concept in the future, more than one participant mentioned in the interview that the simplified minimap could also be used during running, jogging, bicycling, and other outdoor sports in a specific area of the urban environment. This also inspires us to discuss more specific uses for navigation with this system. We also noticed that some participants mentioned that the minimap could follow the motion of the head and gaze, which inspired us to consider multi-modal interaction in the future. In addition, the participants also mentioned that the normal 3D map interface works well in some cases of navigation. For example, participants said that if the minimap could offer a detailed image of a building from a 3D aerial view, it could help them recognize and identify certain destinations in more complicated environments, for example, an area with many buildings that have similar appearances and complicated architectural structures. Moreover, one participant suggested that the normal 3D minimap with detailed environmental information could also include the indoor structural information of landmarks and buildings, inspiring us to explore more specific applications of mirrored maps and aerial map modes. Additionally, some participants mentioned that a 3D virtual minimap could also be applied in underground environments where GPS might not perform well.

5.5. Limitations and Future Work

Our method and experiments had limitations, which encourage us to modify our design and conduct more user studies in the future. Firstly, we utilized the virtual reality method to perform the outdoor mixed-reality experiment. However, the virtual reality scene we implemented for the experiments was not mirrored and realistic enough to fully simulate the physical environment, which may have affected our results. Thus, we will implement a more realistic setting for outdoor mixed-reality experiments and eventually conduct field experiments in the actual outdoor environment to improve the reliability of the results. Secondly, we did not consider classifying participants by different attributes in the experiments, such as age, gender, and profession. Moreover, some participants also suggested that they expected to interact with the minimap during navigation in more diverse ways, which inspires us to consider navigation interactions in mixed-reality devices to provide more interaction with digital information during navigation.

6. Conclusions

This work aimed to explore the roles and suitable design of a mixed-reality 3D map interface to enhance pedestrians’ mental satisfaction in urban exploration.
We designed two experiments to measure the performance of exploratory navigation, perceived workload, and user experience with different modes of the minimap. According to the experiment results, overall, participants had a positive attitude toward the mixed-reality 3D minimap concept. We found that the simplified minimap mode can result in lower workload and mental demand, and better performance in specific navigation tasks. However, we could not fully conclude which map mode is suitable for enhancing satisfaction directly because the users’ preferences in minimap mode were dynamic.
Therefore, we summarize future directions that may inspire further studies in this research area. Firstly, what kinds of spatial information should be simplified still needs more discussion. Additionally, the simplified mode we proposed filters out irrelevant information and retains the 2D base map layer. However, some participants mentioned that this would damage the spatial structure and distribution, which also inspires us to weaken the details of some irrelevant POIs and retain their 3D features, such as size and location, instead of filtering them directly into the 2D view. In addition, from the interviews, we found that other than walking during exploratory navigation in urban streets, the 3D minimap interface could also be used in theme parks and wild forest parks for exploration. Therefore, we advocate further research to discuss more specific uses of mixed-reality maps. Additionally, researchers could focus on combining this kind of map interface with location-based augmented reality and mixed-reality games in the real world to make pedestrians become players who explore the real world. Finally, the experimental design in this work did not involve different groups of participants. Therefore, researchers in this area can discuss these issues further.
In the future, we will continue to optimize our methods and experiments to investigate more design insights for map interfaces in mixed-reality navigation. We hope this study inspires other researchers in this area to contribute to designing map interfaces for pedestrian navigation and enrich future navigation design.

Author Contributions

Writing—original draft, Y.Z. and T.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by JST SPRING, grant number JPMJSP2128.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data collected during this research is presented in full in this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Anderson, Z.; Jones, M.D. Mobile computing and well-being in the outdoors. In Proceedings of the Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, London, UK, 9–13 September 2019; pp. 1154–1157. [Google Scholar]
  2. Bairner, A. Urban walking and the pedagogies of the street. Sport. Educ. Soc. 2011, 16, 371–384. [Google Scholar] [CrossRef]
  3. Holton, M. Walking with technology: Understanding mobility-technology assemblages. Mobilities 2019, 14, 435–451. [Google Scholar] [CrossRef]
  4. Kim, D.; Kim, S. The Role of Mobile Technology in Tourism: Patents, Articles, News, and Mobile Tour App Reviews. Sustainability 2017, 9, 2082. [Google Scholar] [CrossRef] [Green Version]
  5. Richardson, I.; Wilken, R. Haptic vision, footwork, place-making: A peripatetic phenomenology of the mobile phone pedestrian. Second. Nature: Int. J. Creat. Media 2009, 1, 22–41. [Google Scholar]
  6. Cartwright, W.; Peterson, M.; Gartner, G.; Reichenbacher, T. Adaptation in mobile and ubiquitous cartography. In Multimedia Cartography; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  7. Nayyar, A.; Mahapatra, B.; Le, D.-N.; Suseendran, G. Virtual Reality (VR) & Augmented Reality (AR) technologies for tourism and hospitality industry. Int. J. Eng. Technol. 2018, 7, 11858. [Google Scholar] [CrossRef]
  8. Rokhsaritalemi, S.; Sadeghi-Niaraki, A.; Kang, H.-S.; Lee, J.-W.; Choi, S.-M. Ubiquitous Tourist System Based on Multicriteria Decision Making and Augmented Reality. Appl. Sci. 2022, 12, 5241. [Google Scholar] [CrossRef]
  9. Yung, R.; Khoo-Lattimore, C. New realities: A systematic literature review on virtual reality and augmented reality in tourism research. Curr. Issues Tour. 2017, 22, 2056–2081. [Google Scholar] [CrossRef] [Green Version]
  10. Moro, S.; Rita, P.; Ramos, P.; Esmerado, J. Analysing recent augmented and virtual reality developments in tourism. J. Hosp. Tour. Technol. 2019, 10, 571–586. [Google Scholar] [CrossRef] [Green Version]
  11. Nobrega, R.; Jacob, J.; Coelho, A.; Weber, J.; Ribeiro, J.; Ferreira, S. Mobile location-based augmented reality applications for urban tourism storytelling. In 24º Encontro Português de Computação Gráfica e Interação (EPCGI); IEEE: Piscataway, NJ, USA, 2017; pp. 1–8. [Google Scholar]
  12. Yang, C.-C.; Sia, W.; Tseng, Y.-C.; Chiu, J.-C. Gamification of Learning in Tourism Industry: A case study of Pokémon Go. In Proceedings of the ACM International Conference Proceeding Series, Tokyo, Japan, 25–28 November 2018; Association for Computing Machinery: New York, NY, USA; pp. 191–195. [Google Scholar]
  13. Sasaki, R.; Yamamoto, K. A Sightseeing Support System Using Augmented Reality and Pictograms within Urban Tourist Areas in Japan. Isprs Int. J. Geo-Inf. 2019, 8, 381. [Google Scholar] [CrossRef] [Green Version]
  14. Vaittinen, T.; McGookin, D. Uncover: Supporting city exploration with egocentric visualizations of location-based content. Pers. Ubiquitous Comput. 2018, 22, 807–824. [Google Scholar] [CrossRef] [Green Version]
  15. Paavilainen, J.; Korhonen, H.; Alha, K.; Stenros, J.; Koskinen, E.; Mayra, F. The Pokémon GO Experience. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 2493–2498. [Google Scholar]
  16. Zhou, H.; Edrah, A.; MacKay, B.; Reilly, D. Block Party: Synchronized Planning and Navigation Views for Neighbourhood Expeditions. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 1702–1713. [Google Scholar]
  17. Papangelis, K.; Metzger, M.; Sheng, Y.; Liang, H.-N.; Chamberlain, A.; Cao, T. Conquering the City: Understanding perceptions of Mobility and Human Territoriality in Location-based Mobile Games. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2017, 1, 90. [Google Scholar] [CrossRef]
  18. Fang, Z.; Li, Q.; Shaw, S.-L. What about people in pedestrian navigation? Geo-Spat. Inf. Sci. 2016, 18, 135–150. [Google Scholar] [CrossRef] [Green Version]
  19. Lee, J.; Jin, F.; Kim, Y.; Lindlbauer, D. User Preference for Navigation Instructions in Mixed Reality. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Christchurch, New Zealand, 12–16 March 2022; pp. 802–811. [Google Scholar]
  20. Thi Minh Tran, T.; Parker, C. Designing Exocentric Pedestrian Navigation for AR Head Mounted Displays. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–8. [Google Scholar]
  21. Bujari, A.; Gaggi, O.; Palazzi, C.E. A mobile sensing and visualization platform for environmental data. Pervasive Mob. Comput. 2020, 66, 101204. [Google Scholar] [CrossRef]
  22. Schinke, T.; Henze, N.; Boll, S. Visualization of off-screen objects in mobile augmented reality. In Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, Lisbon, Portugal, 7–10 September 2010; pp. 313–316. [Google Scholar]
  23. Brata, K.C.; Liang, D. An effective approach to develop location-based augmented reality information support. Int. J. Electr. Comput. Eng. (IJECE) 2019, 9, 9. [Google Scholar] [CrossRef]
  24. Ruta, M.; Scioscia, F.; De Filippis, D.; Ieva, S.; Binetti, M.; Di Sciascio, E. A Semantic-enhanced Augmented Reality Tool for OpenStreetMap POI Discovery. Transp. Res. Procedia 2014, 3, 479–488. [Google Scholar] [CrossRef] [Green Version]
25. Jylhä, A.; Hsieh, Y.-T.; Orso, V.; Andolina, S.; Gamberini, L.; Jacucci, G. A wearable multimodal interface for exploring urban points of interest. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, Seattle, WA, USA, 9–13 November 2015; pp. 175–182. [Google Scholar]
  26. Viswanathan, S.; Boulard, C.; Grasso, A.M. Ageing Clouds. In Proceedings of the Companion Publication of the 2019 on Designing Interactive Systems Conference 2019 Companion, San Diego, CA, USA, 24–28 June 2019; pp. 313–317. [Google Scholar]
27. Besharat, J.; Komninos, A.; Papadimitriou, G.; Lagiou, E.; Garofalakis, J. Augmented paper maps: Design of POI markers and effects on group navigation. J. Ambient Intell. Smart Environ. 2016, 8, 515–530. [Google Scholar] [CrossRef] [Green Version]
28. Lobo, M.-J.; Christophe, S. Opportunities and challenges for Augmented Reality situated geographical visualization. In Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Nice, France, 31 August–2 September 2020; Volume V-4-2020. [Google Scholar] [CrossRef]
  29. Lee, L.-H.; Braud, T.; Hosio, S.; Hui, P. Towards Augmented Reality-driven Human-City Interaction: Current Research on Mobile Headset and Future Challenges. ACM Comput. Surv. 2021, 54, 1–38. [Google Scholar] [CrossRef]
30. Yagol, P.; Ramos, F.; Trilles, S.; Torres-Sospedra, J.; Perales, F. New Trends in Using Augmented Reality Apps for Smart City Contexts. ISPRS Int. J. Geo-Inf. 2018, 7, 478. [Google Scholar] [CrossRef] [Green Version]
  31. Maslow, A.H. A theory of human motivation. Psychol. Rev. 1943, 50, 370–396. [Google Scholar] [CrossRef] [Green Version]
32. Maslow, A.H. Motivation and Personality; Harper & Row: New York, NY, USA, 1954. [Google Scholar]
  33. Maslow, A.H. Toward a Psychology of Being, 2nd ed.; D. Van Nostrand: Oxford, UK, 1968; p. xiii, 240. [Google Scholar]
34. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar]
35. Schrepp, M. User Experience Questionnaire Handbook: All You Need to Know to Apply the UEQ Successfully in Your Project. 2015. Available online: https://www.ueq-online.org/Material/Handbook.pdf (accessed on 23 October 2022). [CrossRef]
  36. Rehrl, K.; Häusler, E.; Leitinger, S.; Bell, D. Pedestrian navigation with augmented reality, voice and digital map: Final results from an in situ field study assessing performance and user experience. J. Locat. Based Serv. 2014, 8, 75–96. [Google Scholar] [CrossRef]
  37. Perebner, M.; Huang, H.; Gartner, G. Applying user-centred design for smartwatch-based pedestrian navigation system. J. Locat. Based Serv. 2019, 13, 213–237. [Google Scholar] [CrossRef] [Green Version]
  38. Huang, H.; Mathis, T.; Weibel, R. Choose your own route—Supporting pedestrian navigation without restricting the user to a predefined route. Cartogr. Geogr. Inf. Sci. 2021, 49, 95–114. [Google Scholar] [CrossRef]
  39. Ahmadpoor, N.; Shahab, S. Spatial Knowledge Acquisition in the Process of Navigation: A Review. Curr. Urban Stud. 2019, 7, 1–19. [Google Scholar] [CrossRef] [Green Version]
  40. Allen, G.L. Cognitive abilities in the service of wayfinding: A functional approach. Prof. Geogr. 1999, 51, 555–561. [Google Scholar] [CrossRef]
  41. Vaez, S.; Burke, M.; Alizadeh, T. Urban Form and Wayfinding: Review of Cognitive and Spatial Knowledge for Individuals’ Navigation. 2017. Available online: https://research-repository.griffith.edu.au/bitstream/handle/10072/124212/VaezPUB1830.pdf?sequence=1&isAllowed=y (accessed on 23 October 2022).
  42. Li, C. User preferences, information transactions and location-based services: A study of urban pedestrian wayfinding. Comput. Environ. Urban Syst. 2006, 30, 726–740. [Google Scholar] [CrossRef]
43. Montello, D.R. Spatial cognition. In International Encyclopedia of the Social and Behavioral Sciences; Smelser, N.J., Baltes, P.B., Eds.; Elsevier: Amsterdam, The Netherlands, 2001; pp. 14771–14777. [Google Scholar]
  44. Allen, G.L. Spatial abilities, cognitive maps, and wayfinding. In Wayfinding Behavior: Cognitive Mapping and Other Spatial Processes; The Johns Hopkins University Press: Baltimore, MD, USA, 1999; pp. 46–80. [Google Scholar]
  45. Google AR. Available online: https://arvr.google.com/ar (accessed on 23 October 2022).
  46. Apple Map. Available online: https://www.apple.com/maps/ (accessed on 23 October 2022).
47. Carmo, M.; Afonso, A.; Ferreira, A.; Cláudio, A.P.; Silva, G. PoI Awareness, Relevance and Aggregation for Augmented Reality. In Proceedings of the 2016 20th International Conference Information Visualisation (IV), Lisbon, Portugal, 19–22 July 2016; pp. 300–305. [Google Scholar]
  48. Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A Survey of Augmented, Virtual, and Mixed Reality for Cultural Heritage. J. Comput. Cult. Herit. 2018, 11, 1–36. [Google Scholar] [CrossRef]
  49. Bekele, M.K. Walkable Mixed Reality Map as interaction interface for Virtual Heritage. Digit. Appl. Archaeol. Cult. Herit. 2019, 15, e00127. [Google Scholar] [CrossRef]
50. Boutsi, A.; Ioannidis, C.; Soile, S. Hybrid mobile augmented reality: Web-like concepts applied to high resolution 3D overlays. In Proceedings of the ISPRS International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2019; Volume XLII-2/W17, pp. 85–92. [Google Scholar] [CrossRef] [Green Version]
  51. Boustila, S.; Ozkan, M.; Bechmann, D. Interactions with a Hybrid Map for Navigation Information Visualization in Virtual Reality. In Proceedings of the 2020 Conference on Interactive Surfaces and Spaces, Wellington, New Zealand, 20–23 November 2020; pp. 69–72. [Google Scholar]
  52. Khan, N.; Rahman, A.U. Rethinking the Mini-Map: A Navigational Aid to Support Spatial Learning in Urban Game Environments. Int. J. Hum. Comput. Interact. 2017, 34, 1135–1147. [Google Scholar] [CrossRef]
  53. Toups, Z.; Lalone, N.; Alharthi, S.; Sharma, H.; Webb, A. Making Maps Available for Play: Analyzing the Design of Game Cartography Interfaces. ACM Trans. Comput. Hum. Interact. 2019, 26, 1–43. [Google Scholar] [CrossRef]
  54. Zagata, K.; Gulij, J.; Halik, Ł.; Medyńska-Gulij, B. Mini-Map for Gamers Who Walk and Teleport in a Virtual Stronghold. ISPRS Int. J. Geo-Inf. 2021, 10, 96. [Google Scholar] [CrossRef]
  55. Toups, Z.; LaLone, N.; Spiel, K.; Hamilton, B. Paper to Pixels: A Chronicle of Map Interfaces in Games. In Proceedings of the Designing Interactive Systems, New York, NY, USA, 17–19 August 2020; pp. 1433–1451. [Google Scholar]
  56. Xcode. Available online: https://developer.apple.com/xcode/ (accessed on 23 October 2022).
  57. MapKit. Available online: https://developer.apple.com/documentation/mapkit/ (accessed on 23 October 2022).
  58. ARKit. Available online: https://developer.apple.com/jp/augmented-reality/arkit/ (accessed on 23 October 2022).
  59. Hololens. Available online: https://www.microsoft.com/ja-jp/hololens (accessed on 23 October 2022).
60. König, S.U.; Keshava, A.; Clay, V.; Rittershofer, K.; Kuske, N.; König, P. Embodied Spatial Knowledge Acquisition in Immersive Virtual Reality: Comparison to Map Exploration. Front. Virtual Real. 2021, 2, 625548. [Google Scholar] [CrossRef]
  61. Feng, Y.; Duives, D.; Daamen, W.; Hoogendoorn, S. Data collection methods for studying pedestrian behaviour: A systematic review. Build. Environ. 2021, 187, 107329. [Google Scholar] [CrossRef]
62. Stähli, L.; Giannopoulos, I.; Raubal, M. Evaluation of pedestrian navigation in Smart Cities. Environ. Plan. B Urban Anal. City Sci. 2021, 48, 1728–1745. [Google Scholar] [CrossRef]
63. Török, Z.G.; Török, A.; Tölgyesi, B.; Kiss, V. The virtual tourist: Cognitive strategies and differences in navigation and map use while exploring an imaginary city. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences-ISPRS Archives, Casablanca, Morocco, 10–11 October 2018; pp. 703–709. [Google Scholar] [CrossRef]
  64. Gushima, K.; Nakajima, T. Virtual Fieldwork: Designing Augmented Reality Applications Using Virtual Reality Worlds; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2021; Volume 12770, pp. 417–430. [Google Scholar]
  65. Gushima, K.; Nakajima, T. A Scenario Experience Method with Virtual Reality Technologies for Designing Mixed Reality Services. In Proceedings of the 2020 16th International Conference on Intelligent Environments (IE), New York, NY, USA, 20–23 July 2020; pp. 122–125. [Google Scholar]
  66. Unity 3D. Available online: https://unity.com/ja (accessed on 23 October 2022).
  67. Oculus Quest 2. Available online: https://store.facebook.com/jp/quest/products/quest-2/ (accessed on 23 October 2022).
68. IJsselsteijn, W.A.; De Kort, Y.A.; Poels, K. The Game Experience Questionnaire; Technische Universiteit Eindhoven: Eindhoven, The Netherlands, 2013. Available online: https://pure.tue.nl/ws/files/21666907/Game_Experience_Questionnaire_English.pdf (accessed on 23 October 2022).
  69. Wilcoxon, F. Individual comparisons by ranking methods. In Breakthroughs in Statistics; Springer: Berlin/Heidelberg, Germany, 1992; pp. 196–202. [Google Scholar]
Figure 1. Illustration of three different navigation map modes. (A) illustrates a conventional paper map. (B) illustrates a digital map on a smartphone. (C) illustrates the mixed-reality 3D minimap viewed through a mobile mixed-reality head-mounted display.
Figure 2. User interface model.
Figure 3. Visualization of points of interest in the 3D minimap demo used in the user study.
Figure 4. Navigation features in the map interface.
Figure 5. (Left): normal mixed-reality 3D minimap design; (Right): simplified mixed-reality 3D minimap design.
Figure 6. Implementation description.
Figure 7. Experimental design.
Figure 8. This figure shows the virtual urban scene we implemented for the user study, Senso-Ji Temple in Tokyo, Japan. (A) shows the overall view of Senso-Ji with various points of interest. (B) shows one of the landmarks in Senso-Ji, a Torii gate. (C) shows several famous places of interest with a long history in Senso-Ji. (D) shows the whole virtual scene of Senso-Ji from an orthographic projection perspective.
Figure 9. Some participants performing the task.
Figure 10. Virtual scenes and minimap interfaces in the task.
Figure 11. Box plot of collected points of interest in an exploration task with two map modes.
Figure 12. Box plot of overall workload scores in an exploration task with two map modes.
Figure 13. Box plot of scores of each independent component of NASA-TLX in the exploration task with two map modes.
Figure 14. Box plot of navigation times in the navigation task with two map modes.
Figure 15. Box plot of overall workload scores in the navigation task with two map modes.
Figure 16. Box plot of scores of each independent component of NASA-TLX in the navigation task with two map modes.
Figure 17. User preferences on positive questions of GUEQ in the exploration task.
Figure 18. User preferences based on negative questions of GUEQ in the exploration task.
Figure 19. User preferences based on selected questions of UEQ in the navigation task.
Figure 20. Semi-structured interview of user preferences.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
