Article

Spatial Knowledge Acquisition for Pedestrian Navigation: A Comparative Study between Smartphones and AR Glasses †

Aymen Lakehal, Sophie Lepreux, Christos Efstratiou, Christophe Kolski and Pavlos Nicolaou
1 LAMIH, UMR CNRS 8201, Université Polytechnique Hauts-de-France, 59313 Valenciennes, France
2 School of Computing, University of Kent, Canterbury CT2 7NT, Kent, UK
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI’2020, Virtual Conference, 5–9 October 2020.
Information 2023, 14(7), 353; https://doi.org/10.3390/info14070353
Submission received: 22 March 2023 / Revised: 9 June 2023 / Accepted: 17 June 2023 / Published: 21 June 2023
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)

Abstract

Smartphone map-based pedestrian navigation is known to have a negative effect on the long-term acquisition of spatial knowledge and memorisation of landmarks. Landmark-based navigation has been proposed as an approach that can overcome such limitations. In this work, we investigate how different interaction technologies, namely smartphones and augmented reality (AR) glasses, can affect the acquisition of spatial knowledge when used to support landmark-based pedestrian navigation. We conducted a study involving 20 participants, using smartphones or augmented reality glasses for pedestrian navigation. We studied the effects of these systems on landmark memorisation and spatial knowledge acquisition over a period of time. Our results show statistically significant differences in spatial knowledge acquisition between the two technologies, with the augmented reality glasses enabling better memorisation of landmarks and paths.

1. Introduction

Mobile interactive systems have become common tools in assisting pedestrian mobility. For example, more and more pedestrians are using smartphones equipped with navigation software, particularly to navigate unfamiliar places. Moreover, continuous research in the area of pedestrian navigation explores novel ways of conveying navigation guidance to users in order to improve usability [1,2,3,4,5,6,7]. Although such tools are considered very useful, several studies have shown that this form of passive assistance does not help pedestrians memorise journeys, nor does it help them become familiar with the environment [8,9,10]. Indeed, common smartphone navigation systems support wayfinding using directive instructions [11]. As shown by Gardony and colleagues, such systems do not provide opportunities to develop spatial skills, and they tend to reduce spatial awareness [12,13]. These effects can have a broader negative impact on the cognitive abilities of users, especially when such systems are used frequently. Studies involving taxi drivers [10] have demonstrated that enhanced navigation skills are positively correlated with increased activity and gray matter in the hippocampus. Konishi and Bohbot also showed that reduced navigational skills can contribute to cognitive decline during normal aging [9].
Considering the potentially negative impact of traditional smartphone navigation on cognitive abilities [14], our aim is to explore how pedestrian navigation systems, operated either through smartphones or augmented reality (AR) glasses, can be used to help users memorise a route and enable them to navigate without the need for navigation systems. This paper reports on the design, methodology, and results of a user study with 20 participants focusing on mobile pedestrian navigation systems and their effects on journey memorisation. In this study, we investigate and compare the effects of smartphone-based navigation vs. AR-glasses-based navigation.
As reported by Ruginski and colleagues [15], traditional map-based automated navigation assistance can have a negative effect on spatial transformation abilities. They observed a negative impact on perspective taking and on environmental learning [15]. Indeed, a study by Meneghetti and colleagues has shown that navigation learning involves only visuospatial abilities, not verbal ones [16]. This important result explains why automated navigation systems that do not engage the user’s visuospatial skills lead to a degradation in spatial knowledge acquisition [17].
For several years, researchers have been interested in how to integrate landmarks into the design of navigation systems. Landmark-based navigation systems are considered to help improve the user’s knowledge of the environment during wayfinding tasks [18]. Work by Montello [19] showed that landmarks are salient entities in the environment and that they can help improve the user’s survey knowledge during wayfinding tasks: the representation of knowledge about a specific location progresses from knowing landmarks, then paths, and finally to global survey knowledge. Our work is motivated by these findings. Our aim is to study the effects of landmark-based navigation systems, deployed on smartphones or AR glasses, in ultimately helping users learn to navigate the environment without the need for wayfinding technology.
Spatial knowledge acquisition in pedestrian navigation has been investigated by only a limited number of studies comparing different interaction modalities or devices.

1.1. Comparative Studies of Pedestrian Navigation Technologies

Several studies have attempted to compare alternative navigation systems for pedestrian navigation, but without exploring their impact on path memorisation. A study by Rehrl et al. [20] compared the use of digital maps, voice, and augmented reality interfaces implemented on smartphones for navigation assistance. They found that AR (through smartphones) was less effective for users’ navigation, demonstrating that the physical characteristics of smartphones can have a negative effect on the usability of AR interfaces for navigation assistance. Exploring image-based navigation, the work in [21] proposed an augmented photograph-based application on a smartphone and contrasted it with a traditional map-based mobile application. This work showed that users felt less need to constantly look at the image-based system, helped by an alert mechanism that notified them when a new photo message was triggered. This approach highlights the value of less distracting designs for pedestrian navigation, giving users more time to focus on the environment. The role of photography in navigation was also studied in [22]. The work by Wen et al. [23] explored the effects of five different navigation visualisation approaches on wayfinding: traditional north-up maps, forward-up maps, compass, augmented reality, and radar. Their study explored interface performance and effectiveness for pedestrian navigation. Unfortunately, the work did not investigate how the different visualisation approaches affected route memorisation for the users.

1.2. Spatial Knowledge Acquisition with Pedestrian Navigation Technologies

In [24], the authors compared four navigation system variations implemented on tablets that differed in their level of automation and attention demand. Automation in that context concerned pathfinding notifications, triggered either automatically by the application or manually on user request. The authors explained that “participants using systems with higher levels of automation seemed not to acquire enough spatial knowledge to reverse the route without navigation errors”. Amirian et al. [25] designed a tablet-based AR navigation system that relied on automated landmark recognition, with superimposition of navigational signage (orientation arrows and distance information). This system was compared to a turn-by-turn map-based navigation system and demonstrated significant improvements in acquiring local knowledge. These results further highlight the limited effects of map-based navigation with respect to spatial knowledge acquisition. More recently, a number of studies have explored the use of landmarks for pedestrian navigation, coupled with different interaction modalities, aiming to improve the acquisition of route knowledge and thus promote autonomy in orientation [26,27].
The work by Liu et al. [28] focused on the use of landmarks in an indoor environment. They explored the use of iconic holograms in a mixed-reality environment supported by HoloLens glasses. Augmented landmarks were employed to address the challenge that, in indoor environments, corridors and rooms may look alike, making it difficult for people to find physical landmarks to help them navigate. The results showed that virtual semantic landmarks assisted the acquisition of the corresponding knowledge, as these landmarks were the second most often labelled in the landmark locating task [28].
A number of studies have explored how the level of user interaction with the navigation system affects spatial knowledge acquisition. In [29], the findings suggested that higher user engagement with the navigation system correlated with better spatial knowledge acquisition. Huang and colleagues [30] explored the effects of smartphone map-based navigation through different interaction modes: (1) visual map, (2) voice, or (3) augmented reality (through the smartphone). They found no significant difference in spatial knowledge acquisition in the context of map-based navigation.
Kamilakis and colleagues [31] proposed a mobile application that addressed the practical requirements of public transport users (visualisation of nearby transit stops along with the timetable information of transit services passing by those stops). In their paper, they studied the utility and experience perceived by users interacting with mobile augmented reality (MAR) vs. map-based mobile application interfaces. MAR was shown to offer an enjoyable, intuitive interaction model, demonstrating its potential for directly linking digital content with the user’s physical environment and thereby enabling the experiential exploration of the surrounding elements. An improved sense of orientation relative to surrounding physical elements (e.g., unambiguous interpretation of the direction towards a point of interest) is another strong aspect of sensor-based MAR applications. The authors concluded that MAR interfaces still need to resolve major usability issues before they can be regarded as an indisputable substitute for traditional map-based interfaces. Most likely, emerging devices such as smart glasses (which involve fundamentally different methods for interacting with digital content) would affect the quality of the experience perceived by the users. In the study reported in [8], GPS (Global Positioning System) users traveled longer distances and made more stops during a walk than map users and direct-experience participants. Moreover, GPS users traveled more slowly, made larger direction errors, drew sketch maps with poorer topological accuracy, and rated wayfinding tasks as more difficult than direct-experience participants. In another study [32], the authors proposed a novel system for pedestrian navigation assistance that included global landmarks (for example, the Eiffel Tower) in navigation instructions. Compared to turn-by-turn navigation systems, their system led to better performance in navigating the environment and to a more accurate mental map.
Reflecting on the existing research on pedestrian navigation systems, we can observe that there has been extensive work on map-based systems and their negative effects on spatial knowledge acquisition. With respect to AR systems, existing work focuses primarily on AR through smartphones or tablets. Although these studies demonstrate that landmark-based navigation can improve the acquisition of spatial knowledge, no work has explored the effects of AR glasses as the interaction medium for navigation. To the best of our knowledge, no studies have addressed spatial knowledge acquisition using AR glasses and smartphones with a landmark-based navigation system. Furthermore, when considering approaches for assessing spatial knowledge, there is limited work on the effects on spatial transformation (i.e., the ability to mentally change the perspective of space), which is positively correlated with environmental learning during memory tests.
The next section presents the methodology we followed for the study, including the navigation path definition, the navigation system design, and the data capture. The results concerning the memory tests are presented in Section 3. The paper ends with a discussion of the obtained results and a conclusion.

2. Materials and Methods

2.1. Path Definition and Landmark Selection

The selection of the location for conducting the experiment can have a significant effect on the results. Our goal was to select a path in an area unfamiliar to the participants, without very distinctive landmarks, and presenting a level of challenge for people navigating it without technology. A residential area was identified following informal surveys of the authors’ social network. The location is within Canterbury, a small British town, away from popular walking routes. This area is described as “difficult to navigate” by local residents and consists of similar-looking residential buildings (Figure 1 and Figure 2).
To draw on local knowledge, 12 residents (6 female) were asked to fill out a questionnaire in order to identify the relevant landmarks in their local area and thus help define a landmark-based navigation path. The questions included: “A person new to this area is asking you for guidance; what landmarks would you use to guide them?” and “Please suggest a path between location A (departure) and B (destination) and highlight what landmarks you would use for guidance” (asked for multiple A–B combinations). This survey allowed us to use local knowledge to select a path and define the landmarks that would make sense for a pedestrian in that area. The categories of landmarks identified by local residents are illustrated in Figure 3. Combining this information with the navigation advice suggested by local residents, the path was defined within that area, along with the relevant landmarks for navigation (Figure 4).

2.2. Navigation App

Relying on the collected information, including the identified landmarks, a mobile navigation application was built to assist users in finding their way along the defined path. The application was designed to offer the same type of functionality and similar visual cues on both smartphones and AR glasses (see Figure 5 and Figure 6). The application displayed suitable navigation advice when a user approached a decision point (Figure 7), based on the current location of the user retrieved through GPS tracking.
The design of the application was driven by prior research in the areas of cognitive mapping, wayfinding, and route instruction design [33,34,35]. Navigation routes are often explained as a sequence of turns or changes in direction at decision points along a navigation path. The navigation application developed for this study delivered precise textual guidance with an orientation indicator, triggered when the user approached decision points. Landmarks were included in the guidance instructions to give the most relevant information. The aim was to reduce the cognitive load required for the user to engage with the application, allowing them to focus more on their surroundings and giving them the opportunity to build their own perception of the environment. Further design recommendations were considered while designing the AR glasses application. For instance, the display colour was selected to be green [36], and information was positioned at the bottom center of the in-glass screen, to support the highest comprehension [37] and to ensure a clear view of the walking path.
The navigation assistance was manually constructed by encoding navigation advice that referenced landmarks. The decision points along the navigation path were connected and encoded into the application from the information described above. These points were the locations where the user was expected to decide on any change in direction. The application offered relevant prompts, with references to local landmarks visible at each decision point. Figure 8 illustrates the sequence of instructions displayed on the AR glasses when the user approached a decision point. The process was the same on the smartphone.
The system showed navigation advice highlighting the selected landmark when the user approached a decision point (see Figure 8, number 3). After confirming, through GPS tracking, that the user had taken the correct decision, the system updated the targeted decision point to the next point.
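To make the triggering logic above concrete, the following minimal Python sketch (not the study’s actual code; the coordinates, wording, and the 25 m threshold are assumptions) shows how an instruction could be displayed when the GPS position comes within a given radius of the current decision point, and how the target advances once the user has passed it.

```python
import math

# Ordered decision points: (latitude, longitude, instruction).
# Coordinates and wording are illustrative placeholders, not the study data.
DECISION_POINTS = [
    (51.2970, 1.0690, "Turn Left after the Phone Booth in 20 meters"),
    (51.2985, 1.0712, "Turn Right when you see the Bus Stop in 30 meters"),
]
TRIGGER_RADIUS_M = 25  # assumed proximity threshold for showing an instruction


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def on_gps_update(lat, lon, state):
    """Return the instruction to display for the current GPS fix, or None.

    state = {"target": index of the current decision point, "near": bool},
    initialised to {"target": 0, "near": False} at the start of the walk.
    The confirmation rule is simplified: once the user has been near a decision
    point and then leaves its radius, the next point becomes the target.
    """
    idx = state["target"]
    if idx >= len(DECISION_POINTS):
        return None  # destination reached
    plat, plon, instruction = DECISION_POINTS[idx]
    if haversine_m(lat, lon, plat, plon) <= TRIGGER_RADIUS_M:
        state["near"] = True
        return instruction  # shown on the smartphone screen or the AR glasses
    if state["near"]:
        state["target"] += 1  # decision taken: move on to the next point
        state["near"] = False
    return None
```

A real deployment would additionally check that the user left the decision point in the expected direction before advancing the target.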

2.3. Study Design

The study included 20 participants (10 females) with an age range of 21–39 (median: 25.5). The experimental procedure is illustrated in Figure 9 (adapted from [38]). The participants were split into two groups: the SP (Smartphone) group was given the smartphone version of the navigation system to use during their first walk, and the ARG (Augmented Reality Glasses) group was given the AR glasses version. All participants were asked to take the Santa Barbara Sense of Direction Scale (SBSOD) test [39]. When splitting the groups, a similar distribution of SBSOD scores was ensured for both groups. First, each participant was allowed 3–4 min to become familiar with the technology. AR glasses are a novel technology, and this can affect users’ perception of and interaction with them. Although the familiarisation task may not have been sufficient to remove the novelty factor completely, it ensured that participants felt comfortable with the device.
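The paper reports the outcome of the group allocation (a similar SBSOD distribution in both groups) but not the exact procedure; one simple way to obtain such a balance, shown here purely as a hypothetical sketch, is to rank participants by SBSOD score and alternate assignments between the two groups.

```python
def split_balanced(sbsod_scores):
    """Alternate assignment over SBSOD-ranked participants (illustrative only).

    sbsod_scores: dict {participant_id: SBSOD score}.
    Returns two lists of participant ids with similar score distributions.
    """
    ranked = sorted(sbsod_scores.items(), key=lambda kv: kv[1], reverse=True)
    arg_group, sp_group = [], []
    for i, (participant, _score) in enumerate(ranked):
        (arg_group if i % 2 == 0 else sp_group).append(participant)
    return arg_group, sp_group
```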
Each participant navigated through the path following the guidance provided by the system. They were asked to fill out a usability questionnaire, the SUS (System Usability Scale) [40,41], before and after performing the first walk.
At the end, they answered a memory test by plotting the path and marking the memorised landmarks (see Figure 10). On the left-hand side of the map shown in Figure 10, we intentionally made visible a list of example landmarks, so that the notion of a landmark was clear and illustrated for each participant. This list of 10 examples did not include the nine landmarks used in this study. For example, Woody’s Bar and the Phone Booth on our path were not mentioned in this list; conversely, our path did not include, for example, a stream, river, or nursery.
A second walk was planned after one week, and it involved two memory tests, one before and one after the walk. Participants were not explicitly forbidden from visiting the area between the two walks. However, they were made aware of the importance of avoiding the area for the entire duration of the study and of how visiting it might affect the memorisation results.

2.4. Data Capture

The experimental protocol, as described earlier, consisted of asking each participant to walk through the same path twice: the first time, using navigation assistance (an application installed on a smartphone or AR glasses) and the second time, without using any device. Several datasets were collected during the study to address the defined research questions.
Each participant was asked to answer the SUS questionnaire before and after using the technology for the walk. This subjective assessment of the technology could help to explore correlations with the effects on spatial knowledge acquisition.
The Santa Barbara Sense of Direction Scale (SBSOD) survey offers an indication of each participant’s inherent sense of direction. The SBSOD questionnaire was used to evaluate the spatial abilities of the participants. Based on these results, participants were allocated to the two groups to ensure a balanced representation of abilities. A Welch’s t test showed no significant difference between the two groups in terms of spatial navigation and sense of orientation abilities.
A key dataset in this experiment involved a number of memory tests that participants were asked to perform after the first walk and a week later, both before and after they performed the walk without any technology. The aim of the memory test was to capture the participant’s recall of the landmarks and to test their abilities in spatial transformation as an indication of spatial knowledge acquisition [15]. Specifically, participants were asked to plot their walk on a map, as a way of mentally changing their perspective of the environment (transformation), and to indicate the location and type of landmarks along the path (see Figure 10).

3. Results

The primary objective of this study was to test the hypothesis: “After using landmark-based navigation technologies, the AR Glasses’ experience supports landmark memorisation more effectively than Smartphones”.

3.1. Santa Barbara Sense of Direction Scale (SBSOD)

The SBSOD is a 15-item Likert attitude scale [39]. For each item, there are seven possible responses, ranging from Strongly agree to Strongly disagree. The fifteen items are shown in Table 1. The SBSOD provides a score from 1 to 7 for each statement, and the highest total score that can be reached is 105 (7 × 15 statements). Following previous work [8,42], the positively stated items were reverse-coded so that a higher score means a better sense of direction.
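For completeness, the scoring just described can be sketched as follows (a minimal sketch; implementation details are assumed, with 1 taken as Strongly agree). The positively stated items in Table 1 (items 1, 3, 4, 5, 7, 9, and 14) are reverse-coded so that a higher total indicates a better sense of direction.

```python
# Positively stated items in Table 1 (e.g., "I am very good at giving directions").
POSITIVE_ITEMS = {1, 3, 4, 5, 7, 9, 14}


def sbsod_score(responses):
    """responses: dict {item_number: rating, 1 = Strongly agree ... 7 = Strongly disagree}.

    Returns a total in the range 15-105; higher means a better sense of direction.
    """
    total = 0
    for item, rating in responses.items():
        # Reverse-code positively stated items so agreement counts towards a high score.
        total += (8 - rating) if item in POSITIVE_ITEMS else rating
    return total
```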
Table 2 and Table 3 present the results of the SBSOD questionnaire. Using the Welch’s t test on these data, no significant difference was found between the two groups (t = 1.7, p > 0.0991); the SBSOD score for the ARG group was (M = 69, SD = 8.1976), and the SBSOD score for the SP group was (M = 61.5, SD = 9.9725).
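The group comparison reported above can be reproduced with a standard Welch’s t test (unequal variances); the snippet below uses SciPy and the SBSOD scores listed in Tables 2 and 3.

```python
from scipy import stats

arg = [81, 63, 67, 76, 83, 57, 72, 62, 65, 64]  # Table 2 (ARG group)
sp = [68, 64, 80, 65, 58, 57, 47, 44, 65, 67]   # Table 3 (SP group)

t, p = stats.ttest_ind(arg, sp, equal_var=False)  # Welch's t test
print(f"t = {t:.2f}, p = {p:.4f}")  # approx. t = 1.7, p = 0.099: no significant difference
```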

3.2. System Usability Scale (SUS)

This questionnaire helps to obtain a subjective estimation of the system’s usability. The SUS is a ten-item attitude Likert scale [40,41]. For each item, there are five possible responses that range from Strongly agree to Strongly disagree. The ten items are shown in Table 4. The SUS provides a score from 0 to 100.
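As a reminder of how the 0–100 score is obtained from the ten 1–5 ratings, the standard SUS scoring [40,41] can be sketched as follows (odd-numbered items are positively worded, even-numbered items negatively worded).

```python
def sus_score(responses):
    """responses: dict {item_number (1-10): rating, 1 = Strongly disagree ... 5 = Strongly agree}.

    Returns the SUS score in the range 0-100.
    """
    total = 0
    for item, rating in responses.items():
        # Odd items contribute (rating - 1); even items contribute (5 - rating).
        total += (rating - 1) if item % 2 == 1 else (5 - rating)
    return total * 2.5
```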
Table 5 illustrates the descriptive statistics of the SUS results for both groups, before and after the use of the navigation system. It shows that participants in both groups gave a better score after using the technology.
Our aim was also to detect whether the usability problems expressed via the SUS questionnaire (or noted through direct observation) could be disruptive, or even blocking, for some users with either of the two devices. Although the scores varied from one user to another, no such problems were detected.

3.3. Memory Tests

In this section, the memorisation results are analysed following three steps: (1) landmarks, (2) path segments, and (3) the global memorisation score including the landmarks and path segments.

3.3.1. Landmark Memorisation

Figure 10 illustrates the results of one memory test, with a plotted path and the landmarks located along this path. To evaluate the performance of the participants, a memorisation score for the landmarks (MS_L) was calculated from the identified landmarks; three values were possible for each landmark (0: not located at all, 1: incorrectly located, and 2: correctly located). For example, Table 6 shows the results of participant SP8 (using a smartphone), who recalled four landmarks out of nine in the first test, two of which were positioned correctly on the map. The complete results are shown in Table 7 and Table 8.
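A minimal sketch of the landmark memorisation score MS_L, computed as described above from the per-landmark codes used in Table 6:

```python
def ms_landmarks(codes):
    """codes: nine values, one per landmark, with 0 = not located,
    1 = incorrectly located, 2 = correctly located. Returns MS_L in 0-18."""
    assert len(codes) == 9 and all(c in (0, 1, 2) for c in codes)
    return sum(codes)


# Participant SP8, memory test 1 (Table 6): four landmarks recalled, two correctly located.
print(ms_landmarks([0, 2, 0, 1, 2, 1, 0, 0, 0]))  # -> 6, as in Table 8
```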
Using the Welch’s t test, no significant difference was found between the two groups except for the second test (t = 2.1897, p < 0.0442), where the located-landmark score for the ARG group (M = 8.6, SD = 2.4994) was higher than that for the SP group (M = 6.1, SD = 2.0712).
Figure 11 illustrates the results of the landmark memorisation analysis. It shows that the participants in the ARG group performed better in terms of memorisation compared to the SP group. Thus, the AR glasses supported landmark memorisation more effectively than smartphones.

3.3.2. Path Segment Memorisation

The next step was to consider the accuracy of the plotted path by calculating a memorisation score MS_S over the recalled segments (each segment lying between two decision points, i.e., between two landmarks). Table 9 illustrates the corresponding score for participant SP8 (1: correct segment, 0: incorrect segment). Table 10 and Table 11 show the results of both groups. Using the Welch’s t test on these data, no significant difference was found between the two groups in terms of memorising the path segments, as illustrated in Figure 12.

3.3.3. Landmark and Path Segment Memorisation Combined

Both results obtained from the previous analyses (landmarks and segments) were combined to calculate a global memorisation score, MS (see Table 12 and Table 13). The scaling method described below was used to bring both scores to the same range, between 0 and 5. The MS was then calculated by adding the scaled scores and is therefore in the range of 0 to 10 (lowest and highest global memorisation score, respectively), where:
  • $r_{min}$ and $r_{max}$ denote the minimum and the maximum of the range of the measurement, respectively;
  • $t_{min}$ and $t_{max}$ denote the minimum and the maximum of the range of the desired target scaling, respectively.

$$\mathit{scaled\_score} = \frac{\mathit{score} - r_{min}}{r_{max} - r_{min}} \times (t_{max} - t_{min}) + t_{min}$$
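The sketch below applies this scaling to both memorisation scores (MS_L in the range 0–18 and MS_S in the range 0–9, each mapped to 0–5) and sums them into the global score MS; it reproduces, for instance, the value reported for participant SP8 in the first memory test.

```python
def scale(score, r_min, r_max, t_min=0.0, t_max=5.0):
    """Min-max scaling of a score from [r_min, r_max] to [t_min, t_max]."""
    return (score - r_min) / (r_max - r_min) * (t_max - t_min) + t_min


def global_ms(ms_l, ms_s):
    """Global memorisation score MS in 0-10: scaled landmark score + scaled segment score."""
    return scale(ms_l, 0, 18) + scale(ms_s, 0, 9)


# Participant SP8, memory test 1: MS_L = 6 (Table 8) and MS_S = 6 (Table 9)
print(round(global_ms(6, 6), 2))  # -> 5.0, matching Table 13
```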
Using the Welch’s t test, a significant difference was found between the two groups for the second and third memory tests, with (M. Test 2: t = 2.4, p < 0.0279) and (M. Test 3: t = 2.6, p < 0.0195), respectively; the global MS for the ARG group (M. Test 2: M = 6.821, SD = 1.6039; M. Test 3: M = 7.16, SD = 1.3205) was larger than the global score for the SP group (M. Test 2: M = 5.141, SD = 1.1672; M. Test 3: M = 5.611, SD = 1.1088).
Figure 13 illustrates the results of the memorisation score (MS). It shows that the participants in the ARG group performed better in terms of path memorisation compared to the SP group. Thus, the AR glasses supported the path (landmarks + segments) memorisation more effectively than the smartphones.

4. Discussion

Traditional navigation aid systems are known to be less efficient when it comes to spatial knowledge acquisition. This paper aimed to explore the use of landmark-based assistance, as advocated by other researchers [18,43], and to determine its benefit for memorising paths and for survey knowledge. Its objective was to investigate AR glasses and smartphones for pedestrian navigation and their effects on spatial knowledge acquisition. The results indicated that people using AR glasses performed better in terms of landmark memorisation than those using smartphones. These results are promising.
Concerning the methodology, Liu and colleagues [28] investigated the effects of spatial knowledge acquisition derived from sketch maps and landmark locating tasks. Their study involved testing the ability of participants to successfully navigate to a specific destination and return back, without the support of technology, relying mainly on landmarks. Our methodology was influenced by this work, involving tasks of positioning landmarks and guidance information on a map.
With respect to memorisation, Lu et al. compared the memorisation of participants in three conditions [27]. During the experiment, within a VR environment, participants were expected to memorise landmarks; immediately afterwards, memory tests were performed using photos taken within the VR environment. In effect, their experiments assessed short-term memory acquisition immediately after the activity. In our study, we extended the assessment of memorisation by including an analysis of memorisation tests one week later, involving the physical repetition of the trip without navigation guidance followed by related memorisation tests. We believe that our approach delivers more robust evidence of the memorisation of landmarks over longer periods of time.
The work by Afrooz et al. focused on the comparison between active and passive navigation to assess their effects on spatial and visual memory during wayfinding [44]. In this work, the authors asked participants to navigate through a predefined route within a university campus, either by following the experimenter (passive navigation) or by leading them (active navigation). The authors did not employ a virtual environment to investigate wayfinding (as commonly done in other similar works), in order to ensure that any findings relate directly to the real physical environment [45]. A university campus, rather than a city, was selected to control confounding variables and avoid factors affecting the data collection process, such as crowds and noise. Knowledge acquisition was assessed through a range of tests involving scene recognition, recollection of the left–right orientation of scenes (mirror image discrimination), and route recollection (sketch maps). However, all the memory tests were performed immediately after the navigation tasks, allowing only the analysis of short-term memory acquisition. In our work, we explicitly attempted to assess both the short-term and longer-term (a week later) effects on landmark memorisation following the use of wayfinding technologies.
Reflecting on the original hypothesis for this study, “After using navigation technologies, the AR Glasses’ experience supports landmark memorisation more effectively than Smartphones”, the memory test analysis showed that AR glasses offered better support for landmark memorisation and also, after combining both memorisation scores, for survey knowledge acquisition. We acknowledge that the results of this study are based on a limited-scale experiment performed within a single location. In that respect, we consider that further studies can potentially allow for a deeper understanding and a more thorough comparison of the effects of the two modalities under different conditions.
Based on the outcomes of our study, we can hypothesise about the factors that may have led to these results. One clear advantage of connected glasses may be that the user is able to constantly look at their surroundings (and therefore also at landmarks) during their walk, without being severely disrupted by any digital information displayed via augmented reality. With a smartphone, on the other hand, the user’s gaze often shifts from the environment to the smartphone. One hypothesis, therefore, is that being able to perceive the environment along the way makes it easier to memorise the environment and its constituent elements (landmarks). In a complementary study involving users with intellectual disabilities, several participants expressed such a view. For instance, one participant explained: “I rather choose to use these AR Glasses than my smartphone with Google Maps. At least, these glasses are in front of my eyes and wouldn’t need to look down. If I look all the time at my smartphone, I won’t be able to see the street. They will serve when I go to unfamiliar places.” [46].
Comparison with Previous Work and Contribution. The main contribution of this work was to investigate the effects of AR glasses compared to smartphones on spatial knowledge acquisition. This research question had not been addressed before using a similar approach. Previous studies compared navigation devices without evaluating memorisation, and in those that did, the AR experience was limited to smartphone support.
Our results show that AR glasses can be a suitable medium for providing active landmark-based assistance for pedestrian navigation, offering a better experience that favours spatial knowledge acquisition. It is legitimate to expect that using similar systems will help people gain improved spatial awareness and become more autonomous in their mobility.
Limitations and Future Studies. The generalisability of the results is limited by the two groups having different sizes for the memory test. In addition, our study considered navigating two groups of people through one single path. Despite these limitations, the obtained results are valid to answer the research question and support our hypothesis: the statistical method used is insensitive to the difference in group sizes, and the navigation path was identified following a rigorous method based on local residents’ knowledge of, and expertise in, the area.

5. Conclusions

Given the negative effects of the passive assistance offered by most current navigation systems, the use of landmarks in future systems opens up new perspectives for pedestrians. With this aim, new types of technologies have to be designed and evaluated.
This paper has described our contribution through a comparative study, conducted at the University of Kent, of two navigation technologies (smartphones and AR glasses) that can facilitate path memorisation. The participants of each of the two groups were asked to walk through the same path twice, the first time using navigation assistance installed on one of the devices and the second time without any device. The SBSOD questionnaire was initially used to evaluate the spatial abilities of the participants, who were then equally allocated to the two groups. Each participant had to perform memory tests about the landmarks used and the path followed, after the first walk and a week later, before and after they performed the walk without any assistance. The results obtained showed that the participants’ path memorisation was better with AR glasses than with smartphones.
These promising results open up opportunities for further research in this area. Specifically, we plan to expand this work with larger-scale studies involving more participants, while diversifying the physical settings of our experiments (i.e., longer paths with more decision points). Furthermore, user demographics, including age and occupation (e.g., delivery person or taxi driver), can have a significant effect. Finally, the way information is presented to users requires further study to identify the most appropriate visualisation mode to maximise memorisation.
A direction of particular importance is the study of the effects of such systems on people with certain physical or cognitive disabilities, both in outdoor environments (considered, for instance, in [5,7,47]) and in indoor contexts (see, for instance, [7,48,49,50]). For instance, people with intellectual disabilities could take advantage of these landmark-centred systems to move around with more autonomy [51,52].

Author Contributions

Conceptualization, A.L., C.K., S.L. and C.E.; methodology, A.L., C.K., S.L. and C.E.; software, A.L.; validation, A.L., P.N. and C.E.; formal analysis, A.L.; data curation, A.L.; writing—original draft preparation, A.L., C.K., S.L. and C.E.; writing—review and editing, A.L., C.E., C.K., S.L. and P.N.; visualization, A.L.; supervision, C.K., C.E. and S.L.; funding acquisition, A.L., C.K., S.L. and C.E. All authors have read and agreed to the published version of the manuscript.

Funding

I-SITE Univ. Lille-Europe (mobility grants: EWAID and Go-Smart).

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the Département du Nord and the Agence Nationale pour la Rénovation Urbaine (ANRU) for their financial support within the framework of the ValMobile project as well as the PRIMOH research pole. They thank the I-SITE Univ. Lille-Europe (mobility grants: EWAID & Go-Smart). They also thank all the participants of the study. They are also grateful to the reviewers for their many constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. von Jan, V.; Bertel, S.; Hornecker, E. Information Push and Pull in Tactile Pedestrian Navigation Support. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI ’18), Barcelona, Spain, 3–6 September 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 55–62. [Google Scholar] [CrossRef]
  2. Morais, I.; Condado, M.; Quinn, R.; Patel, S.; Morreale, P.; Johnston, E.; Hyde, E. Design of a Contextual Digital Wayfinding Environment. In Proceedings of the Design, User Experience, and Usability. Application Domains; Marcus, A., Wang, W., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 221–232. [Google Scholar]
  3. Akpinar, E.; Yeşilada, Y.; Temizer, S. The Effect of Context on Small Screen and Wearable Device Users’ Performance—A Systematic Review. ACM Comput. Surv. 2020, 53, 1–44. [Google Scholar] [CrossRef]
  4. da Fonseca, F.P.; Conticelli, E.; Papageorgiou, G.; Ribeiro, P.; Jabbari, M.; Tondelli, S.; Ramos, R. Use and Perceptions of Pedestrian Navigation Apps: Findings from Bologna and Porto. ISPRS Int. J. Geo Inf. 2021, 10, 446. [Google Scholar] [CrossRef]
  5. Tachiquin, R.; Velázquez, R.; Del-Valle-Soto, C.; Gutiérrez, C.A.; Carrasco, M.; De Fazio, R.; Trujillo-León, A.; Visconti, P.; Vidal-Verdú, F. Wearable Urban Mobility Assistive Device for Visually Impaired Pedestrians Using a Smartphone and a Tactile-Foot Interface. Sensors 2021, 21, 5274. [Google Scholar] [CrossRef] [PubMed]
  6. Ruginski, I.; Giudice, N.; Creem-Regehr, S.; Ishikawa, T. Designing mobile spatial navigation systems from the user’s perspective: An interdisciplinary review. Spat. Cogn. Comput. 2022, 22, 1–29. [Google Scholar] [CrossRef]
  7. Prandi, C.; Barricelli, B.R.; Mirri, S.; Fogli, D. Accessible wayfinding and navigation: A systematic mapping study. Univers. Access Inf. Soc. 2023, 22, 185–212. [Google Scholar] [CrossRef]
  8. Ishikawa, T.; Fujiwara, H.; Imai, O.; Okabe, A. Wayfinding with a GPS-based mobile navigation system: A comparison with maps and direct experience. J. Environ. Psychol. 2008, 28, 74–82. [Google Scholar] [CrossRef]
  9. Konishi, K.; Bohbot, V.D. Spatial navigational strategies correlate with gray matter in the hippocampus of healthy older adults tested in a virtual maze. Front. Aging Neurosci. 2013, 5, 1–8. [Google Scholar] [CrossRef] [Green Version]
  10. Maguire, E.A.; Gadian, D.G.; Johnsrude, I.S.; Good, C.D.; Ashburner, J.; Frackowiak, R.S.; Frith, C.D. Navigation-related structural change in the hippocampi of taxi drivers. Proc. Natl. Acad. Sci. USA 2000, 97, 4398–4403. [Google Scholar] [CrossRef] [Green Version]
  11. Wiener, J.M.; Buchner, S.J.; Holscher, C. Taxonomy of human wayfinding tasks: A knowledge-based approach. Spat. Cogn. Comput. 2009, 9, 152–165. [Google Scholar] [CrossRef]
  12. Gardony, A.L.; Brunyé, T.T.; Mahoney, C.R.; Taylor, H.A. How Navigational Aids Impair Spatial Memory: Evidence for Divided Attention. Spat. Cogn. Comput. 2013, 13, 319–350. [Google Scholar] [CrossRef]
  13. Dahmani, L.; Bohbot, V.D. Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Sci. Rep. 2020, 10, 6310. [Google Scholar] [CrossRef] [Green Version]
  14. Lakehal, A.; Lepreux, S.; Letalle, L.; Kolski, C. From wayfinding model to future context-based adaptation of HCI in Urban Mobility for pedestrians with active navigation needs. Int. J. Hum.-Comput. Interact. 2021, 37, 378–389. [Google Scholar] [CrossRef]
  15. Ruginski, I.T.; Creem-regehr, S.H.; Stefanucci, J.K.; Cashdan, E. GPS use negatively affects environmental learning through spatial transformation abilities. J. Environ. Psychol. 2019, 64, 12–20. [Google Scholar] [CrossRef] [Green Version]
  16. Meneghetti, C.; Zancada-menéndez, C.; Sampedro-piquero, P.; Lopez, L.; Martinelli, M.; Ronconi, L.; Rossi, B. Mental representations derived from navigation: The role of visuo-spatial abilities and working memory. Learn. Individ. Diff. 2016, 49, 314–322. [Google Scholar] [CrossRef]
  17. Parush, A.; Ahuvia, S.; Erev, I. Degradation in Spatial Knowledge Acquisition When Using Automatic Navigation Systems. In Spatial Information Theory. COSIT 2007. Lecture Notes in Computer Science; Winter, S., Duckham, M., Kulik, L., Kuipers, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4736, pp. 238–254. [Google Scholar]
  18. Siegel, A.W.; White, S.H. The development of spatial representations of large-scale environments. Adv. Child Dev. Behav. 1975, 10, 9–55. [Google Scholar] [CrossRef]
  19. Montello, D.R. Navigation. In The Cambridge Handbook of Visuospatial Thinking; Shah, P., Miyake, A., Eds.; Cambridge Handbooks in Psychology; Cambridge University Press: Cambridge, UK, 2005; pp. 257–294. [Google Scholar]
  20. Rehrl, K.; Hausler, E.; Leitinger, S.; Bell, D. Pedestrian navigation with augmented reality, voice and digital map: Final results from an in situ field study assessing performance and user experience. J. Locat. Based Serv. 2014, 8, 75–96. [Google Scholar] [CrossRef]
  21. Walther-Franks, B.; Malaka, R. Evaluation of an augmented photograph-based pedestrian navigation system. In Proceedings of the International Symposium on Smart Graphics, Rennes, France, 27–29 August 2008; Springer: Cham, Switzerland, 2008; pp. 94–105. [Google Scholar] [CrossRef]
  22. Guedira, Y.; Kolski, C.; Lepreux, S. Pedestrian Navigation through Pictograms and Landmark Photos on Smart Glasses: A Pilot Study. In Proceedings of the 19th International Conference on Human-Computer Interaction, RoCHI 2022, Craiova, Romania, 6–7 October 2022; Popescu, P., Kolski, C., Eds.; Matrix Rom: Bucharest, Romania, 2022; pp. 13–20. [Google Scholar]
  23. Wen, J.; Helton, W.S.; Billinghurst, M. A study of user perception, interface performance, and actual usage of mobile pedestrian navigation aides. Proc. Hum. Factors Ergon. Soc. 2013, 57, 1958–1962. [Google Scholar] [CrossRef]
  24. Brugger, A.; Richter, K.F.; Fabrikant, S.I. How does navigation system behavior influence human behavior? Cogn. Res. Princ. Implic. 2019, 4, 5. [Google Scholar] [CrossRef] [Green Version]
  25. Amirian, P.; Basiri, A. Landmark-based pedestrian navigation using augmented reality and machine learning. In Progress in Cartography; Springer: Cham, Switzerland, 2016; pp. 451–465. [Google Scholar] [CrossRef]
  26. Zhang, Y.; Nakajima, T. Exploring 3D Landmark-Based Map Interface in AR Navigation System for City Exploration. In Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia (MUM’21), Leuven, Belgium, 5–8 December 2021; Association for Computing Machinery: New York, NY, USA, 2022; pp. 220–222. [Google Scholar] [CrossRef]
  27. Lu, J.; Han, Y.; Xin, Y.; Yue, K.; Liu, Y. Possibilities for Designing Enhancing Spatial Knowledge Acquirements Navigator: A User Study on the Role of Different Contributors in Impairing Human Spatial Memory During Navigation. In Proceedings of the Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA ’21), Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  28. Liu, B.; Ding, L.; Meng, L. Spatial knowledge acquisition with virtual semantic landmarks in mixed reality-based indoor navigation. Cartogr. Geogr. Inf. Sci. 2021, 48, 305–319. [Google Scholar] [CrossRef]
  29. Brugger, A.; Richter, K.F.; Fabrikant, S.I. Distributing attention between environment and navigation system to increase spatial knowledge acquisition during assisted wayfinding. In Proceedings of the International Conference on Spatial Information Theory, Regensburg, Germany, 9–13 September 2018; Number 198809. pp. 19–22. [Google Scholar] [CrossRef]
  30. Huang, H.; Schmidt, M.; Gartner, G. Spatial Knowledge Acquisition with Mobile Maps, Augmented Reality and Voice in the Context of GPS-based Pedestrian Navigation: Results from a Field Test. Cartogr. Geogr. Inf. Sci. 2012, 39, 107–116. [Google Scholar] [CrossRef] [Green Version]
  31. Kamilakis, M.; Gavalas, D.; Zaroliagis, C. Mobile User Experience in Augmented Reality vs. Maps Interfaces: A Case Study in Public Transportation. In Proceedings of the International Conference on Augmented Reality, Virtual Reality and Computer Graphics, Lecce, Italy, 15–18 June 2016; Springer: Cham, Switzerland, 2016; pp. 388–396. [Google Scholar] [CrossRef]
  32. Wenig, N.; Wenig, D.; Ernst, S.; Malaka, R.; Hecht, B.; Schoning, J. Pharos: Improving navigation instructions on smartwatches by including global landmarks. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI—ACM 2017, Vienna, Austria, 4–7 September 2017; pp. 1–13. [Google Scholar] [CrossRef] [Green Version]
  33. Tversky, B. Distortions in cognitive maps. Geoforum 1992, 23, 131–138. [Google Scholar] [CrossRef]
  34. Wood, D. How maps work. Cartogr. Int. J. Geogr. Inf. Geovisualization 1992, 29, 66–74. [Google Scholar] [CrossRef]
  35. Coors, V.; Elting, C.; Kray, C.; Laakso, K. Presenting Route Instructions on Mobile Devices: From Textual Directions to 3D Visualization. Explor. Geovis. 2005, 1, 529–550. [Google Scholar] [CrossRef]
  36. Gabbard, J.L.; Swan, J.E.; Hix, D. The effects of text drawing styles, background textures, and natural lighting on text legibility in outdoor augmented reality. Presence Teleoperators Virtual Environ. 2006, 15, 16–32. [Google Scholar] [CrossRef]
  37. Rzayev, R.; Woźniak, P.W.; Dingler, T.; Henze, N. Reading on smart glasses: The effect of text position, presentation type and walking. In Proceedings of the Conference on Human Factors in Computing Systems, CHI 2018, Montreal, QC, Canada, 21–26 April 2018; ACM: New York, NY, USA, 2018; pp. 1–9. [Google Scholar]
  38. Lakehal, A.; Lepreux, S.; Efstratiou, C.; Kolski, C.; Nicolaou, P. Investigating Smartphones and AR Glasses for Pedestrian Navigation and their Effects in Spatial Knowledge Acquisition. In Proceedings of the 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’20), Oldenburg, Germany, 5–8 October 2020; ACM: New York, NY, USA, 2021; pp. 1–7. [Google Scholar]
  39. Hegarty, M.; Richardson, A.E.; Montello, D.R.; Lovelace, K.; Subbiah, I. Development of a self-report measure of environmental spatial ability. J. Intell. 2002, 30, 425–447. [Google Scholar] [CrossRef]
  40. Bangor, A.; Kortum, P.; Miller, J. Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. J. Usability Stud. 2009, 4, 114–123. [Google Scholar]
  41. Brooke, J.; Jordan, P.W.; Thomas, B.; Weerdmeester, B.A.; McClelland, I.L. Usability Evaluation in Industry; Taylor and Francis: New York, NY, USA, 1996; pp. 189–194. [Google Scholar]
  42. Rehrl, K.; Häusler, E.; Steinmann, R.; Leitinger, S.; Bell, D.; Weber, M. Pedestrian navigation with augmented reality, voice and digital map: Results from a field study assessing performance and user experience. In Advances in Location-Based Services. Lecture Notes in Geoinformation and Cartography; Springer: Cham, Switzerland, 2012; pp. 3–20. [Google Scholar] [CrossRef]
  43. Millonig, A.; Schechtner, K. Developing landmark-based pedestrian-navigation systems. IEEE Trans. Intell. Transp. Syst. 2007, 8, 43–49. [Google Scholar] [CrossRef]
  44. Afrooz, A.; White, D.; Parolin, B. Effects of active and passive exploration of the built environment on memory during wayfinding. Appl. Geogr. 2018, 101, 68–74. [Google Scholar] [CrossRef]
  45. Emo, B.; Silva, J.P.; Javadi, A.H.; Howard, L.; Spiers, H.J. How spatial properties of a city street network influence brain activity during navigation. In Proceedings of the Spatial Cognition 2014, Bremen, Germany, 15–19 September 2014. [Google Scholar]
  46. Lakehal, A. User-Centred Design and Evaluation of Interactive System Assisting the Mobility of People with Intellectual Disability. Ph.D. Thesis, Université Polytechnique Hauts-de-France, Valenciennes, France, 2022. [Google Scholar]
  47. Velázquez, R.; Pissaloux, E.; Rodrigo, P.; Carrasco, M.; Giannoccaro, N.I.; Lay-Ekuakille, A. An outdoor navigation system for blind pedestrians using GPS and tactile-foot feedback. Appl. Sci. 2018, 8, 578. [Google Scholar] [CrossRef] [Green Version]
  48. Al-Khalifa, S.; Al-Razgan, M.S. Ebsar: Indoor guidance for the visually impaired. Comput. Electr. Eng. 2016, 54, 26–39. [Google Scholar] [CrossRef]
  49. Szucs, V.; Guzsvinecz, T.; Magyar, A. Improved algorithms for movement pattern recognition and classification in physical rehabilitation. In Proceedings of the 10th IEEE International Conference on Cognitive Infocommunications, CogInfoCom 2019, Naples, Italy, 23–25 October 2019; pp. 417–424. [Google Scholar] [CrossRef]
  50. Elgendy, M.; Sik-Lányi, C.; Kelemen, A. A Novel Marker Detection System for People with Visual Impairment Using the Improved Tiny-YOLOv3 Model. Comput. Methods Programs Biomed. 2021, 205, 106112. [Google Scholar] [CrossRef]
  51. Courbois, Y.; Blades, M.; Farran, E.K.; Sockeel, P. Do individuals with intellectual disability select appropriate objects as landmarks when learning a new route? J. Intellect. Disabil. Res. 2013, 57, 80–89. [Google Scholar] [CrossRef]
  52. Letalle, L.; Lakehal, A.; Mengue-Topio, H.; Saint-Mars, J.; Kolski, C.; Lepreux, S.; Anceaux, F. Ontology for Mobility of People with Intellectual Disability: Building a Basis of Definitions for the Development of Navigation Aid Systems. In Proceedings of the HCI in Mobility, Transport, and Automotive Systems. Automated Driving and In-Vehicle Experience Design. HCII; Springer: Cham, Switzerland, 2020; Volume 12212 LNCS, pp. 322–334. [Google Scholar] [CrossRef]
Figure 1. Footpath between the residential buildings of the study area.
Figure 2. Residential buildings.
Figure 3. Highlighted landmarks in the targeted residential area.
Figure 4. Selected path with different landmarks and guidance instructions.
Figure 5. Mobile phone application displaying the instruction: “Turn Left after the Phone Booth in 20 meters”.
Figure 6. AR glasses application displaying the instruction: “Turn Right when you see the Bus Stop in 30 meters”.
Figure 7. Decision point illustrating a landmark (Bus Stop).
Figure 8. Sequence of instructions displayed when the user is approaching a decision point.
Figure 9. Activity diagram of the protocol.
Figure 10. Memory test 1 for participant SP8.
Figure 11. Memorisation score analysis (landmarks).
Figure 12. Memorisation score analysis (path segments).
Figure 13. Memorisation score analysis (landmarks + path segments).
Table 1. The SBSOD items [39].

No  Item
1   I am very good at giving directions.
2   I have a poor memory for where I left things.
3   I am very good at judging distances.
4   My “sense of direction” is very good.
5   I tend to think of my environment in terms of cardinal directions (N, S, E, W).
6   I very easily get lost in a new city.
7   I enjoy reading maps.
8   I have trouble understanding directions.
9   I am very good at reading maps.
10  I don’t remember routes very well while riding as a passenger in a car.
11  I don’t enjoy giving directions.
12  It’s not important to me to know where I am.
13  I usually let someone else do the navigational planning for long trips.
14  I can usually remember a new route after I have traveled it only once.
15  I don’t have a very good “mental map” of my environment.
Table 2. The SBSOD score for the ARG group.

Participant  ARG1  ARG2  ARG3  ARG4  ARG5  ARG6  ARG7  ARG8  ARG9  ARG10
SBSOD        81    63    67    76    83    57    72    62    65    64

Due to technical issues, the ARG8 participant did not manage to complete the memorisation task, and in the current circumstances, it was impossible to add a new participant. With the statistical Welch’s t test, it was possible to proceed with the analysis despite having two different groups in terms of size.
Table 3. The SBSOD score for the SP group.

Participant  SP1  SP2  SP3  SP4  SP5  SP6  SP7  SP8  SP9  SP10
SBSOD        68   64   80   65   58   57   47   44   65   67
Table 4. System Usability Scale (SUS) items [41].

No  Item
1   I think that I would like to use this system frequently.
2   I found the system unnecessarily complex.
3   I thought the system was easy to use.
4   I think that I would need the support of a technical person to be able to use this system.
5   I found the various functions in this system were well integrated.
6   I thought there was too much inconsistency in this system.
7   I would imagine that most people would learn to use this system very quickly.
8   I found the system very cumbersome to use.
9   I felt very confident using the system.
10  I needed to learn a lot of things before I could get going with this system.
Table 5. The SUS score summary for the ARG and SP groups.

Group  ARG (Before)  ARG (After)  SP (Before)  SP (After)
M      66.25         73.50        68.25        75.25
Med    62.50         76.25        67.50        77.50
Min    32.50         37.50        52.50        50.00
Max    92.50         92.50        92.50        92.50
SD     21.87         16.72        9.93         11.81
Table 6. Evaluation of the memorisation of landmarks (Participant SP8).

Landmark   Footpath  Zebra Crossing  Phone Booth  Bus Stop  Woody’s Bar  Zebra Crossing  Bishopden Court  Main Road  Hemsdell Court
Notation   L1        L2              L3           L4        L5           L6              L7               L8         L9
M. Test 1  0         2               0            1         2            1               0                0          0
M. Test 2  0         2               0            1         2            0               0                0          0
M. Test 3  0         2               0            1         1            0               0                0          0

Legend: 0: not located at all, 1: incorrectly located, 2: correctly located.
Table 7. Memory tests: landmark results for the ARG group (the score is in the range of 0 to 18).

Participant    ARG1  ARG2  ARG3  ARG4  ARG5  ARG6  ARG7  ARG8  ARG9  ARG10
ARG M. Test 1  8     12    9     10    11    12    5     NA    12    7
ARG M. Test 2  8     13    8     9     11    8     4     NA    10    6
ARG M. Test 3  9     13    8     8     13    2     7     NA    12    10

Due to technical issues, the ARG8 participant did not manage to complete the memorisation task, and in the current circumstances, it was impossible to add a new participant. With the statistical Welch’s t test, it was possible to proceed with the analysis despite having two different groups in terms of size.
Table 8. Memory tests: landmark results for the SP group (the score is in the range of 0 to 18).

Participant   SP1  SP2  SP3  SP4  SP5  SP6  SP7  SP8  SP9  SP10
SP M. Test 1  10   6    7    4    12   8    7    6    2    11
SP M. Test 2  6    7    7    7    6    8    9    5    1    5
SP M. Test 3  6    7    2    7    6    10   11   4    3    8
Table 9. Evaluation of segment memorisation (Participant SP8).

Segment    Departure - L1  L1 - L2  L2 - L3  L3 - L4  L4 - L5  L5 - L6  L6 - L7  L7 - L8  L8 - L9  Score MS_S
M. Test 1  1               1        1        1        1        1        0        0        0        6
M. Test 2  1               1        1        1        1        1        1        0        0        7
M. Test 3  1               1        1        0        1        1        1        0        0        6
Table 10. Memory tests: segment results for the ARG group (the score is in the range of 0 to 9).

Participant    ARG1  ARG2  ARG3  ARG4  ARG5  ARG6  ARG7  ARG8  ARG9  ARG10
ARG M. Test 1  8     9     8     7     9     9     3     NA    9     7
ARG M. Test 2  9     9     9     7     9     9     3     NA    9     8
ARG M. Test 3  9     9     9     7     9     9     5     NA    9     9
Table 11. Memory tests: segment results for the SP group (the score is in the range of 0 to 9).

Participant   SP1  SP2  SP3  SP4  SP5  SP6  SP7  SP8  SP9  SP10
SP M. Test 1  7    6    4    4    7    8    5    6    7    8
SP M. Test 2  7    6    4    4    7    8    7    7    4    8
SP M. Test 3  7    6    6    5    7    8    7    6    8    9
Table 12. Memory tests: global score results for the ARG group (the score is on a scale out of 10).

Participant    ARG1  ARG2  ARG3  ARG4  ARG5  ARG6  ARG7  ARG8  ARG9  ARG10
ARG M. Test 1  6.67  8.33  6.94  6.67  8.06  8.33  3.06  NA    8.33  5.83
ARG M. Test 2  7.22  8.61  7.22  6.39  8.06  7.22  2.78  NA    7.78  6.11
ARG M. Test 3  7.50  8.61  7.22  6.11  8.61  5.56  4.72  NA    8.33  7.78
Table 13. Memory tests: global score results for the SP group (the score is on a scale out of 10).

Participant   SP1   SP2   SP3   SP4   SP5   SP6   SP7   SP8   SP9   SP10
SP M. Test 1  6.67  5.00  4.17  3.33  7.22  6.67  4.72  5.00  4.44  7.50
SP M. Test 2  5.56  5.28  4.17  4.17  5.56  6.67  6.39  5.28  2.50  5.83
SP M. Test 3  5.56  5.28  3.89  4.72  5.56  7.22  6.94  4.44  5.28  7.22
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
