Article

Using Sensory Wearable Devices to Navigate the City: Effectiveness and User Experience in Older Pedestrians

by Angélique Montuwy 1,2,*, Béatrice Cahour 2 and Aurélie Dommes 1

1 COSYS-LEPSIS, IFSTTAR, 25 allée des marronniers, 78000 Versailles, France
2 I3 Lab, Telecom ParisTech, 46 rue Barrault, 75013 Paris, France
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2019, 3(1), 17; https://doi.org/10.3390/mti3010017
Submission received: 3 December 2018 / Revised: 8 March 2019 / Accepted: 8 March 2019 / Published: 12 March 2019
(This article belongs to the Special Issue Interactive Assistive Technology)

Abstract:
Preserving older pedestrians’ navigation skills in urban environments is a challenge for maintaining their quality of life. However, the maps usually used by older pedestrians may be unsuited to their specific needs, and existing digital aids do not take into account older people’s perceptual and cognitive declines or their user experience. This study presents a rich description of the navigation experience of older pedestrians using either a visual (augmented reality glasses), an auditory (bone conduction headphones), or a visual and haptic (smartwatch) wearable device adapted to age-related declines. These wearable devices are compared to the navigation aid older people usually use when navigating the city (their own digital or paper map). The study, with 18 participants, measured navigation performance and captured detailed descriptions of the users’ experience through interviews. We highlight three main phenomena that shape the quality of the user experience with the four aids: (1) shifts in attention over time, (2) understanding of the situation over time, and (3) the emergence of affective and aesthetic feelings over time. These findings add a new understanding of the specificities of navigation experience among older people and are discussed in terms of design recommendations for navigation devices.

1. Introduction

When advancing in age, older people may face difficulties walking in the city due to physical, but also perceptual and cognitive, declines [1]. Finding one’s way, searching for relevant information in the environment, and crossing streets while being aware of incoming cars are some examples of problems older pedestrians may face [2]. These difficulties can lead older people to reduce the frequency of their urban travels and can have consequences on their wellbeing and quality of life [3], as pedestrian mobility is one of the main means of daily transportation for older adults in French cities [4]. Spatial navigation, more particularly, may be a source of trouble for older pedestrians. It consists of finding and traveling a route from a point A to a point B [5,6] and involves complex cognitive and perceptual processes such as self-orientation [7], attention to the environment and selection of landmarks [8], use and interpretation of allocentric representations [9], construction of cognitive maps [10], and maintenance of the goal in mind to reach the destination [11]. However, all these processes tend to decline with aging, leading to greater difficulties in navigation among older people.
Paper and digital maps are the media most commonly used to help people with navigation tasks [12,13]. However, maps are not very suitable for older pedestrians, even if they are used to them. Indeed, they provide people with allocentric representations of the environment that are difficult to interpret for older people, who prefer egocentric (first-person view) types of information to help them find their way [9,14]. Maps are also difficult to orient correctly [15] and require a lot of attention to read [16,17], leading to potentially dangerous situations for older people, whose attention resources are limited [18,19]. To guide them in the city, one solution could be to provide older pedestrians with turn-by-turn egocentric instructions that are simple to understand (e.g., turn right at the next intersection [14]). To do so, wearable devices could be an interesting option, as they could help reduce attention sharing by providing relevant information directly on the pedestrian’s body. Moreover, wearable devices allow several sensory modalities to be used to help people perceive navigation instructions, with potential benefits for older people whose sensory acuity may decline, whereas maps rely only on the visual channel.
In this paper, we investigate the performance and user experience of older pedestrians with four navigation aids used to find their way in the city: (1) the paper or digital map, i.e., the aid people usually use in such circumstances, and three sensory wearable devices: (2) augmented-reality glasses (visual instructions); (3) bone conduction headphones (auditory instructions); and (4) a smartwatch (warning haptic vibrations plus visual instructions). The main issues raised in this study are as follows: Would these devices be more efficient than a traditional paper map? Would they be well accepted by older pedestrians? Are some of these devices more efficient and better appreciated than others?

2. Related Work

2.1. Sensory Navigation Instructions and Wearable Devices

Several studies have explored the use of digital devices aiming to provide sensory navigation instructions to pedestrians. These devices use the visual, the auditory, and, more recently, the haptic sensory modalities to convey information to the user, and rely on smartphones, PDAs, or wearable devices, for example headphones [20], a smartwatch [21], shoes [22], or even a belt [23]. However, only a few studies deal with older pedestrians using such sensory navigation instructions and devices.
Among the sensory modalities, vision seems the most natural one for communicating environmental information [16]. Various types of visual information have been investigated in the literature: text [24], 2D arrows [12], 3D models [24], and photographs [25]. However, such visual instructions might be hard to interpret due to their complexity [24] and the effort required to match the information with the environment [26]. These instructions also often require a lot of attention to perceive, which could be a serious drawback for use during outdoor navigation in an urban setting. Augmented photographs [25,27] and augmented reality (AR) [28] could therefore be an interesting alternative, as this type of information seems easy to understand in virtual environments [29], even among older people [30], and mostly limits attention sharing [28] by inlaying relevant instructions directly in the environment. One limitation, however, could be the use of hand-held devices such as smartphones to display information in augmented reality [31], which requires looking at them almost constantly. Augmented-reality glasses might be a relevant option to convey visual instructions, such as arrows inlaid directly in the pedestrian’s field of view. To our knowledge, the device tested in the present study has not yet been investigated among older pedestrians.
Auditory vocal instructions are a rather common solution to guide pedestrians too, but they might be difficult to perceive for older people in loud outdoor environments such as cities [32] due to changes in the range of frequencies they can hear [33]. Also, vocal instructions can be hard to understand when they are complex [34] and often prevent people from having a conversation while walking [35]. Spatialized sounds could therefore be an efficient alternative that would require less attention sharing than vocal instructions and would allow speaking at the same time [36]. Spatialized sounds also seem easy to interpret [37], even for older people [30], relying on humans’ natural ability to locate sounds in space [38]. However, using in-ear or on-ear headphones to convey spatialized sounds might be problematic, as this prevents hearing sounds from the environment, with potential negative impacts on safety [22]. It also prevents having a conversation [20], and these devices are usually not well accepted by older people [39]. More recently, bone conduction headphones have become available. These devices make it possible to hear the sounds from the surroundings, as they do not sit inside the ears; instead, vibrations are transmitted through the jawbone to produce sounds. They could therefore be a good opportunity to safely convey spatialized sounds to guide older people in cities. These devices, however, have not yet been tested among older pedestrians.
Finally, vibrating navigation aids have been tested more recently, using a wide variety of devices to provide information to the user [22,23,40,41]. This sensory modality could help reduce attention sharing, as the haptic modality is less used than the auditory and visual modalities for pedestrian navigation. However, translating the instructions required to navigate the city into vibrations is still challenging [30,40], especially among older pedestrians, whose skin sensitivity may decline with aging [42]. While using standalone vibrations seems difficult for navigation purposes in comparison to visual or auditory instructions [43], using vibrations as a warning while providing visual or auditory instructions at the same time could be an interesting option to limit attention sharing [30]. Thus, a multisensory device of this kind (e.g., a smartwatch) will be tested among older pedestrians in the present study.

2.2. User Experience with Navigation Aids

Older people may feel uncomfortable with new technologies [44] such as sensory navigation aids, even though they could benefit from such devices in their daily life. Indeed, with these types of devices, worries related to health, autonomy, or privacy may arise among older people [45], thereby limiting the acceptability of these devices on the market. That is why considering older pedestrians’ user experience is essential for ensuring the acceptance of such sensory wearable devices in the future.
User experience (UX) is a concept expressing the pragmatic and affective processes a specific user goes through when using a service or a product in a specific context. This concept allows for the evaluation of the product and the improvement of its design. This human-centered approach aims to investigate the pragmatic qualities of a device, its hedonic qualities, and the emotions elicited by its use in context [46,47,48]. Several studies dedicated to navigation aids and to older people highlight dimensions that particularly shape the user experience. Among the pragmatic qualities important for navigation aids are their usability, the accuracy of the GPS, and their ease of learning [49,50], especially among older people [51]. Nonpragmatic qualities, such as aesthetics, trustworthiness, discretion, and autonomy, are relevant too, especially among older people [52,53]. This literature, however, is based on devices such as smartphones or GPS units. It does not take into account the specificities of using wearable devices for navigation purposes, which could raise other types of concerns that should be investigated.
Some studies also exist on the acceptability of wearable devices such as AR glasses [54], bone conduction devices [55], and smartwatches [56], but they were performed outside the context of pedestrian navigation. Here again, complementary investigations in a navigation context will be useful. Moreover, little is known about older people using these types of devices in the city; those aspects will therefore be at the core of this study.

3. Materials and Methods

3.1. Implementation of the Navigation Aids

3.1.1. Wearable Devices Used

In addition to the usual navigation aids people use to find their way in the city (paper maps or navigation applications), three mid-range sensory wearable devices were proposed to the participants. (The choice of devices was based on their quality/price ratio at the time the study was conducted, according to professional websites comparing these types of devices; good-quality devices were chosen on this basis.)
  • The Optinvent ORA-2 AR glasses, aiming to provide participants with arrows inlaid in their field of view (Figure 1); they are of very good quality in the context of current technologies (cost: €800).
  • The Sainsonic BM-7 bone conduction headphones, aiming to provide participants with spatialized sounds (Figure 2).
  • The Huawei Watch Active smartwatch, aiming to provide participants with a warning vibration and an arrow displayed on the screen (Figure 3).
The screen of the ORA-2 glasses, on the right eye, displays content as if it were on a 61” screen placed 4 meters in front of the user, with a resolution of about 42 pixels per degree. The screen can be moved 20° horizontally to adapt its position to the user. The glasses do not include glass panes, so users can wear their own glasses underneath the AR glasses. Even though the brightness of the screen was over 3000 cd/m², sunglasses were added in this study to improve the visibility of the navigation instructions outdoors. These glasses weigh 90 g, which is rather light in comparison with other AR devices. An Android 4.4.2 system is included, and Bluetooth LE and Wi-Fi connections are available.
The blue Sainsonic bone conduction headphones we used offer 10-meter Bluetooth 4.0 connectivity and weigh 68 g. This headset is designed for outdoor use, so its loudness is sufficient for outdoor activities (e.g., listening to music while walking) but not so loud as to prevent hearing ambient noise.
The Huawei smartwatch offers a 1.4” round screen with a resolution of 286 pixels per inch. An Android Wear system is included, and connectivity is provided through Wi-Fi and Bluetooth 4.0/4.1. The watch weighs 59 g. A single actuator can be used to make the screen vibrate; the intensity of the vibration was set to maximum in the parameters of the Android Wear system.

3.1.2. Application Design

All three sensory wearable devices were connected via Bluetooth to a Nexus 5 smartphone running a dedicated Android application that we developed specifically for the purposes of the present research. This application worked like a server, computing the location and destination of the user based on the Google Maps APIs and the GPS signal, and sending the corresponding navigation instructions to the wearable device in real time. To help older people use the application, we designed it based on literature recommendations: 12-point sans-serif fonts and large, high-contrast icons and buttons were used, and useful information was displayed directly on the screen of the smartphone with no need to use menus or drag between views. When buttons were pressed, a slight vibration feedback was provided, and the screen was set not to fade, as older people usually exhibit slower response times [57,58].
The application was composed of three main screens. On the welcome screen (see Figure 4a), the destination was entered by typing in a text box. A list of predefined destinations used for the study was set up, providing the users with several routes directly encoded in the app (with no need for internet connectivity). On a second screen, the navigation mode could be chosen: either the AR glasses or the smartwatch for the visual mode, or the bone conduction headphones for the auditory mode (see Figure 4b). On a third screen (see Figure 4c), the route information was given. An overview map showed the path to follow, as providing a map fosters confidence among older people [59]. People could zoom into the map and navigate within it using their fingers. Below the map, a button allowed the user to replay the last instruction provided. A message indicated that the phone, GPS, and device were connected before starting the route.
Both GPS and phone network data were used to improve signal accuracy, in order to locate the pedestrian precisely in the city. However, to deal with the GPS microdisconnections that occur in city centers [60], the system was designed to provide navigation instructions within a range of 20 m around the GPS coordinates at which the pedestrian was supposed to turn. This range seemed large enough to ensure the user would receive the navigation instruction, but small enough to guarantee the instruction would be provided before leaving the intersection. In the Android application, the GPS coordinates of the user, microdisconnections of the GPS, and instructions sent to the wearable device were recorded every second in a log file.
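To make this triggering rule concrete, the sketch below (our illustration in Python, not the study’s Android code) fires an instruction once when a GPS fix falls within the 20 m zone around a turn point; during a microdisconnection it simply waits for the next valid fix. Function and constant names are hypothetical.

```python
# Illustrative sketch of the 20 m triggering rule; names are hypothetical.
import math

TRIGGER_RADIUS_M = 20.0  # zone around the turn coordinates (Section 3.1.2)

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6_371_000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_fire(fix, turn_point, already_fired):
    """Fire the turn instruction once, as soon as a valid fix lands in the zone.

    `fix` is None during a GPS microdisconnection: the decision is then simply
    deferred to the next valid fix, so a brief dropout does not skip the turn
    as long as one fix arrives while the pedestrian is inside the 20 m zone.
    """
    if already_fired or fix is None:
        return False
    return haversine_m(fix[0], fix[1], turn_point[0], turn_point[1]) <= TRIGGER_RADIUS_M
```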

3.1.3. Instructions Used

A set of six turn-by-turn instructions was used to guide the pedestrians: turn left into the next street, turn right into the next street, enter a roundabout from the left, enter a roundabout from the right, make a U-turn (when the participant made an error), and stop at the destination. Navigation information in roundabouts was split into two instructions to avoid confusion in complex intersections, as these types of intersections are a source of difficulty for older people [61]. The first message indicated that participants should enter the roundabout by turning either left or right, and the second allowed the participants to exit the roundabout, using a single “turn left/turn right” message. By default, when no instruction was provided, participants were instructed to proceed straight ahead. Instruction duration was set to 3 s (with the AR glasses, the bone conduction headphones, and the smartwatch, whether played for the first time or replayed on demand) to limit attention requirements [62] while maintaining a high probability of perception by the participants [63].
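The resulting message vocabulary is small enough to write down in full. The following sketch (our illustration, not the authors’ implementation) encodes the six instructions, the fixed 3 s duration, and the two-step roundabout convention; all names are assumptions of ours.

```python
# Hypothetical encoding of the six-instruction vocabulary described above.
from enum import Enum

INSTRUCTION_DURATION_S = 3  # fixed duration, first play or replay on demand

class Instruction(Enum):
    TURN_LEFT = "turn left into the next street"
    TURN_RIGHT = "turn right into the next street"
    ROUNDABOUT_LEFT = "enter the roundabout from the left"
    ROUNDABOUT_RIGHT = "enter the roundabout from the right"
    U_TURN = "make a U-turn"  # sent when the participant makes an error
    STOP = "stop, you have reached the destination"

# Roundabouts use two successive messages: ROUNDABOUT_LEFT/RIGHT to enter,
# then a plain TURN_LEFT/TURN_RIGHT to exit. The absence of any message
# means "proceed straight ahead".
```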
The arrows displayed by the AR glasses and by the smartwatch were the same and were based on previous tests in virtual reality [30,64]. Large flat green arrows were used to improve their perceptibility, even among older people suffering from visual impairments [65] (see Figure 5).
Based on previous studies about navigation in virtual reality [30], the auditory instructions were sounds presented to the left/right ear, ranging from 500 Hz to 1 kHz to facilitate their perceptibility among older people [33]. Sonar-pulse sounds starting at 500 Hz and reaching 1 kHz after 3 s were used to facilitate sound localization [38] when turning to the left or to the right. A 500 Hz sound moving from the left ear to the right ear and vice versa was used for the U-turn instruction. A composition of sounds between 500 Hz and 1 kHz was used for the stop instruction. These sounds were easily heard by all the participants.
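For concreteness, here is a small sketch of such a rising cue: a 500 Hz to 1 kHz sweep lasting 3 s, written to the left or right stereo channel. Note that it uses simple channel panning as a stand-in for the study’s spatialized rendering, and all parameter names are ours.

```python
# Simplified sketch of the rising "sonar pulse" cue (panning, not true
# spatialization); parameters are illustrative assumptions.
import numpy as np

def sonar_sweep(side="left", f0=500.0, f1=1000.0, dur=3.0, sr=44_100):
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    # Linear chirp: instantaneous frequency rises from f0 to f1 over `dur`.
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * dur))
    mono = 0.5 * np.sin(phase)
    stereo = np.zeros((t.size, 2), dtype=np.float32)
    stereo[:, 0 if side == "left" else 1] = mono  # column 0 = left channel
    return stereo  # e.g., playable with sounddevice.play(stereo, sr)
```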

3.2. Methodology for Data Collection

3.2.1. Population

The present field study was conducted with 18 participants (10 men, 8 women) aged 60 to 77 years (M = 68.7, SD = 5.1). Participants were recruited based on the following criteria: being retired (to homogenize the participants’ types of activities), being able to walk 30 minutes without any assistance, and owning a smartphone and being used to interacting with it. The participants were used to navigating in urban environments by themselves and usually used maps and/or GPS to find their way. All participants wore glasses or lenses, and two were partially deaf and wore no hearing aid.
All the participants were asked for their consent prior to the study. Participants received a 30€ voucher as compensation for their time and transport expenses.

3.2.2. Navigation Task

During the study, four routes were navigated: one route with the navigation aid the participant used most often as a pedestrian (a paper map or a navigation application on a smartphone) and three routes with each of the wearable devices. The first route, from the meeting point to the research lab, was always navigated with the usual aid they had chosen. Because they used their usual aid (a digital or paper map), not counterbalancing this route had no impact on any possible learning effects. The other three routes were navigated with the wearable devices (see Figure 6a–d). The first condition aimed to collect data about the usual behavior of the participants prior to the use of the wearable devices. Each route was 1 km long and was located in a rather quiet area of Paris, without too much risk. The route navigated with the usual aid, from the meeting point to the research lab, included one roundabout (two instructions), two left turns, two right turns, and one stop instruction. The three routes navigated with the wearable aids each included one roundabout (two instructions), three left turns, three right turns, and one stop instruction. The first route, navigated with the usual aid, thus included two fewer turns (one left and one right), but all four routes were very similar (1 km long) and took place in the neighborhood of the research lab, where postnavigation interviews were conducted after each route. The whole study lasted about four hours for each participant.
In general, it is important that the different aids not always be tested in the same order, because of possible learning effects. Here, a first phase of the study was carried out with three conditions: the usual map, the AR glasses, and the headphones. The usual map was always tested first because it was not new to the participants, and we counterbalanced the use of the two new aids across the second and third routes. Since the results with the glasses were rather negative, we added the smartwatch condition six months later (with the same participants) to see whether another visual device would give better results. The smartwatch test therefore always came in fourth place; after six months, the participants had probably partly forgotten the previous tests, so any learning effect is likely to be light. It is probably light also because the types of messages and devices are very different for each aid; at most, the participants could learn that there were six messages.
Before each route with one of the wearable devices, an indoor training session took place in the research lab. The session consisted of presenting the instructions to the participants with the related device. The participant had to learn the navigation instructions (which were repeated until all of them were properly recognized) and, depending on the device, the AR screen position, the sound level, or the position of the smartwatch around the wrist was adjusted. Then, for each of the four routes, participants were invited to enter their destination on the smartphone or, for the first route, to search for it with their own map. In the latter case, participants were asked to select the shortest route from the meeting point to the research lab, and the experimenter checked that all the participants chose the same one (see Figure 6a). For each of the four routes, participants were invited to take their time to look at the route before leaving if they wished, and a photograph of the destination was provided.
For each of the wearable devices, participants were told they could press the replay-the-last-instruction button or look at the map whenever they wanted along the route, just as they would with their own map. They were also given information about how the system worked (going ahead until they received an instruction, turning into the next street each time they perceived an instruction, and receiving each instruction within a 20-meter area). Along the way, the participants were closely followed by the experimenter, who ensured their safety. The experimenter kept the phone in his/her hand to ensure the application was working properly but could hand it to the participant at any time. After each change of direction, the experimenter asked participants what had just happened and ensured that they were fine. Participants were free to talk with the experimenter along the route as long as they kept walking. Both the participant and the experimenter wore a small camera to capture the navigation activity.
Data collected during the navigation tasks were the number of repetitions required to learn the instructions before navigating the city, the time needed to reach the destination, the number of errors the participants made before reaching the destination, the number of times the replay button of the app was pressed, the number of times the GPS signal was lost, and the videos of the pedestrians’ behavior (to analyze risky instances of inattention).

3.2.3. Explicitation Interview

After each route, the user experience with the navigation aid was investigated using the Explicitation Interview Technique [66] (“explicitation” is the original French term; in English, both “explicitation” and “elicitation” are used). The Explicitation Interview Technique encourages participants to talk about the cognitive, perceptual, sensory, and affective aspects of what they experienced during the route [67] without giving too many rational explanations [68]. In this type of interview, questions are related to both the synchronic (e.g., “How was it at this moment?”) and diachronic (e.g., “What did you perceive next?”) dimensions of the experience. The interviewer, without inducing any content, helps the participant focus on the experience he/she lived. The Explicitation Interview Technique has been successfully used in previous HCI studies aiming at understanding characteristics of the user experience [69,70,71].
At the end of each interview, after participants had returned to the research lab, some questions dedicated to the previously identified main dimensions impacting the UX of older people with navigation aids were also asked, if not spontaneously addressed during the explicitation interview. These short questions were related to the outdoor perceptibility of the instructions, the mental workload and attention required to perceive the instructions, the attention to the surroundings, the feeling of safety, the ease of learning of the instructions, their understandability in an urban context, the trustworthiness of the device, the feeling of autonomy of the participant, the discretion and aesthetics of the device, the emotions related to the use of the navigation aid, its perceived usefulness, and the expectations of the participant for the future.
The interviews were recorded with the consent of the participants and then fully transcribed to be analyzed.

3.3. Methodology for Data Analysis

3.3.1. Efficiency Analysis

Three variables were measured: the time required for the route, the percentage of success at intersections, and the number of times the instructions were replayed. The time needed to reach the destination (including the time spent looking at the digital or paper map before leaving and while navigating) was compared to the time to destination calculated by Google Maps, and the percentage increase was calculated on this basis. The percentage of success was calculated based on the number of times the participant did or did not follow the instructions provided by the wearable device or his/her personal and usual aid. Roundabouts and simple intersections were considered separately. For the three wearable devices, the number of times the replay button was pressed was also taken into account.
Due to the rather low number of participants and the non-normal distribution of the percentages of success, nonparametric Friedman and Wilcoxon matched-pairs signed-rank tests were carried out on these measures to investigate potential significant differences.
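As an illustration of this analysis pipeline, the sketch below runs a Friedman omnibus test across the four repeated-measures conditions, followed by pairwise Wilcoxon signed-rank tests against the usual aid. The data are simulated placeholders (drawn to match the means and SDs reported in Section 4.1), not the study’s measurements.

```python
# Sketch of the nonparametric analyses with placeholder data.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
n = 18  # participants
# Percentage time increase over the Google Maps estimate, one array per aid:
usual, glasses, phones, watch = (rng.normal(m, s, n) for m, s in
                                 [(47.6, 27.2), (27.1, 16.8),
                                  (19.9, 15.4), (18.0, 11.4)])

stat, p = friedmanchisquare(usual, glasses, phones, watch)
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4g}")

for name, cond in [("glasses", glasses), ("headphones", phones), ("watch", watch)]:
    w, pw = wilcoxon(usual, cond)  # matched-pairs signed-rank test
    print(f"usual vs {name}: W = {w:.1f}, p = {pw:.4g}")
```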

3.3.2. UX Analysis

A hybrid thematic analysis [72] of the interview transcripts was made, based on 10 predefined dimensions supposed to impact the user experience (see Table 1). These dimensions were based on two sources: the literature dedicated to the UX of navigation aids and of older people using technologies, and a preanalysis of the interviews’ content, which helped adapt some of these dimensions. The evolution of the UX over time was also considered during this analysis.
After a first reading of the full transcripts, keywords were chosen to describe the content of each dimension impacting the UX in the interviews. Each verbatim was then coded with the 10 predefined dimensions. A second researcher coded some parts of the interviews, and the inter-coder reliability was fair (Cohen’s kappa k > 0.65) [73]. The dimensions impacting UX and their temporal evolution were discussed by the two coders, ending up with three main phenomena reflecting the quality of the navigation experience with the four navigation aids: the shifts in attention over time, the understanding of the situation over time, and the emergence of affective and aesthetic feelings over time. Two supplementary categories were added, as they were not directly related to the navigation experience but to more prospective evaluations: perceived usefulness and participants’ expectations (see Table 1).
Finally, the keywords for the 10 dimensions were analyzed quantitatively: each keyword was coded as rather positive or negative and summed for each participant, each dimension, and each navigation aid. On this basis, the percentages of satisfied and unsatisfied participants for each dimension were calculated, and chi-square (χ2) analyses were carried out.
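To make this quantitative step concrete, the sketch below computes the two statistics used here: Cohen’s kappa on a doubly coded subset and a chi-square test comparing counts of satisfied versus unsatisfied participants between two aids. All labels and counts are invented for the example; they are not the study’s data.

```python
# Sketch of the reliability and frequency analyses with made-up data.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import chi2_contingency

# Inter-coder reliability on a doubly coded subset of verbatims:
coder_a = ["attention", "trust", "safety", "trust", "aesthetics", "attention"]
coder_b = ["attention", "trust", "safety", "doubt", "aesthetics", "attention"]
print("Cohen's kappa =", round(cohen_kappa_score(coder_a, coder_b), 2))

# 2 x 2 contingency table: [satisfied, unsatisfied] per aid (n = 18 each).
table = [[5, 13],   # hypothetical counts for one aid
         [16, 2]]   # hypothetical counts for another aid
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")
```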

3.3.3. Risky Inattention Analysis

The videos of the routes were watched extensively to identify when the participants were obviously inattentive (stopped in the middle of the street, collided with something or someone, or crossed the street without paying attention to the traffic). To determine whether the navigation aid was the cause of these observed risky inattentions and, thus, whether the use of navigation aids could be detrimental to the mobility and safety of the participants, two categories were defined: (i) inattentions due to the navigation aid (e.g., colliding with someone or something while looking at the paper map or the smartwatch) and (ii) inattentions due to the urban environment (for instance, paying attention to a store, to flowers, etc., instead of the incoming cars while crossing a street) or to the interaction with the experimenter. We focused on the inattentions due to the navigation aids. Chi-square analyses were made on this basis.

4. Results

4.1. Efficiency of the Navigation Aids

Friedman tests highlighted a significant difference in the time needed to reach the destination (p < 0.00001). The time required to navigate the route was significantly longer with the personal navigation aid (a paper map for 16 participants, a smartphone with a digital map application for 2 participants, the maps providing allocentric information with no turn-by-turn instructions) (M = +47.6%; SD = 27.2) than with the three wearable aids: the AR glasses (M = +27.1%; SD = 16.8; Z = 3.11, p < 0.005), the bone conduction headphones (M = +19.9%; SD = 15.4; Z = 3.72, p < 0.0005), and the smartwatch (M = +18%; SD = 11.4; Z = 3.72, p < 0.005). When using their personal map, some participants started the route without even looking at it, while others studied it in detail before leaving, explaining the high variability among participants (see Figure 7). There were no significant differences between the three wearable aids.
Friedman tests also highlighted a significant difference in the percentage of success at simple intersections (p < 0.001). The percentage of success was significantly lower with the AR glasses (M = 82.5%; SD = 18) than with the usual navigation aid (M = 94.4%; SD = 11.5; Z = 2.54, p < 0.05), the bone conduction headphones (M = 96%; SD = 6.6; Z = 2.67, p < 0.01), and the smartwatch (M = 96.8%; SD = 6.1; Z = 3.11, p < 0.005) (see Figure 8). There were no significant differences between the bone conduction headphones, the smartwatch, and the usual aid.
A marginally significant difference between the navigation aids was also observed in roundabouts (Friedman test p = 0.05). Just as with simple intersections, the percentage of success was lower with AR glasses (M = 86.1%; SD = 28.7) than with the usual aid (M = 97.2%; SD = 11.8), the bone conduction headphones (M = 97.2%; SD = 11.8), and the smartwatch (M = 100%; SD = 0). There were no significant differences between the bone conduction headphones, the smartwatch, and the usual aid.
Finally, Friedman tests highlighted significant differences in the number of times the replay button was pressed (p < 0.005). Instructions were more often replayed with the AR glasses (M = 7.7; SD = 8.2) than with the bone conduction headphones (M = 0.8; SD = 1.1; Z = 3.41, p < 0.001) (see Figure 9), while there was no significant difference between the AR glasses and the smartwatch, and between the bone conduction headphones and the smartwatch.

4.2. User Experience with the Navigation Aids

A qualitative analysis of the user experience will be developed first, according to the four global phenomena identified previously (shift in attention, understanding the situation, affective and aesthetic feelings, and prospective evaluations). This will be followed by a quantitative synthesis of the user experience for each type of aid.

4.2.1. Shifts in Attention

Attention sharing emerged as a key element to understand older pedestrians’ navigation experiences. The level of attention required to search for and perceive the navigation instructions with the four navigation aids when walking outdoors did indeed have some consequences on both the level of attention allocated to the environment and the participants’ feeling of being safe.
The attention allocated to the AR glasses in order to perceive the navigation instructions was more substantial than the attention allocated to the usual aid, the bone conduction headphones, and the smartwatch, even though the instructions were directly provided in the field of view with the AR glasses, unlike with the participants’ map and the smartwatch. While only eight participants expressed having trouble perceiving the navigation instructions on their own map (which were “too small”), two with the bone conduction headphones (the participants who were partially deaf did not experience difficulty with the bone conduction headphones), and four with the smartwatch, 13 expressed difficulties perceiving information with the AR glasses, leading to a higher level of attention to the instructions and a higher workload with the latter device. The instructions provided by the AR glasses were indeed described as less perceptible than the instructions provided by the three other navigation aids. The arrows were “too faded” (according to seven participants), “too brief” (12 participants), or even incomplete in the field of view (five participants), as explained by one of the participants: “It was almost invisible, like a fly in my field of view.” To deal with these perception issues and the related higher attentional workload, 10 participants adopted strategies over time to better see the AR arrows, consisting of focusing one eye on the AR screen, thereby reducing the visual attention to the surroundings.
Some participants also had difficulty perceiving the warning vibration of the smartwatch (four out of 18), but not the arrows displayed on it. Hence, they kept their arm close to their chest while walking to ensure they would feel the vibration of the smartwatch or see the screen turning on to display the navigation arrow. However, these strategies did not require a high level of visual attention toward the device, unlike the AR glasses.
Consequently, 11 participants declared they were more able to look at the surroundings with the usual aid, the bone conduction headphones, and the smartwatch than with the AR glasses, despite the fact that the usual aid and the smartwatch required users to look at them from time to time. With the AR glasses, participants tended to look at their feet (seven of them) or far ahead of them to increase the likelihood they would see the arrows. They also had difficulty paying attention to the buildings and people around them (seven participants) because of a reduced field of view: “I could see 60° on the left side, and maybe 30°, or less, on the right side,” said a participant.
With the smartwatch, the headphones and the usual aid, the vocabulary used by the participants to describe their experiences was mostly related to taking a walk or even daydreaming, while many participants expressed difficulty in enjoying the landscape or engaging in social interactions while wearing the AR glasses: “Once we were chatting, I thought to myself I should stop chatting, stay focused on the screen because an arrow might appear.” Safety concerns were more frequent with the AR glasses than with the three other navigation aids, thereby leading to a higher feeling of lack of safety with the AR glasses. Fourteen participants felt unsafe when navigating with the AR glasses, compared to four participants with the usual aid, four with the bone conduction headphones, and none with the smartwatch. Once again, participants adopted strategies to improve their feeling of safety over time, such as walking slowly or being very cautious: “Without the glasses, I would have crossed the road rapidly. But I was afraid not being 100% attentive, so I preferred being cautious.” Some participants also mentioned that they acted more promptly (automatically) with the three wearable devices than with their usual aid because the instructions did not require specific concentration when using the smartwatch or the bone conduction headphones (two participants, e.g., “It was an instinctive reaction: turn here, right now”), or because they were afraid of forgetting the instruction due to their high mental workload when using the AR glasses (three participants).

4.2.2. Understanding the Situation

Participants found it easy to learn and understand the visual and auditory instructions before leaving the research lab (outside the context of navigation), as a participant explained: “It was almost like a part of me, of my own body. I interpreted what I saw immediately. And it was the same for the sounds I heard.” Similarly, the modes of operation of the three wearable navigation devices were quickly understood, both the use of two messages in the roundabouts (one for entering and one for exiting) and the rule of going straight ahead when no message was provided.
In the context of navigation, the number of times the participants reported having a doubt about where to go was lower overall with the three wearable devices than with their own usual map. Indeed, 11 participants experienced at least one difficulty in finding their way with their own aid, while there were only eight with the AR glasses, two with the bone conduction headphones, and three with the smartwatch.
The concerns with the usual aid were mainly related to a mismatch between the map and the environment: participants located themselves somewhere else on the map than their actual location in the real environment or read the map upside down.
The doubts reported with the wearable devices (mostly the AR glasses) were often related to the instruction temporality (its precision in time), with instructions being provided too early or too late to be properly understood: “With the glasses, it was more unsteady [than with the headphones], it changed a lot along the time,” explained a participant, leading to greater feelings of disorientation when using the AR glasses. Unsteady and late/early instructions were related to doubt and hasty actions. This kind of doubt was not mentioned when using the usual digital or paper maps.
Some mismatches between the system’s rules of operation and participants’ expectancies also caused doubts when using the wearable devices, especially in areas that were perceived as roundabouts by the participants but not by the system: “I thought it was a roundabout even if I had no entering roundabout message. I turn right and then I was waiting for a new message to enter the street,” said a participant. Some participants also interpreted the instructions to turn into a street as a street-crossing instruction, leading to ambiguous situations: “When I heard the sound to turn left, there was a crosswalk close to me. I turned and then I faced a wall. I stopped because the rule was to go ahead without any message but here it was impossible”. None of these types of doubts were reported during the navigation with the participants’ usual map.
However, contrary to their usual paper map, the wearable devices also provided the participants with clues to help find their way, such as the U-turn instructions, which were found to be really reassuring: “I know that if I make an error, the system will tell me to make a U-turn, which is a great guarantee,” explained a participant. The participants were also helped by the system’s rules to interpret the situation. For example, the participants knew they had to always turn into the first street after an instruction, or to go straight ahead when no instruction was provided: “The application told me to turn left. It was a bit late, but I knew it was always the first street, so I turned,” said a participant. Interestingly, with the three wearable devices, only one participant looked at the map on the smartphone to help her find her way with the wearable AR glasses, while, during the first route with their own digital or paper map, most of the participants were regularly looking at their map and trying to find clues in the environment. It indicates two very different ways of navigating the routes.

4.2.3. Affective and Aesthetic Feelings

Over time, the navigation experiences were shaped by the level of trust and autonomy in the navigation aid, feelings of comfort and discretion with the wearable device, and the other emotions of the participants (e.g., shame and guilt). Because of the familiar character of their usual aid, these dimensions were almost absent in the interviews related to the participants’ own map. This section is therefore dedicated to the three wearable devices used in this study.
Overall, the bone conduction headphones and the smartwatch were associated with positive feelings that remained steady over time. The feeling of trust in the device was high for 14 participants with the bone conduction headphones and 16 participants with the smartwatch. The trust in the device seemed to be shaped by three main elements: (1) a good perception of the navigation instructions and of the attention required: “I know I will hear the sounds, so I fully trust the device,” said a participant about the bone conduction headphones; (2) the accuracy of the U-turn instructions when having a doubt: “the U-turn was reassuring because I knew that if I made a mistake, I would get it before the next street”; and (3) the feeling of saving some time when using the aid: “When I got things wrong, I walked 20 m. It was a short distance, and it was reassuring, because I dislike being late.” Concerning the physical pleasure and aesthetic dimension, these two wearable devices were also associated with both physical pleasure and discretion (according to 17 participants for the bone conduction headphones and 18 participants for the smartwatch). Weight, steadiness, and painlessness appeared to be the main criteria for physical pleasantness, as explained by one participant: “Headphones are surprisingly pleasant. No weight, no pressure on the temporal bone. It was almost like I wore nothing.” The smartwatch, like the bone conduction headphones, was valued as a regular, aesthetically discreet device that was not noticeable in the street: “The discretion is total, people do not notice you are wearing an aid,” said a participant about the smartwatch; these devices were therefore likely to be used pleasantly outdoors. During the study, 13 participants noted that they had enjoyed using these two devices to find their way.
The AR glasses led to more mixed feelings and emotions, which did not remain constant over time for most of the participants. The feeling of trust tended to decline over time for 11 participants when using the AR glasses: “When I started with the glasses, I was relaxed, just as I was with the headphones. But I got worried quickly,” explained a participant. The feeling of self-efficacy was highly correlated with the perception and understanding of the instructions with the AR glasses. Participants felt “stupid”, “incompetent”, or “guilty” when not seeing arrows that were supposed to be obvious. Most of the participants also felt uncomfortable with the aesthetics of the AR glasses, due to the device’s lack of discretion and its heaviness. Indeed, the AR glasses were described as “conspicuous”, leading some participants to feel ashamed and ridiculous and to avoid the eyes of others in the street: “I crossed the street because I saw people coming in front of me. I wanted to avoid their eyes, because the glasses were not discreet.” Because of the lack of discretion of this device, some participants also mentioned that they would be perceived as a disabled or dependent person by people in the street: “I felt like Ray Charles, but I had no white cane”. The AR glasses were also a source of eyestrain and nausea for three participants: “My eyes were tired at the end.” Because of this, participants were often disappointed by this highly technological device.

4.2.4. Prospective Evaluations

Considering the usefulness of the three wearable devices, the older pedestrians who took part in the study indicated that they would be more likely to wear bone conduction headphones (10 participants) or a smartwatch (11 participants) to navigate the city in the near future than AR glasses (two participants).
To improve the wearable devices for future use, the participants also suggested changes that could help with perceiving and understanding the navigation instructions, as well as improve comfort with the devices. Most of these comments were related to the intensity and duration of the instructions provided by the AR glasses, mostly because of the poor perceptibility of the arrows in outdoor environments (six participants). Some participants also wished the warning vibration of the smartwatch were stronger, and imagined that a warning vibration could be added to the AR glasses as well. For all the devices, a go-ahead instruction as well as a notification of being on the correct street could be added to ensure the user is still on the route (five participants). Finally, a lighter, more discreet device should be used to display arrows in the field of view.
Table 2 sums up the percentage and count of participants who reported difficulties with or evaluated negatively the four navigation aids according to the nine UX dimensions addressed during the interviews (excluding the prospective evaluations).
The number of participants who experienced difficulty perceiving the instructions was significantly greater with the AR glasses than with the bone conduction headphones and the smartwatch (χ2(1,36) = 9.03, p < 0.01). The number of participants who experienced a high attentional workload was also significantly greater with the AR glasses than with the bone conduction headphones (χ2(1,36) = 10.6, p < 0.01), the smartwatch (χ2(1,36) = 4.2, p < 0.05), and the usual aid (χ2(1,36) = 4.2, p < 0.05). Similarly, perceived attention to the surroundings was significantly lower with the AR glasses than with the headphones (χ2(1,36) = 15.8, p < 0.001), the smartwatch (χ2(1,36) = 4.05, p < 0.05), and the participants’ usual aid (χ2(1,36) = 7.42, p < 0.01). More participants also stated they had difficulty looking at the surroundings with the smartwatch than with the bone conduction headphones (χ2(1,36) = 5.8, p < 0.02).
In the indoor learning context just before the navigation, the spatialized sounds presented through the bone conduction headphones were difficult to interpret for significantly more participants than the arrows displayed by the smartwatch (χ2(1,36) = 7.2, p < 0.01). In the outdoor urban context, however, the number of participants who experienced at least one instance of doubt was significantly higher with the usual aid than with both the headphones (χ2(1,36) = 9.75, p < 0.01) and the smartwatch (χ2(1,36) = 7.48, p < 0.01). Similarly, the number of disoriented participants was higher with the AR glasses than with the bone conduction headphones (χ2(1,36) = 4.98, p < 0.05).
Concerning the feelings of the participants, they trusted the bone conduction headphones (χ2(1,36) = 5.6, p < 0.02) and the smartwatch (χ2(1,36) = 9.75, p < 0.01) significantly more than the AR glasses. They also enjoyed the aesthetics and discretion of the bone conduction headphones (χ2(1,36) = 14.5, p < 0.001) and the smartwatch (χ2(1,36) = 18, p < 0.001) more than those of the AR glasses. In addition, their other emotions were more often negative with the AR glasses than with the bone conduction headphones (χ2(1,36) = 4, p < 0.05) and the smartwatch (χ2(1,36) = 5.46, p < 0.02).
As a consequence, the perceived usefulness of the AR glasses was marginal in comparison with the bone conduction headphones (χ2(1,36) = 8, p < 0.01) and the smartwatch (χ2(1,36) = 9.75, p < 0.01).

4.3. Observed Risky Inattentions

In the section above, we looked at the subjective feelings of being inattentive; here, we quantify the observable instances of inattention that are potentially risky. Observations and interviews are complementary: with the observable behavior, we capture inattentions the participant may not be aware of, and with the interviews, we capture inattentions that are not observable (such as “I looked but did not see because I was focused on my thoughts”). Table 3 sums up the number of observed instances of risky inattention coded in the video analysis. We did not consider safe inattentions (e.g., the person stops on the pavement to look at the paper map) or inattentions due to an interaction with the researcher (who was following the participant in case of dangerous behavior) rather than to the device itself. The risky inattentions due to the devices appeared in two situations: when participants focused on the aid’s messages (listening to, sensing, or looking at them) (1) while they were walking or (2) when they stopped while already crossing a street.
Considering all the observed instances of inattention, the ones due to the use of the aid were significantly more numerous with the paper or digital maps of the participants than with the AR glasses (χ2(1,31) = 18.6, p < 0.001) and the bone conduction headphones (χ2(1,31) = 11.06, p < 0.001). The participants tended to walk while reading their map, leading to risks and collisions (see Figure 10).
It is in fact surprising that using the glasses did not produce more inattention. This result probably stems from the fact that the participants were afraid of being inattentive with this new device and were more cautious than with their usual map. Also, they were apparently not aware of their inattentions with the usual aid: they did not mention in the interviews being afraid of inattention with the usual map (habit lowering vigilance), whereas they did mention it with the glasses. This is an interesting result, highlighting the complementarity of the two methodologies.

5. Discussion

This study aimed to compare the navigation experiences (in terms of efficiency, user experience, and inattentions) of older pedestrians using their usual navigation aid (paper or digital maps) and three sensory wearable devices to find their way in an urban environment. The sensory navigation aids were based on market-available devices (AR glasses, bone conduction headphones, and a smartwatch), and the turn-by-turn navigation instructions were designed based on previous literature on older people (see e.g., [30]).
Most of the participants (16 out of 18) used a paper map when asked to bring their own navigation aid. This observation is in line with previous studies [27,30] showing that older people preferentially use paper maps as pedestrians, while GPS devices have become common among older drivers.
The results highlighted that the participants took longer to reach their destination with their usual aid than with the three wearable devices. This could be explained by the difficulty older pedestrians usually face when navigating an outdoor environment with a map [15,19]. The time needed to prepare the route with the map, in particular, differed greatly among the participants. Preparing the route is a key element for older people in unknown areas [74], and the high interindividual variability may reflect important differences in the participants’ trust in their orientation capabilities [75]. Previous studies have brought to light that older pedestrians vary considerably in the self-evaluation of their spatial abilities [30,76].
The percentage of success in turning into the right street was higher with the bone conduction headphones, the smartwatch, and the usual aid than with the AR glasses. In some cases, the percentage of success was even 100% (with the smartwatch in roundabouts), thereby confirming that technological aids could help compensate for the difficulties older pedestrians usually face in urban environments [61]. The percentage of success in roundabouts was close to the percentage of success at simple intersections, although we might have expected greater difficulty in roundabouts [61]. The participants were often more attentive to the instructions in roundabouts than at simple intersections because they understood well how the system functioned in this type of configuration (the use of two instructions in these areas) and were able to anticipate the second instruction to exit into the correct street. However, the way people subjectively perceive the environment may differ from the geographic databases used by the navigation system. For example, some configurations were labeled as roundabouts by the participants but not by the system, which led to a discrepancy between the participants’ expectancies and the system’s functioning. Thus, geographic databases adapted to pedestrian navigation should be used instead of databases designed for car navigation, as pedestrians have a different perspective than drivers and more possible paths [27].
The percentage of success in finding the right way was lower with the AR glasses but it did not differ between the bone conduction headphones, the smartwatch and the usual aid of the participants. The average percentage of errors was low among the participants using their own map, which could be explained by the familiarity with this aid and the longer time required to prepare the route before leaving.
Even though the performance did not differ between the bone conduction headphones, the smartwatch, and the usual aid, differences were observed in the way the participants acted to find their way. Using the usual aid seemed to rely on a sum of spatial cues drawn from both the map (paper or digital) and the environment (topography, street names, etc.). The participants actively searched for these clues to solve the spatial problem they were facing. Hence, the participants were free to choose the clues needed for their navigation at their own pace and were thus possibly more autonomous than with the wearable devices. Searching for clues in the environment, however, requires a high level of attention, and it generated many instances of inattention, as highlighted by the video analysis, even if the participants did not seem conscious of these episodes with their usual aid during the interviews. This highlights a potential safety benefit of the wearable navigation aids, with which the number of instances of inattention was low. An active search for clues also requires spatial cognitive abilities, which vary a lot among older people and can be specifically altered [8].
Navigation with turn-by-turn wearable devices relied on a more localized spatial environment (i.e., participants did not have to locate themselves on a map, just to look for the next street to turn into) and was more contingent on the trust the participants had in the navigation aid, as the trust in the device depended a lot on its temporality and its steadiness over time. Even though the participants actively solved spatial problems with the wearable devices too, the matching between the spatial clues found in the environment and the instructions provided by the system depended a lot on the system itself (i.e., the possibility of replaying the instructions, the temporality of the instructions, and the rules of the system’s functioning). This could have led to a higher dependence on the system when navigating with such devices. Noticeably, the participants had the possibility of using a map with the three wearable devices, which they did not do, highlighting that the navigation strategies differed between the wearable devices on the one hand and the usual aid on the other.
The wayfinding processes based on wearable devices such as the ones we developed for the present study seemed to rely, like the usual map, on cognitive abilities (to understand the navigation instructions and map them onto the environment), but also on other types of abilities related to the use of digital and dynamic artefacts, such as memorizing the instructions, interpreting the interface, or understanding and inferring the rules governing the system [77]. For example, based on the instructions they had previously received, participants were able to infer their temporality: they knew that the next instruction would be provided 15 to 20 meters before the intersection, and that otherwise they could continue straight ahead. In contrast, with their usual map, they tended to count the number of intersections before the next turn. Wearable devices could thereby benefit older people with lower spatial cognitive capacities, because using such devices relies on other types of abilities that directly depend on the system design.
Beyond poorer performance, the AR glasses also led to a poorer user experience than the bone conduction headphones and the smartwatch on every UX dimension. While perceiving and understanding the arrow instructions during the indoor familiarization phase was rather easy for the participants, the perceptibility of the instructions and their temporality in an urban navigation context were problematic, as were the comfort and discretion of the device.
In our analyses, the user experience was shaped by three main phenomena over time. These phenomena are largely congruent with the three layers of need proposed by Fang et al. [78] for pedestrian navigation aids. Shifts in attention correspond to the physical sense layer (the perception of visual, auditory, or haptic cues to find one’s way), understanding the situation corresponds to the safety layer (the ability to manage the risks and errors related to the route and the environment), and the affective and aesthetic feelings relate to the mental satisfaction layer (the need for confidence, comfort, and respect).
Shifts in attention: The arrows inlaid in the field of view by the AR glasses and the warning vibration of the smartwatch were difficult to perceive for some participants. We see two possible explanations: (1) visual attention to the surroundings interfered with the perception of the arrows superimposed in the same field of view, or (2) the signals were too weak in intensity and/or felt too short in duration when participants were actively involved in a navigation activity in an urban context (we note the difference in perceptibility between a quiet indoor place and the urban context outdoors). The importance of perceptibility was already pointed out in [49]. As the perceptibility of the arrows was reduced in the urban navigation context, participants had to be more attentive to search for and perceive the navigation instructions. This limited their ability to look at the environment and to ensure their own safety. Various strategies were used to improve the perception of the instructions, such as replaying the instructions (the number of replays being higher with the AR glasses than with the two other devices), relying on preparative attention [79] by looking precisely at the AR screen, or immobilizing the arm close to the chest with the smartwatch. Attention sharing also affected the feeling of safety with the AR glasses: the participants tended to be more cautious because they feared they might not be at their best. We also observed that the participants tended to act more promptly, and perhaps more automatically and instinctively, when guided by a turn-by-turn device, which could have impacted their safety negatively.
Understanding the situation: We observed more doubts about where to go with the usual map and with the AR glasses. Contrary to the paper maps, which were read at the participant’s own pace, the temporality of the instructions (the moment an instruction was provided relative to the moment the participant had to turn) seemed to be a key element for decision making with the wearable navigation devices. Impressions of unsteady and belated instructions made the situation harder to understand and had a negative impact on the participants’ trust in the aid, especially with the AR glasses. As previously mentioned, the matching between the spatial cues found in the environment and the instructions depended heavily on the system itself when using a wearable aid (the accuracy of the U-turn instructions, the replay button, the temporality of the instructions, and the system’s rules). Providing instructions 10 meters before the intersection appeared to be necessary to ensure calm and safe conditions for decision making.
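To make concrete why this temporality is so fragile, here is a minimal sketch, under our own assumptions (the constant, the coordinates, and the helper functions are illustrative, not the system’s code), of a distance-based trigger: an instruction fires once the distance between the current GPS fix and the next decision point falls below a lead distance.

```python
import math

# Hypothetical trigger logic, not the study's implementation. The deployed
# system delivered instructions 15-20 m before the intersection; 10 m
# appeared to be the minimum for calm decision making.
LEAD_DISTANCE_M = 20.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_deliver(position, next_turn, already_delivered):
    """Fire once, as soon as the pedestrian enters the lead window.

    With roughly 10 m of GPS error on smartphones, a window of the same
    order of magnitude means a late fix turns into a belated instruction,
    which is exactly the unsteadiness participants reported.
    """
    distance = haversine_m(*position, *next_turn)
    return (not already_delivered) and distance <= LEAD_DISTANCE_M

# Example: about 18 m from the corner, within the window, so deliver now.
here, corner = (48.80490, 2.12040), (48.80506, 2.12045)
if should_deliver(here, corner, already_delivered=False):
    print("Turn right")
```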
Several elements could affect the temporality of the instructions, among them the GPS accuracy and the quality of the connection between the smartphone and the wearable device. However, neither the poorer GPS accuracy in urban environments [60] nor the roughly 10-meter precision of smartphone GPS [80] can explain the differences subjectively perceived between the AR glasses and the two other devices we used. It could be supposed that the Bluetooth connectivity was poorer with the AR glasses than with the bone conduction headphones and the smartwatch, but we were not able to verify this hypothesis. The participants’ greater fear of missing a message and poorer feeling of safety with the AR glasses were also likely to affect the perceived temporality of the instructions, as stress can bias the perceived duration of stimuli [81].
Affective and aesthetic feelings: Emotions contrasted strongly among the participants across the four routes and the four aids. Some participants were frustrated at not perceiving the instructions, while others were pleasantly surprised by their ability to navigate by themselves with a highly technological device. We observed that comfort and discretion were key concerns for the participants, with the bone conduction headphones and the smartwatch being evaluated more positively than the AR glasses. The AR glasses were described as heavy on the nose and lacking discretion. Participants wished they could hide the device they were using, so as to look “like everybody” in the street. The aesthetics, the discretion (e.g., color and size), and the weight of devices designed to be “wearable” should be considered carefully to foster physical and psychological comfort among older pedestrians. Aesthetics is sometimes said to be a minor dimension of UX for older people [57,82], who are supposed to focus primarily on the pragmatic qualities of devices rather than the hedonic ones. In this study, aesthetics was a major concern. In addition, it should be noted that adding sunshades to the AR glasses may have affected the UX by reducing their discretion, even though it primarily aimed to improve the perceptibility of the arrows.
Understandability, perceived usefulness [83], pleasantness [84], and aesthetics [47] are traditionally identified as elements that impact the user experience, and the results of this study are consistent with that literature. However, several dimensions were specific to older people’s user experience with navigation aids: the perceptibility of the instructions and the related increase in attentional demand [49], the temporality of the instructions, and the accuracy of the GPS signal [50] were discussed in depth in the interviews. The feelings of trust and safety, and discretion, are also often associated with older people’s user experience [52]. The study did not bring to light new dimensions of UX, but it helped understand how the user experience was shaped by all these dimensions over time. Hence, the perceptibility of the instructions and the attentional demand impacted the feeling of safety, which is paramount for older people. The older participants were not “passively” following the turn-by-turn instructions provided by the wearable devices: they had to understand the messages and the mode of operation of the system, and to keep an eye on the environment, thereby challenging the “infantilization” hypothesis traditionally associated with the use of turn-by-turn aids [23].
While our results highlight that navigation among older pedestrians could be fostered by wearable devices providing turn-by-turn instructions, in comparison to a map, some limitations of our work should be acknowledged. The limited number of participants (18), imposed by the need to interview each of them in detail (each participant spent 4 hours in the study), limits the generalization of the results. We did not compare the performance and user experience of the older participants with those of younger ones, nor between men and women. The route and the interview also differed slightly between the usual aid and the three wearable devices (although we observed that the simpler route used with the usual aid did not make it more efficient), and some dimensions could not be compared across the four conditions (such as aesthetics, which no participant mentioned with the usual map); fully comparable conditions should be preferred in future studies.

6. Conclusions

Our study highlighted that older pedestrians can benefit from bone conduction headphones and a smartwatch providing turn-by-turn instructions to find their way in an urban environment. With these two wearable devices, the number of instances of inattention and the time required to navigate the route were lower than with the participants’ usual aid (mostly paper maps), and the user experience and the percentage of success were better than with AR glasses providing the same types of instructions. The problems faced with the AR glasses may stem from the lack of maturity of this type of device, which remains heavy and projects visual information that is not sufficiently contrasted to be perceived outdoors. This study could be replicated in the coming years with more advanced AR glasses. Comparisons of performance and user experience should also be made with younger people, and between genders, in order to point out potential differences in the use of such aids.

Author Contributions

Conceptualization, A.M., B.C., and A.D.; Methodology, A.M., B.C., and A.D.; Software, A.M.; Formal Analysis, A.M., B.C., and A.D.; Investigation, A.M.; Writing—Original Draft Preparation, A.M.; Writing—Review and Editing, B.C. and A.D.; Visualization, A.M.; Supervision, B.C. and A.D.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Dunbar, G.; Holland, C.; Maylor, E. Older Pedestrians: A Critical Review of the Literature; Road Safety Research Report No. 37; University of Warwick: Coventry, UK, 2004.
2. Tournier, I.; Dommes, A.; Cavallo, V. Review of safety and mobility issues among older pedestrians. Accid. Anal. Prev. 2016, 91, 24–35.
3. Levasseur, M.; Desrosiers, J.; Noreau, L. Is social participation associated with quality of life of older adults with physical disabilities? Disabil. Rehabil. 2004, 26, 1206–1213.
4. Commissariat Général du Développement Durable (CGDD). La Mobilité des Français: Panorama Issu de l’enquête Nationale Transports et Déplacements 2008; Service de l’observation et des Statistiques: Paris, France, 2010.
5. Lithfous, S.; Dufour, A.; Després, O. Spatial navigation in normal aging and the prodromal stage of Alzheimer’s disease: Insights from imaging and behavioral studies. Ageing Res. Rev. 2013, 12, 201–213.
6. Roche, R.P.; Mangaoang, M.; Commins, S.; O’Mara, S.M. Hippocampal contributions to neurocognitive mapping in humans: A new model. Hippocampus 2005, 15, 622–641.
7. Burles, F.; Guadagni, V.; Hoey, F.; Arnold, A.; Levy, R.M.; O’Neill, T.; Iaria, G. Neuroticism and self-evaluation measures are related to the ability to form cognitive maps critical for spatial orientation. Behav. Brain Res. 2014, 271, 154–159.
8. Head, D.; Isom, M. Age effects on wayfinding and route learning skills. Behav. Brain Res. 2010, 209, 49–58.
9. Taillade, M.; N’Kaoua, B.; Arvind Pala, P.; Sauzéon, H. Cognition spatiale et vieillissement: Les nouveaux éclairages offerts par les études utilisant la réalité virtuelle. Rev. Neuropsychol. 2014, 6, 36–47.
10. Klencklen, G.; Després, O.; Dufour, A. What do we know about aging and spatial cognition? Reviews and perspectives. Ageing Res. Rev. 2012, 11, 123–135.
11. Poucet, B.; Lenck-Santini, P.P.; Hok, V.; Save, E.; Banquet, J.P.; Gaussier, P.; Muller, R.U. Spatial Navigation and Hippocampal Place Cell Firing: The Problem of Goal Encoding. Rev. Neurosci. 2004, 5, 89–107.
12. Chittaro, L.; Burigat, S. Augmenting audio messages with visual directions in mobile guides: An evaluation of three approaches. In Proceedings of the 7th International Conference on Human Computer Interaction with Mobile Devices & Services, Salzburg, Austria, 19–22 September 2005.
13. Emmerson, C.; Guo, W.; Blythe, P.; Namdeo, A.; Edwards, S. Fork in the road: In-vehicle navigation systems and older drivers. Transp. Res. Part F Traffic Psychol. Behav. 2013, 21, 173–180.
14. Dejos, M. Approche Ecologique de l’Evaluation de la Mémoire Episodique et de la Navigation Spatiale dans la Maladie d’Alzheimer. Ph.D. Thesis, Université Bordeaux, Bordeaux, France, 2013.
15. Aubrey, J.B.; Li, K.Z.; Dobbs, R. Age differences in the interpretation of misaligned “You-Are-Here” maps. J. Gerontol. 1994, 49, 29–31.
16. Kolbe, T.H. Augmented Videos and Panoramas for Pedestrian Navigation. In Proceedings of the 2nd Symposium on Location Based Services and TeleCartography, Vienna, Austria, 28–29 January 2004.
17. Tkacz, S. Learning map interpretation: Skill acquisition and underlying abilities. J. Environ. Psychol. 1998, 18, 237–249.
18. Craik, F.I.M.; Byrd, M. Aging and Cognitive Deficits. In Aging and Cognitive Processes. Advances in the Study of Communication and Affect; Craik, F.I.M., Trehub, S., Eds.; Springer: Boston, MA, USA, 1982.
19. Liljedahl, M.; Lindberg, S.; Delsing, K.; Polojärvi, M.; Saloranta, T.; Alakärppä, I. Testing Two Tools for Multimodal Navigation. Adv. Hum. Comput. Interact. 2012.
20. McGookin, D.; Brewster, S.; Priego, P. Audio bubbles: Employing non-speech audio to support tourist wayfinding. In Proceedings of HAID 2009, Dresden, Germany, 10–11 September 2009.
21. Wenig, N.; Wenig, D.; Ernst, S.; Malaka, R.; Hecht, B.; Schöning, J. Pharos: Improving navigation instructions on smartwatches by including global landmarks. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, Vienna, Austria, 4–7 September 2017.
22. Frey, M. CabBoots: Shoes with Integrated Guidance System. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction, Baton Rouge, LA, USA, 15–17 February 2007.
23. Pielot, M.; Boll, S. Tactile Wayfinder: Comparison of tactile waypoint navigation with commercial pedestrian navigation systems. Lect. Notes Comput. Sci. 2010, 6030, 76–93.
24. Kray, C.; Coors, V.; Elting, C.; Laakso, K. Presenting Route Instructions on Mobile Devices: From Textual Directions to 3D Visualization. Explor. Geovisualization 2005, 529–550.
25. Walther-Franks, B.; Malaka, R. Evaluation of an Augmented Photograph-based Pedestrian Navigation System. In Proceedings of the 9th International Symposium on Smart Graphics, Salamanca, Spain, 28–30 May 2008.
26. Giannopoulos, I.; Kiefer, P.; Raubal, M. GazeNav: Gaze-Based Pedestrian Navigation. In Proceedings of MobileHCI ’15, Copenhagen, Denmark, 24–27 August 2015; pp. 337–346.
27. Wither, J.; Au, C.E.; Rischpater, R.; Grzeszczuk, R. Moving beyond the map. In Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services, Munich, Germany, 27–30 August 2013.
28. Kim, S.; Dey, A.K. Simulated augmented reality windshield display as a cognitive mapping aid for elder driver navigation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009.
29. Hile, H.; Liu, A.; Borriello, G. Visual Navigation for mobile devices. IEEE Multimed. 2009, 17, 16–25.
30. Montuwy, A.; Dommes, A.; Cahour, B. Helping Older Pedestrians Navigate in the City: Comparisons of Visual, Auditory and Haptic Guidance Instructions in a Virtual Environment. Behav. Inf. Technol. 2018.
31. Rehrl, K.; Häusler, E.; Leitinger, S.; Bell, D. Pedestrian navigation with augmented reality, voice and digital map: Final results from an in situ field study assessing performance and user experience. J. Locat. Based Serv. 2014, 8, 75–96.
32. Wingfield, A.; Tun, P.; McCoy, S.L. Hearing Loss in Older Adulthood: What It Is and How It Interacts with Cognitive Performance. Curr. Dir. Psychol. Sci. 2005, 14, 144–148.
33. Bouccara, D.; Ferrary, E.; Mosnier, I.; Bozorg Grayeli, A.; Sterkers, O. Presbyacousie. EMC Oto-Rhino-Laryngol. 2005, 2, 329–342.
34. Heinroth, T.; Buhler, D. Arrigator—Evaluation of a speech-based pedestrian navigation system. In Proceedings of the 4th IET International Conference, Santa Margherita Ligure, Italy, 14–16 July 2008.
35. Holland, S.; Morse, D.R.; Gedenryd, H. AudioGPS: Spatial audio navigation with a minimal attention interface. Pers. Ubiquitous Comput. 2002, 6, 253–259.
36. Etter, R.; Specht, M. Melodious walkabout: Implicit navigation with contextualized personal audio contents. Ext. Proc. Pervasive 2005, 49, 43–49.
37. Wilson, J.; Walker, B.N.; Lindsay, J.; Cambias, C.; Dellaert, F. SWAN: System for wearable audio navigation. In Proceedings of the International Symposium on Wearable Computers, ISWC, Boston, MA, USA, 11–13 October 2007.
38. Tran, T.V.; Letowski, T.; Abouchacra, K.S. Evaluation of acoustic beacon characteristics for navigation tasks. Ergonomics 2000, 43, 807–827.
39. Goodman, J.; Brewster, S.; Gray, P. How can we best use landmarks to support older people in navigation? Behav. Inf. Technol. 2005, 24, 3–20.
40. Brunet, L. Etude Ergonomique de la Modalité Haptique comme Soutien à l’Activité de Déplacement Piéton Urbain: Un Projet de Conception de Produit Innovant. Ph.D. Thesis, Université Paris-Sud, Orsay, France, 2014.
41. Pielot, M.; Poppinga, B.; Heuten, W.; Boll, S. PocketNavigator: Studying tactile navigation systems in-situ. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’12, Austin, TX, USA, 5–10 May 2012.
42. Wickremaratchi, M.; Llewelyn, L.G. Effects of ageing on touch. Postgrad. Med. J. 2006, 82, 301–304.
43. Meier, A.; Matthies, D.; Urban, B.; Wettach, R. Exploring Vibrotactile Feedback on the Body and Foot for the Purpose of Pedestrian Navigation. In Proceedings of the 2nd International Workshop on Sensor-Based Activity Recognition and Interaction, Rostock, Germany, 25–26 June 2015.
44. Czaja, S.J.; Charness, N.; Fisk, A.D.; Hertzog, C.; Nair, S.N.; Rogers, W.A.; Sharit, J. Factors Predicting the Use of Technology: Findings from the Center for Research and Education on Aging and Technology Enhancement (CREATE). Psychol. Aging 2006, 21, 333–352.
45. Montuwy, A.; Dommes, A.; Cahour, B. What Sensory Pedestrian Navigation Aids for the Future? A Survey Study. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), Montreal, QC, Canada, 21–26 April 2018.
46. Février, F.; Gauducheau, N.; Jamet, E.; Rouxel, G.; Salembier, P. La prise en compte des affects dans le domaine des interactions homme-machine: Quels modèles, quelles méthodes, quels bénéfices? Le Trav. Hum. 2011, 2, 183–201.
47. Hassenzahl, M. The thing and I: Understanding the relationship between user and product. In Funology: From Usability to Enjoyment; Blythe, M., Overbeeke, C., Monk, A.F., Wright, P.C., Eds.; Kluwer: Dordrecht, The Netherlands, 2003; pp. 31–42.
48. Thüring, M.; Mahlke, S. Usability, aesthetics and emotions in human–technology interaction. Int. J. Psychol. 2007, 42, 253–264.
49. Arning, K.; Ziefle, M.; Li, M.; Kobbelt, L. Insights into user experiences and acceptance of mobile indoor navigation devices. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia, Ulm, Germany, 4–6 December 2012.
50. Sarjakoski, L.T.; Nivala, A.M. Adaptation to Context—A Way to Improve the Usability of Mobile Maps. In Map-Based Mobile Services: Theories, Methods and Implementations; Meng, L., Zipf, A., Reichenbacher, T., Eds.; Springer: New York, NY, USA, 2005; pp. 107–123.
51. Kim, S.; Gajos, K.; Muller, M.; Grosz, B. Acceptance of Mobile Technology by Older Adults: A Preliminary Study. In Proceedings of the 2016 Conference on Mobile Human-Computer Interaction, San Jose, CA, USA, 7–12 May 2016.
52. Holzinger, A.; Searle, G.; Kleinberger, T.; Seffah, A.; Javahery, H. Investigating Usability Metrics for the Design and Development of Applications for the Elderly. In ICCHP 2008, LNCS 5105; Springer: Berlin/Heidelberg, Germany, 2008; pp. 98–105.
53. Lee, C.; Coughlin, J.F. PERSPECTIVE: Older Adults’ Adoption of Technology: An Integrated Approach to Identifying Determinants and Barriers. J. Prod. Innov. Manag. 2014.
54. Koelle, M.; El Ali, A.; Cobus, V.; Heuten, W.; Boll, S. All about Acceptability? Identifying Factors for the Adoption of Data Glasses. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), Denver, CO, USA, 6–11 May 2017.
55. Westerkull, P.; Sjostrom, D. Performance and design of a new bone conduction device. In Proceedings of the Fifth International Congress on Bone Conduction Hearing and Related Technologies, Lake Louise, AB, Canada, 20–23 May 2015.
56. Kim, K.J.; Shin, D.H. An acceptance model for smart watches: Implications for the adoption of future wearable technology. Internet Res. 2015, 25, 527–541.
57. Faisal, M.; Yusof, M.; Romli, N. Design for Elderly Friendly: Mobile Phone Application and Design that Suitable for Elderly. Int. J. Comput. Appl. 2014, 95, 3.
58. Fisk, A.; Rogers, W.; Charness, N.; Czaja, S.; Sharit, J. Designing for Older Adults: Principles and Creative Human Factors Approaches; CRC Press: Boca Raton, FL, USA, 2009.
59. Sjolinder, M.; Hook, K.; Nilsson, L.G.; Andersson, G. Age differences and the acquisition of spatial knowledge in a three-dimensional environment: Evaluating the use of an overview map as a navigation aid. Int. J. Hum. Comput. Stud. 2005, 63, 537–564.
60. Marais, J.; Ambellouis, S.; Meurie, C.; Ruichek, Y. Image processing for a more accurate GNSS-based positioning in urban environment. In Proceedings of the 22nd ITS World Congress, Bordeaux, France, 5–9 October 2015.
61. Davidse, R. Assisting the Older Driver: Intersection Design and In-Car Devices to Improve the Safety of the Older Driver. Ph.D. Thesis, University of Groningen, Groningen, The Netherlands, 2007.
62. Oulasvirta, A.; Tamminen, S.; Roto, V.; Kuorelahti, J. Interaction in 4-Second Bursts: The Fragmented Nature of Attentional Resources in Mobile HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI 2005, Portland, OR, USA, 2–7 April 2005.
63. Ng, A.; Chan, A. Finger Response Times to Visual, Auditory and Tactile Modality Stimuli. In Proceedings of the International MultiConference of Engineers and Computer Scientists, IMECS 2012, Hong Kong, China, 14–16 March 2012.
64. Montuwy, A.; Cahour, B.; Dommes, A. Questioning User Experience: A Comparison between Visual, Auditory and Haptic Guidance Messages among Older Pedestrians. In Proceedings of the 2017 CHItaly Conference on Computer-Human Interaction, Cagliari, Italy, 18–20 September 2017.
65. Zhao, Y.; Hu, M.; Hashash, S.; Azenkot, S. Understanding Low Vision People’s Visual Perception on Commercial Augmented Reality Glasses. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), Denver, CO, USA, 6–11 May 2017.
66. Vermersch, P. L’Entretien d’Explicitation; ESF Editeur: Paris, France, 1994.
67. Cahour, B.; Salembier, P.; Zouinar, M. Analysing Lived Experience of Activity. Trav. Hum. 2016, 79, 2.
68. Palmer, S.; Schloss, K. An ecological valence theory of human color preference. Proc. Natl. Acad. Sci. USA 2010, 107, 8877–8882.
69. Créno, L.; Cahour, B. Chronicles of lived experiences for studying the process of trust in carpooling. In Proceedings of the European Conference on Cognitive Ergonomics, Vienna, Austria, 1–3 September 2014.
70. Light, A. Adding method to meaning: A technique for exploring peoples’ experience with technology. Behav. Inf. Technol. 2006, 25, 175–187.
71. Obrist, M.; Comber, R.; Subramanian, S.; Piqueras-Fiszman, B.; Velasco, C.; Spence, C. Temporal, Affective, and Embodied Characteristics of Taste Experiences: A Framework for Design. In Proceedings of the 2014 CHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014.
72. Fereday, J.; Muir-Cochrane, E. Demonstrating Rigor Using Thematic Analysis: A Hybrid Approach of Inductive and Deductive Coding and Theme Development. Int. J. Qual. Methods 2006, 5, 1.
73. Byrt, T. How Good Is That Agreement? Epidemiology 1996, 7, 561.
74. Phillips, J.; Walford, N.; Hockey, A.; Foreman, N.; Lewis, M. Older people and outdoor environments: Pedestrian anxieties and barriers in the use of familiar and unfamiliar spaces. Geoforum 2013, 47, 113–124.
75. Ariel, R.; Moffat, S. Age-related similarities and differences in monitoring spatial cognition. Aging Neuropsychol. Cogn. 2017, 25, 3.
76. Turano, K.; Munoz, B.; Hassan, S.; Duncan, D.; Gower, E.; Roche, K.; Keay, L.; Munro, C.; West, S. Poor Sense of Direction Is Associated with Constricted Driving Space in Older Drivers. J. Gerontol. 2009, 3, 348–355.
77. Millerand, F. La dimension cognitive de l’appropriation des artefacts communicationnels. In Internet: Nouvel Espace Citoyen; Jauréguiberry, F., Proulx, S., Eds.; L’Harmattan: Paris, France, 2002; pp. 181–203.
78. Fang, Z.; Li, Q.; Shaw, S.L. What about people in pedestrian navigation? Geo-Spat. Inf. Sci. 2015, 18, 135–150.
79. Siéroff, E.; Auclair, L. L’attention préparatoire. In Neurosciences Cognitives de l’Attention Visuelle; Michael, G., Ed.; Solal Editeur: Marseille, France, 2007.
80. Von Watzdorf, S.; Michahelles, F. Accuracy of Positioning Data on Smartphones. In Proceedings of LocWeb 2010, Tokyo, Japan, 29 November 2010.
81. Van Hedger, K.; Necka, E.A.; Barakzai, A.K.; Norman, G.J. The influence of social stress on time perception and psychophysiological reactivity. Psychophysiology 2017, 54, 706–712.
82. Hsieh, M.H.; Pan, S.L.; Setiono, R. Product-, corporate- and country-image dimensions and purchase behavior: A multicountry analysis. J. Acad. Mark. Sci. 2004, 32, 251–270.
83. Nielsen, J. Usability Engineering; Morgan Kaufmann: San Francisco, CA, USA, 1993.
84. Venkatesh, V.; Bala, H. Technology Acceptance Model 3 and a Research Agenda on Interventions. Decis. Sci. 2008, 39, 273–315.
Figure 1. Participant wearing the AR glasses with additional sun windows.
Figure 2. Participant wearing the bone conduction headphones.
Figure 3. Smartwatch displaying an arrow.
Figure 4. Screenshots of the application. (a) The welcome screen, (b) the choice of the navigation mode, and (c) the navigation screen.
Figure 5. Green arrow inlaid in the field of view through the AR glasses.
Figure 6. The four routes that were navigated during the study. (a) The first route, navigated with the usual aid; (b) the second route, navigated with the AR glasses or bone conduction headphones; (c) the third route, navigated with the AR glasses or bone conduction headphones; and (d) the fourth route, navigated with the smartwatch.
Figure 7. Percentage of additional time needed to navigate the route in comparison with the time computed by Google Maps. Vertical bars represent standard deviations.
Figure 8. Percentage of success at simple intersections. Vertical bars represent standard deviations.
Figure 9. Number of times the replay button was pressed. Vertical bars represent standard deviations.
Figure 10. The participant was reading her map while walking and could not avoid colliding with a blind person approaching her.
Table 1. User experience (UX) dimensions and phenomena highlighted from the analysis of the participants’ interviews and the literature.
UX Dimensions | Phenomena Impacting UX Over Time
Perceptibility of the instructions | Shifts in attention
Mental workload and attention to the instructions | Shifts in attention
Attention to the surroundings and feeling of safety | Shifts in attention
Ease of learning of the instructions | Understanding of the situation
Understandability of the instructions in urban context | Understanding of the situation
Trustworthiness | Affective and aesthetic feelings
Aesthetics and discretion | Affective and aesthetic feelings
Other feelings | Affective and aesthetic feelings
Perceived usefulness | Prospective evaluations
Participants’ expectations about navigation aid | Prospective evaluations
Table 2. Number and percentage of participants facing difficulties while navigating the city with the four aids. The percentages in bold font are significantly lower than the ones that are underlined on the same row. NA means the related dimension was absent in the interviews.
Difficulty | Usual Aid | AR Glasses | Bone Conduction H. | Smartwatch
Bad perceptibility of the instructions | 44% (8) | 72% (13) | 22% (4) | 22% (4)
High mental workload and attention to the instructions | 22% (4) | 56% (10) | 5% (1) | 22% (4)
Lack of attention to the surroundings | 17% (3) | 61% (11) | 0% (0) | 28% (5)
Difficulty in learning the instructions | NA | 11% (2) | 33% (6) | 0% (0)
Doubts about the instructions understanding in urban context | 61% (11) | 44% (8) | 11% (2) | 17% (3)
Weak trustworthiness | NA | 61% (11) | 22% (4) | 11% (2)
Lack of aesthetics and discretion | NA | 67% (12) | 5% (1) | 0% (0)
Other negative feelings | NA | 67% (12) | 33% (6) | 28% (5)
Lack of usefulness for oneself | NA | 89% (16) | 44% (8) | 39% (7)
Table 3. Number of observed instances of inattention with the four navigation aids.
Inattentions | Usual Aid | AR Glasses | Bone-Cond. H. | Smartwatch
Due to the aid | 20 | 1 | 3 | 4
Due to the surroundings | 2 | 8 | 6 | 0
