Effects of Augmented Reality Object and Texture Presentation on Walking Behavior

Wearable devices that display visual augmented reality (AR) are now on the market, and we are becoming able to see AR displays on a daily basis. By using AR displays in everyday environments, we can benefit from the ability to display AR objects in places where it has been difficult to place signs, to change the content of the display according to the user or time of day, and to display video. However, there has not been sufficient research on AR displays' effect on users in everyday environments. In this study, we investigate how users are affected by AR displays. In this paper, we report our research results on AR displays' effect on the user's walking behavior. We conducted two types of experiments: one on the effects of displaying AR objects on the user's walking path, and the other on the effects of changing the floor texture with AR on the user's walking behavior. The experiments showed that the AR objects and textures affected the user's walking behavior.


Introduction
Wearable devices that can display virtual objects as augmented reality (AR) are appearing one after another, and we are getting ready to use visual AR technology in our daily lives. For example, HoloLens, Magic Leap 1, and Nreal Light are already on sale. AR technology can show users virtual objects overlapping the real-world environment, so the user can interact with virtual objects and real-world objects simultaneously. Virtual objects can be presented in any location, even where it is difficult to set up a signboard, and the display can be changed depending on the location, the time of day, and the user. For daily use in the city, AR can display signs for advertising and directions. It can also display routes to destinations (Google Maps AR), tourist information [1], museum exhibits [2,3], shopping applications (IKEA Place), and games (Pokémon Go).
However, there are some points to note when displaying virtual objects. For example, the appearance of a machine-like object in a magnificent natural landscape will not be an agreeable sight for the user. Additionally, if the displayed virtual object is large or blocks the user's view, it can cause anxiety and a sense of danger. A user who is walking outdoors may suddenly stop to look at a virtual object; since the object is not visible to the people nearby, this behavior is unpredictable and potentially dangerous. Therefore, it is important to show virtual objects with appropriate appearances, sizes, and positions, according to the situation and the surroundings. A lot of research has been done on the proper display position of virtual objects [4-6]. Other studies have found that users tend to focus primarily on the virtual objects, neglecting the physical world beyond them [7-9]. Syiem et al. have focused on users' excessive attention to virtual content in AR environments [10]. They examined whether the attentional tunneling effect [11] is observed in users introduced to modern AR technology in a museum setting, and argued that understanding user attention is crucial to improving user experience when designing AR-supported installations. Mark et al. have investigated the influence of AR content on user behavior and found that the effects remain even after the device is removed [12]. These studies have shown that there is an appropriate placement of virtual objects and that displayed virtual objects affect people. However, these studies have only investigated the effects of virtual objects in limited environments. In this study, we investigate how virtual objects placed in everyday environments affect people. In this paper, we focus on walking, one of the activities that users perform in their daily lives.
We investigate whether users feel uncomfortable due to virtual objects placed in their environment and whether their walking is affected.
In this study, we assume an environment in which AR technology is used daily in the city. In order to use AR appropriately and safely in such an environment, it is necessary to investigate the following items about the influence on the users.

• What are its characteristics?
• How can we use it effectively?
• What kinds of usage should be avoided?
In this paper, we investigate the influence of the displayed virtual objects on walking. It is expected that the path, speed, and comfort of walking will change depending on the displayed virtual object. By clarifying the above, we aim to be able to display virtual objects appropriately in everyday environments. In addition, we aim to make it easier for users to walk and to guide their walking path by the influence of displayed virtual objects.
In this paper, we conduct two experiments to investigate the change in gait when a virtual object is displayed in front of a pedestrian. In the first experiment, as a basic investigation, we examine how the gait changes when a virtual object is displayed in front of a pedestrian; here we assume that virtual objects are used to guide pedestrians along a walking path. In the second experiment, we investigate how the gait changes when the floor texture is modified with AR, and whether changing the texture has a positive effect on the user or poses a danger.
The remainder of this paper is organized as follows: in Section 2, we explain related work. We then describe the first experiment in Section 3 and the second experiment in Section 4. In Section 5, we present our considerations. Finally, we present our conclusion and future work in Section 6.
Note that we have already introduced the first experiment in the short paper [13]. In this paper, we reconsider the results and discussion and add a second experiment.

Systems Using AR Technology
Various uses of AR technology have been studied, and it is expected to become common in day-to-day life in the near future. Chang et al. have enhanced the user experience by using AR technology in an art museum [14]. Wang et al. have developed an AR navigation system using 3-D image overlay and stereo tracking for dental surgery [15]. Fang et al. have proposed an AR system that facilitates robot programming and trajectory planning while considering the dynamic constraints of the robots [16]. Using this system, users can preview the simulated motion, perceive any possible overshoot, and resolve discrepancies between the planned and simulated paths before the execution of a task. Walker et al. use AR technology to visualize the motion intentions of robots to improve the efficiency of human-robot collaboration [17]. Seo et al. have proposed an on-site tour guide system using AR where past life is virtually reproduced and visualized at cultural heritage sites [18]. There have been many other uses of AR technology for learning about cultural heritage [19]. Bark et al. have developed an AR navigational aid system that helps a car driver recognize where the car can go around a curve easily [20]. They have confirmed that the driver is able to keep his/her eyes up ahead to be careful. Several studies have used AR technology for learning [9,21].
As introduced in the above paragraph, a number of studies anticipate that AR technology will be used in many situations. AR technology is expected to permeate our daily life, and it must be used safely and appropriately in our daily-life environments.

Visual Perception of Virtual Objects
Many studies examine the characteristics of virtual objects. It is well known that human visual perception of virtual objects differs from that of real objects [22-24]. Peillard et al. have confirmed that, when using an HMD, people tend to perceive virtual objects located to the side as farther away than objects located in front of them [25]. Diaz et al. have found that people perceive the position of an AR object as closer than that of a real-world object [26]. Their experimental results demonstrated that a shadow image is an important cue for estimating depth. Rosales et al. have found similar characteristics [27]: people perceive the position of an AR object shown on the ground as closer than an object shown above the ground if shadow images are not displayed. Ogawa et al. have investigated how virtual hand realism affects perceived graspable object sizes as the size of the virtual hand changes in a virtual reality (VR) environment [28]. Argelaguet et al. analyzed the locomotion behavior of users avoiding static obstacles in a large CAVE-like immersive projection environment [29]. Comparing real and virtual obstacles during locomotion tasks, they found that participants avoided virtual objects at a greater distance than real objects. Swan et al. asked participants to perform a walking task with a tablet-based AR display [30]. They measured perceived distances between virtual and real objects and found a significant difference in distance estimation.
There is a possibility that users may not see a virtual object as the presenter intends. It is therefore necessary to investigate the characteristics of AR displays before exploring their widespread application. Previous studies have not sufficiently investigated how a user's gait changes when a virtual object, or an AR texture on the floor surface, is displayed while the user is walking with a wearable AR display. This study clarifies how the user's walking path changes in response to virtual objects and AR textures and how the user feels about the virtual objects. We expect that users will avoid virtual objects by an exaggerated margin while walking, and that displaying AR textures will make users hesitant to walk or will change their walking speed.

Experimental Purpose
In this experiment, we investigate how users' walking changes when virtual objects are displayed in front of them using AR as a preliminary study. As more and more people will be wearing AR display devices in their daily lives, the number of opportunities to see virtual objects in the city will increase. We will investigate how the virtual objects displayed in the city affect people's walking. We will also examine the impressions that people have of the displayed virtual objects. In this study, we assume a situation in which AR objects are used to guide pedestrians' walking paths.
As described in Section 2, we hypothesize that participants will walk farther away from a virtual object than from a real object. We place a virtual object that blends into the environment and an object that does not blend into the environment. We also hypothesize that the participants will walk further away from the object that does not blend into the environment because it is difficult for them to grasp the distance between them and the object.

Experimental Method
In the experiment, the participant walks outdoors while wearing the AR display device. We investigate how the virtual objects presented on the AR display device change the participant's walking. In this experiment, we use HoloLens as the AR display device; Table 1 lists its specifications. The HoloLens has a narrow viewing angle and low resolution, but if users are affected even with such a device, we can assume that the effect will be stronger with a higher-precision device. If the same experiment is conducted later with a high-precision device, the results can be compared with those in this paper to clarify the effect of device precision.
We prepare four types of virtual objects (Object-1 to Object-4) and one real object (Signboard), assuming that the objects give a user instructions about walking direction outdoors. Object-1 is a car, Object-2 is a white cube, and Object-3 is an arrow pointing diagonally downwards. Object-4 is a sentence that says, "Please walk on the right/left side" in Japanese. Figure 1 shows the appearance of the objects as a participant sees them through HoloLens. Table 2 lists the size of each object. The tops of Object-3 and Object-4 are located 1.0 m from the ground. The real object is a signboard with a picture and a sentence ("Please walk on the right/left side" in Japanese).
There are two types of guidance using AR displays: implicit guidance and explicit guidance. Implicit guidance is less cognitively demanding because the user does not have to think about the meaning of the guidance. However, there are cases where users do not understand the meaning of the guidance and do not behave as the guidance provider intended. With explicit guidance, we can expect the user to behave as the guidance provider intends, but the user needs to look at the guidance carefully and think about its meaning. In this experiment, we selected the four virtual objects with implicit and explicit guidance in mind.
Object-1 and Object-2 are implicit guidance, and we expect the participant to walk alongside the objects while avoiding collision with them. Object-3 and Object-4 are explicit guidance, and we expect the participant to understand the meaning of the instruction clearly. For Object-1, we selected a car object because it is supposed to blend in with the surrounding environment; participants should be able to walk without discomfort when shown an object that blends into the surroundings. For Object-2, we selected a white rectangular object, assuming that it would not blend in with the surrounding environment; participants may have difficulty walking and feel uncomfortable when shown an object that does not blend in. For Object-3, we selected an arrow, assuming that the amount of information in the guidance is small; different participants may take different walking paths because of the small amount of information. For Object-4, we selected a sentence, assuming that the amount of information is sufficient; although the information is sufficient, participants will need to approach the object to read the text. Table 3 lists the classified features of each object.

            Virtual Objects                          Real Object
            Implicit           Explicit
            Object-1 (Car)     Object-3 (Arrow)      Signboard
            Object-2 (Cube)    Object-4 (Sentence)

Figure 2 shows the experiment environment. We conducted the experiment in an outdoor parking lot with a width of approximately seven meters. Before starting the experiment, the participant wears the HoloLens, which displays nothing, and walks for approximately 30 s as a test walk to get used to walking while wearing the device. After the test walk, the participant walks freely toward the goal line, which is 16 m from the start point, while seeing the virtual objects or the signboard. We measure the sideways deviation from the straight-ahead line from the start point at every two meters up to the goal line, with the guidance direction as a positive value. The side (left or right) to which each object guides the participant is set randomly, as is the order in which the objects are shown. Each participant walks five times in total. Each object is displayed so that its front is positioned six meters from the start point. When guiding to the right, we place the right end of the object straight ahead from the start point, as shown in Figure 2; this prevents differences in object size from affecting the measured sideways deviation. Figures 1 and 2 show the situation when guiding to the right.

After each walk, the participant answers four questions. As listed in Table 4, Question-A is, "Did you feel that the object blended with the surroundings?". Question-B is, "Did you feel that you were instructed to go to a particular direction by the object?". Question-C is, "Did you feel that it was difficult to walk because of the object?". Question-D is "Did you feel uncomfortable because of the object?". The participant answers via a 7-point Likert scale ranging from "Strongly Disagree" to "Strongly Agree".

Question    Content
A           Did you feel that the object blended with the surroundings?
B           Did you feel that you were instructed to go to a particular direction by the object?
C           Did you feel that it was difficult to walk because of the object?
D           Did you feel uncomfortable because of the object?
To measure the sideways deviation, we set a video camera two meters behind the start point. The experimenter walks approximately one meter behind the participant and raises his hand every time the participant passes one of the measurement points, which are set every two meters. After the experiment, we measure the distance using the video frames at the times when the experimenter raised his hand.
The participants were 10 men between the ages of 21 and 25. They were undergraduate and graduate students, and all were visually healthy.

Figure 3 shows the results for the sideways movement distance. The gray lines show the results for each participant, and the orange lines show the averages (error bars are the standard deviations). In the Object-3 experiment, the result of one participant was excluded because he stopped after walking approximately ten meters. Figure 4 shows a summary of the averages. The maximum values of the averages for Object-1 to Object-4 were more than 1.0 m, whereas the maximum value of the average for the real object was approximately 0.80 m. The participants moved sideways more when seeing the virtual objects than when seeing the real object. One possible reason for this difference is drift in the displayed position of the virtual objects, since the HoloLens system cannot fix their position firmly. Another possible reason is that a person perceives the position of a virtual object as closer than that of a real object [26,27]; the participants may have perceived the virtual objects as larger than intended and therefore moved farther sideways. The standard deviations for the virtual objects were larger than that for the real object, which suggests individual differences in how people respond to a virtual object. Figure 5 shows the results of four participants. Participant-1 started moving sideways abruptly when he arrived at a position directly in front of the virtual objects, which suggests that some users approach quite close to a virtual object before avoiding it. In the future, we will investigate this phenomenon in detail with more participants.

Statistical tests were conducted on the participants' movements around the six-meter point where the object was displayed, focusing on the distances at four, six, and eight meters, with two factors: the point factor and the object factor. The mean values at four, six, and eight meters are shown in Figure 6 (error bars are the standard deviations), and Figure 7 shows the mean values for the 4-m, 6-m, and 8-m points. A two-factor within-participant ANOVA showed that the main effect of the point factor was significant (F(2, 18) = 24.52, p < 0.001, η_p² = 0.732, 1 − β = 0.999), the main effect of the object factor tended toward significance (F(4, 36) = 2.164, p = 0.092, η_p² = 0.194, 1 − β = 0.742), and the point × object interaction tended toward significance (F(8, 72) = 1.988, p = 0.06, η_p² = 0.181, 1 − β = 0.876). Mauchly's sphericity test, applied to the effects with more than two degrees of freedom, indicated that sphericity was violated for the point factor (Mauchly's W = 0.151, p < 0.001); after applying the Greenhouse-Geisser degrees-of-freedom correction, the main effect of the point factor remained significant (G-G corrected p < 0.001). Sphericity was not violated for the object factor (Mauchly's W = 0.105, p = 0.059). For the point × object interaction, sphericity was violated (Mauchly's W ≈ 0, p = 0.001), and after correction the interaction did not reach significance (G-G corrected p = 0.143).
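The Greenhouse-Geisser correction applied here rescales both degrees of freedom of the ANOVA by an epsilon estimated from the sample covariance of the repeated measures. The following is a minimal sketch of how that epsilon is computed for a single within-subject factor; it is an illustrative implementation, not the analysis code used in this study, and the function name `gg_epsilon` and the toy data are our own.

```python
import numpy as np

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon for one within-subject factor.

    data: (n_subjects, k_conditions) array of repeated measures.
    Returns epsilon in [1/(k-1), 1]; both ANOVA degrees of freedom
    are multiplied by it. Illustrative sketch only.
    """
    n, k = data.shape
    S = np.cov(data, rowvar=False)           # k x k sample covariance
    C = np.eye(k) - np.ones((k, k)) / k      # double-centering matrix
    Sc = C @ S @ C                           # double-centered covariance
    # Epsilon = tr(Sc)^2 / ((k-1) * tr(Sc @ Sc)); for symmetric Sc,
    # tr(Sc @ Sc) equals the sum of its squared entries.
    return np.trace(Sc) ** 2 / ((k - 1) * np.sum(Sc * Sc))
```

When sphericity holds exactly, epsilon is 1 and the correction leaves the degrees of freedom unchanged; the more sphericity is violated, the closer epsilon falls toward its lower bound of 1/(k − 1).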
In the following, we proceed with the analysis for reference. Multiple comparisons (α = 0.05, two-tailed) with paired t-tests for the main effect of the point factor revealed that the mean of 62.2 cm at the four-meter point was significantly smaller than the mean of 88.6 cm at the six-meter point (t(49) = 8.212, adjusted p < 0.001) and the mean of 100.5 cm at the eight-meter point (t(49) = 7.895, adjusted p < 0.001), and that the mean at the six-meter point was significantly smaller than that at the eight-meter point (t(49) = 4.872, adjusted p < 0.001). For the main effect of the object factor, multiple comparisons with paired t-tests (α = 0.05, two-tailed) showed that the mean of 96.3 cm for Object-1 was significantly greater than the mean of 64.0 cm for Signboard (t(29) = 4.669, adjusted p < 0.001), the mean of 84.3 cm for Object-2 was significantly greater than that for Signboard (t(29) = 3.438, adjusted p = 0.006), the mean of 75.4 cm for Object-3 tended to be smaller than the mean of 98.7 cm for Object-4 (t(29) = 3.438, adjusted p = 0.006), and the mean for Object-4 was significantly greater than that for Signboard (t(29) = 6.025, adjusted p < 0.001). The method of Benjamini & Hochberg [31] was used to adjust the above p-values. Although this analysis is informal, with the exception of Object-3, participants walked farther away from the virtual objects than from the real object. The results support the hypothesis that participants walk farther away from a virtual object than from a real object. On the other hand, there was no difference in the participants' walking paths between the object that blended into the environment and the object that did not.
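The Benjamini & Hochberg step-up adjustment used for these p-values can be written in a few lines. The following is an illustrative implementation of the published procedure, not the script used for the analysis above.

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, in the original order.

    Each sorted p-value p_(i) is scaled by m/i, and the results are
    made monotone from the largest p-value downward. Illustrative
    sketch; library routines such as statsmodels'
    multipletests(method="fdr_bh") offer a well-tested equivalent.
    """
    m = len(pvals)
    # Indices that would sort the p-values in ascending order.
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        running_min = min(running_min, pvals[i] * m / (rank + 1))
        adjusted[i] = min(running_min, 1.0)
    return adjusted
```

For example, `bh_adjust([0.01, 0.04, 0.03, 0.005])` yields adjusted values of 0.02, 0.04, 0.04, and 0.02, so all four hypothetical comparisons would survive at α = 0.05.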
Next, we compare the results of the implicit objects (Object-1 and Object-2) with those of the explicit objects (Object-3 and Object-4). The standard deviation for Object-3 (Arrow) was large, and some participants commented, "I thought that there was something at the point of the arrow." We expected Object-3 to provide explicit guidance, but in fact the participants could not understand the meaning of the instruction; therefore, we do not treat Object-3 as an explicit object in the following. Comparing the implicit objects with the explicit object, we see that the participants tended to return to the center after passing the implicit objects, although no statistically significant differences were found in the above tests. In contrast, they continued to walk on the left/right side after passing the explicit object. The same tendency appears for the real object. Because the implicit objects (Object-1 and Object-2) made the participants dodge them, the participants seem to have returned to the center, much as they would after avoiding a real-world obstacle such as a car or a cardboard box. If we want a user to keep walking on one side using implicit objects, we should line up multiple implicit objects. The standard deviations for the explicit object were smaller than those for the implicit objects, indicating that individual differences in sideways movement are larger for the implicit objects.
Comparing Object-1 (which blends with the surroundings) with Object-2 (which does not), there is no difference between the maximum values of the averages. However, the graph in Figure 4 confirms that the participants started to move sideways earlier when seeing Object-1 than when seeing Object-2. As mentioned above, Participant-1 (whose result is shown in Figure 5) started to move sideways abruptly when seeing the AR objects compared to the real-world object. It is possible that people start to avoid an object at a relatively distant point if the object blends with the surroundings.
Figures 8-11 show the results of the questionnaire. We conducted the Friedman test on each result, which showed significant differences in Question-A (chi-squared = 26.0, df = 4, p < 0.01), Question-B (chi-squared = 24.3, df = 4, p < 0.01), Question-C (chi-squared = 12.9, df = 4, p < 0.05), and Question-D (chi-squared = 9.74, df = 4, p < 0.05). We then conducted the Wilcoxon rank sum test with Holm correction on each result, which showed significant tendencies in Question-A and Question-B. In the results of Question-A, there are differences between Object-1 (Car) and Object-2 (Cube), between Object-2 and Object-4 (Sentence), between Object-2 and the real object, and between Object-3 (Arrow) and the real object. In the results of Question-B, there are differences between Object-1 and the real object and between Object-2 and the real object. There was no significant difference for Question-C or Question-D.
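This analysis pipeline, an omnibus Friedman test followed by pairwise Wilcoxon tests with Holm correction, can be sketched with SciPy as follows. For paired per-participant ratings, the signed-rank test is the paired analogue of the rank sum test named above, and we use it here; the `ratings` array is random illustrative data, not the measurements from this experiment.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical 7-point Likert ratings: 10 participants x 5 conditions
# (Object-1..Object-4 and Signboard). Random data for illustration.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(10, 5))

# Omnibus Friedman test across the five conditions.
chi2, p_omnibus = stats.friedmanchisquare(*ratings.T)

# Pairwise Wilcoxon signed-rank tests between all condition pairs.
# zero_method="zsplit" keeps zero differences in the ranking instead
# of failing when many participants give identical ratings.
pairs = list(combinations(range(5), 2))
raw_p = [stats.wilcoxon(ratings[:, a], ratings[:, b],
                        zero_method="zsplit").pvalue
         for a, b in pairs]

# Holm step-down adjustment of the pairwise p-values.
m = len(raw_p)
order = np.argsort(raw_p)
adj_p = np.empty(m)
running_max = 0.0
for rank, idx in enumerate(order):
    running_max = max(running_max, raw_p[idx] * (m - rank))
    adj_p[idx] = min(running_max, 1.0)
```

A pairwise difference is then reported as significant only if its Holm-adjusted p-value stays below α = 0.05.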

Figure 9. The result of Question-B: Did you feel that you were instructed to go to a particular direction by the object? (†: p < 0.10)
Relationship between Question-A, Question-C, and Question-D. There was a trend that the higher the value of Question-A, the lower the values of Question-C and Question-D. A high value for Question-A means that the participants felt the shown object blended with the surroundings; high values for Question-C and Question-D mean that the participants found the object difficult to walk past and unpleasant to view. From this result, we can see that an object that blends with the surroundings is better when we want to guide walking direction outdoors.
Comparison of the virtual objects and the real object. Comparing the results of the virtual objects (Object-1 to Object-4) with the real object (Signboard), the value for the real object was higher than the values for Object-2 and Object-3 on Question-A. Object-1 was able to blend with the surroundings, similar to the real object, since we experimented in a parking lot.
Although the values for the real object were the lowest on Question-C and Question-D, the values for Object-1 and Object-4 were similar to those of the real object. We confirmed that Object-1 and Object-4 did not cause discomfort in the parking lot.
Comparison of the implicit objects and the explicit object. Comparing the results of the implicit objects (Object-1 and Object-2) with the explicit object (Object-4), the values for the implicit objects were lower than the value for the explicit object on Question-B. We confirmed that the implicit virtual objects were able to guide the participants to one side without imposing a high cognitive load.
Features of Object-2 (Cube). The results for Object-2 (Cube) show a low value for Question-A and high values for Question-C and Question-D. As we expected, Object-2 did not blend into the surroundings; as a result, the participants found it difficult to walk because of the object and felt uncomfortable. This result indicates that virtual objects that do not blend in with the surrounding environment should not be displayed casually. On the other hand, such an object draws the user's gaze, which can be used to present information that should not be overlooked. In the future, we plan to further explore the use of virtual objects that do not blend into the surrounding environment.
Features of Object-4 (Sentence). The results for Object-4 (Sentence) show a high value for Question-A and low values for Question-C and Question-D. The participants accepted the floating sentence, even though this would not be possible in a real environment; people have many opportunities to see text on signs and posters in the city, so the floating text seemed acceptable. We found that text display can be used without causing discomfort when there is information to be conveyed explicitly.

Experimental Purpose
In Experiment 1, we investigated the effect of displaying a single virtual object on the user's walking. In Experiment 2, we investigate the effect on walking of using AR to make the texture of a floor surface, such as a corridor, appear to change. We believe that people are more strongly influenced by changes over a large area such as the floor surface; the user's gait may change as an unconscious or physiological reaction, regardless of the user's intention. Previous work has shown that visual stimuli can affect walking. For example, Furukawa et al. have proposed a method that changes a pedestrian's course using a vection field in which optical flow is presented on the ground [32]; the field is created using a lenticular lens whose appearance changes with the viewing angle. In our experiment, we investigate the effects of changes in floor texture using three different textures for the AR display: an escalator, a moving walkway, and moving stripes.
We hypothesize that displaying an escalator texture will generate an escalator effect [33,34], making it harder for participants to climb the stairs. We hypothesize that participants will feel that their walking speed is faster or slower by displaying a moving walkway. We hypothesize that the display of moving stripes will change the participants' walking path by causing vection in the moving stripes' direction.
We use the HoloLens as the AR display device. Although AR is displayed on the floor or stairs, the parts of the floor or stairs where AR is not displayed remain visible. Therefore, we cover part of the HoloLens with black felt, as shown in Figure 12, so that only the display area is visible.

Experimental Method
We investigate whether the broken escalator phenomenon occurs when the AR display of a stationary escalator is applied to stairs. The broken escalator phenomenon [33,34] is a sensation of a loss of balance or dizziness when riding an inoperable escalator. We investigate whether the effects that occur in real-world environments can also be generated by AR displays. We hypothesize that the display of the AR texture will create a broken escalator phenomenon, making it difficult for the participant to climb the stairs.
The participant wearing the HoloLens climbs six stairs under three conditions: stairs without an AR display, stairs with a stationary escalator (Figure 13a), and stairs with a white staircase (Figure 13b). Since misalignment between the staircase and the AR display may itself make climbing difficult, we also display the white staircase to investigate the effect of the escalator display after eliminating that factor, as shown in Figure 13b. After climbing the stairs with an AR display, the participant answers the question, "Compared to the stairs without the AR display, the stairs are more difficult to climb." After climbing the stairs with the stationary escalator, the participant answers an additional question: "The AR escalator looked like a real one." The participant answers via a 5-point Likert scale ranging from "Strongly Disagree" to "Strongly Agree".
The participants were five people (three males and two females) between the ages of 21 and 24. They were undergraduate and graduate students with healthy vision.

Result
Figure 14 shows the results of the questionnaire on the difficulty of climbing, and Figure 15 shows the results of the questionnaire asking whether the AR escalator looked like a real escalator. The scores in Figure 14 are low, indicating that the broken escalator phenomenon was not generated. However, this does not mean that an AR display cannot generate the broken escalator phenomenon; rather, several other factors prevented it. The participants said that they did not feel as if they were walking on an escalator because they could not see their feet due to the narrow field of view.
We covered part of the participants' view with black felt, as shown in Figure 12, because the viewing angle of the HoloLens screen is narrow and the escalator texture is not displayed when participants look at their own feet. The stairs on which the escalator was displayed were therefore visible in an even narrower range than usual. This narrow field of view was one reason why the participants did not feel the broken escalator phenomenon. Since the result in Figure 15 is also low, the fact that the escalator did not look like a real escalator appears to be another reason why the escalator effect did not occur.
The texture of the escalator was coarse due to the resolution of the HoloLens. Given the quality of the AR texture display in this case, the hypothesis could not be properly evaluated.

Experiment 2-2: Moving walkway

Experimental Method
We display a moving walkway as AR in a corridor. We use two types of moving walkways: an AR texture that moves in the forward direction and an AR texture that moves in the reverse direction. The participants walk on each AR texture, and we investigate whether their walking speeds change. We hypothesize that when the forward-moving walkway texture is displayed, the participants will feel as if they are walking faster, which will increase their walking speed. Conversely, we hypothesize that when the reverse-moving walkway texture is displayed, the participants will feel that it is difficult to walk and their walking speed will decrease.
Nojima et al. implemented a system that changes the user's perception of walking speed by displaying optical flow using light in the user's peripheral vision [35]. Tanikawa et al. set up several vection field displays around the user's head to change the direction of walking [36]. Similarly, by displaying a moving walkway as AR, the user should feel as if his/her walking speed has changed through the resulting optical flow and vection. Figure 16 shows a corridor with the AR moving walkway seen through the HoloLens. The range of the AR texture is shown in Figure 17; the texture is displayed in an area 14 m long and 1.2 m wide. The participant walks from two meters in front of the AR texture until the AR texture is no longer displayed. We measure the time taken by the participant to walk eight meters (from line-α to line-β), starting four meters from the starting point. After each walk, the participant answers two questionnaires. After walking in the corridor with the forward-moving walkway, they answer: "I felt that I walked faster than without the AR texture." After walking in the corridor with the reverse-moving walkway, they answer: "I felt that I walked slower than without the AR texture." The other questionnaire item is "The AR texture looks like a real moving walkway." The participants were seven people (five males and two females) between the ages of 21 and 24. They were undergraduate and graduate students with healthy vision.

Result
The average time it took the participants to walk eight meters in each condition is shown in Figure 18. An ANOVA in a one-factor within-participant design with texture as the factor showed no significant difference in walking time (F(2, 12) = 2.222, p = 0.151, η_p² = 0.27, 1 − β = 1). It is possible that there was no difference because the distance was only eight meters, but we found that the AR texture did not affect walking speed over a distance of approximately eight meters. Figure 19 shows the questionnaire results regarding perceived walking speed: whether the participants felt as if they were walking faster on the forward-moving walkway texture or slower on the reverse-moving walkway texture. The median values for both results were as high as four. In the case of the reverse-moving walkway, the participants said that the movement of the texture was easy to understand. In the case of the forward-moving walkway, only one participant answered "2", and she was the only one who said that she felt slower; she stated that she felt as if she were walking slower because of her fear of walking on moving textures. The two participants who answered "3" said that they did not feel as if they were walking on the moving walkway because they could not see their feet due to the narrow view of the HoloLens. These two participants also commented that they did not feel the texture moving because their walking speed and the moving walkway's speed were the same. If the moving walkway had moved faster than the walking speed, it might have affected the perception of walking speed.
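The one-factor within-participant (repeated-measures) ANOVA used here can be computed from a participants-by-conditions table of walking times. The sketch below is only an illustration of the procedure: the function name and the timing data are hypothetical, not the study's actual measurements.

```python
import numpy as np

def rm_anova_oneway(data):
    """One-factor within-participant ANOVA.

    data: array of shape (participants, conditions).
    Returns (F, df_condition, df_error, partial eta squared).
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    # Total variation, then partition into condition, subject, and error parts.
    ss_total = ((data - grand) ** 2).sum()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj
    df_cond, df_error = k - 1, (n - 1) * (k - 1)
    f_stat = (ss_cond / df_cond) / (ss_error / df_error)
    eta_p2 = ss_cond / (ss_cond + ss_error)  # partial eta squared
    return f_stat, df_cond, df_error, eta_p2

# Hypothetical walking times (s): 3 participants x 3 texture conditions.
times = [[1, 2, 4],
         [2, 3, 3],
         [3, 4, 5]]
F, df1, df2, eta = rm_anova_oneway(times)
```

The p-value would then be obtained from the F distribution with (df1, df2) degrees of freedom, e.g. via `scipy.stats.f.sf`.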
We hypothesized that displaying the moving walkway as an AR texture would change the participants' walking speed, but there was no difference over a short distance of eight meters. On the other hand, the participants felt that their walking speed increased or decreased when the moving walkway texture was displayed. Figure 20 shows the results of the questionnaire on whether the moving walkway texture looked like a real moving walkway. The median value was as high as four. The participants commented that the texture looked more authentic because it moved. Compared to Experiment 2-1, the moving texture was more likely to be perceived as authentic.

Experiment 2-3: Moving stripes

Experimental Method
Vection is a perceptual sense of movement based on optical flow, and vection can be used to control human walking [37]. It has been confirmed that vection can also be induced by projected images [38]. In Furukawa et al.'s method, the pedestrian's course is altered by presenting a vection field on the ground [32]. In Experiment 2-3, we investigate whether the course of pedestrians is also changed by the display of AR textures. We hypothesize that participants will shift their direction of walking toward the direction in which the stripes displayed as AR textures move.
The AR texture is displayed as shown in Figure 21, and the stripes move to the right at two speeds, slow and fast. We conduct the experiment indoors. The area where the participant walks and the AR texture is displayed is shown in Figure 22. The goal point is five meters straight ahead of the start point, and the goal line is the line through the goal point perpendicular to the direction of travel. The participant is instructed to walk straight from the start point to the goal point. The participant's position on reaching the goal line is defined as the arrival point. The participant wears the HoloLens and walks in a random order under three conditions: no AR display, slow-moving AR texture, and fast-moving AR texture. The textures are displayed in an area eight meters long and six meters wide, starting one meter before the start point. The evaluation is done by measuring the distance from the goal point to the arrival point, with the right side of the direction of travel as the positive direction.
The participants (Participant-A to Participant-I) were nine people (seven males and two females) between the ages of 21 and 25. They were undergraduate and graduate students with healthy vision.

Result
Figure 23 shows the average distance between the goal and arrival points for each texture display; the error bars indicate the standard error. An ANOVA for the distance between the goal and arrival points in a one-factor within-participant design showed no significant difference (F(2, 16) = 0.453, p = 0.643, η_p² = 0.054, 1 − β = 0.236). We hypothesized that participants would shift their direction of walking toward the direction in which the displayed stripes moved, but this was not the case. The standard error values are large, suggesting individual differences. The results for each individual are shown in Figure 24, and the differences in distance between the results without and with the AR display are shown in Figure 25. Seven of the nine participants felt that they were being swept to the right. However, as Figures 24 and 25 show, only some of the participants actually moved to the right. We expected that the faster the stripes moved, the farther the participants would move to the right, but only Participant-B showed such a result. Participants D, G, and I commented that, feeling they were about to be swept to the right, they tried to correct to the left; in fact, these three participants did move to the left. By changing the texture with AR, we were able to influence people to walk in the direction of the flowing stripes or to make walking straight difficult. However, we found it difficult to produce the desired effect in all participants. Since the viewing angle of the HoloLens display was very narrow, the range of the visible AR texture was also narrow.
It is important to conduct experiments in different environments using AR display devices with a wider viewing angle in the future.

Consideration
In this paper's experiments, we investigated how the user's gait was affected by displaying AR objects and changing floor textures. In the first experiment, we examined how the user's walking path was affected by the display of AR objects. In the second experiment, we investigated how the user's walkability was affected by an apparent change in floor texture produced by AR. In both experiments, the narrow viewing angle and low resolution of the HoloLens prevented us from obtaining sufficient results. However, we confirmed that there were some effects on the participants' walking in both experiments. We believe that the effects will be stronger with a more advanced AR display device, and these findings warrant continued investigation using other AR display devices in the future.

Consideration for the First Experiment
People perceive virtual objects displayed in AR as being closer to them than real objects [26,27]. Therefore, we hypothesized that participants would walk a wider path when avoiding displayed virtual objects. Indeed, when a car or a white cube was displayed as a virtual object in AR, participants walked farther away from the object than from the real object, a signboard.
We hypothesized that participants would keep a wider distance from the object that did not blend with the surroundings (the car) than from the object that blended in (the white cube). However, participants walked along a similar path regardless of which object was displayed. Because of the low resolution of the HoloLens, the car was rendered as a rough image and did not fully blend with the surroundings. In addition, although the car and the white cube differed in appearance, the car was also a virtual object, so the participants' perception of the distance to it was the same as for the white cube.
We confirmed that objects that blended with the surroundings neither caused discomfort to the participants nor made it difficult for them to walk. When providing instructions or advertising that pedestrians do not need to follow, it is preferable to use objects that blend in with their surroundings. The objects must be selected carefully so that they blend into the surroundings yet still allow pedestrians to understand the meaning they carry, or to subconsciously perform the action the object's installer intends. In cases where pedestrians must follow the instruction, we should use objects that ensure the meaning is clearly understood, such as sentences.
In this paper, our experiment was conducted in a parking lot. In the future, we need to experiment in different situations to examine how the displayed virtual objects blend with the surroundings.

Consideration for the Second Experiment
In the second experiment, we investigated the effect on pedestrians of changing the floor texture by AR display. However, it was difficult to conduct the experiment on stairs due to the narrow viewing angle of the HoloLens. We hypothesized that displaying an escalator's texture on the stairs would induce the broken escalator effect in the participants, making it difficult for them to climb the stairs. However, the HoloLens could not make the participants believe that there was an escalator, because its display area was so small that only a part of the texture was visible at a time. Also, due to the low resolution of the HoloLens, the escalator was rendered as a rough image. The difference between stairs and an escalator in how they feel underfoot may be another factor, but this could not be verified in the experiment.
By displaying the moving walkway texture, we found that an AR texture displayed on the floor can influence a pedestrian's gait. We hypothesized that displaying the moving walkway would change the participants' walking speed, but we did not observe such a change in the eight-meter walk experiment. We also hypothesized that the display would change the participants' perception of walking speed, and we found that participants felt as if they were walking faster when the texture moved in the walking direction and slower when it moved in the opposite direction. One might expect the degree of fatigue to change as well, but this was not verified in this paper's experiments. Fatigue may change over longer walking distances, and this would be worth investigating in the future.
We hypothesized that when the AR texture displayed stripes moving at right angles to the direction of walking, participants would shift their walking path in the direction in which the stripes moved. However, the results showed that some participants drifted in the same direction as the stripes, while others resisted and corrected in the opposite direction. It is difficult to steer all pedestrians' direction of travel uniformly by displaying AR textures, but we found that doing so can make it difficult for pedestrians to walk straight.

Limitation and Future Works
We were not able to conduct sufficient experiments due to the narrow viewing angle and low resolution of the AR display device. Because of the narrow viewing angle of the HoloLens, participants could not see the virtual objects in most of their peripheral vision; because of its low resolution, they could not see the displayed virtual objects and AR textures as if they were real. The number of participants was also small. However, we confirmed that even a device with these specifications affects the user, suggesting that more detailed experiments should be conducted in the future using more sophisticated AR display devices. By comparing results obtained with a high-precision device against those of this paper, we can clarify the differences due to device precision. We will continue our research to ensure that users can use AR display devices safely in their daily lives.

Conclusions
In this paper, we investigated how AR-displayed objects and textures affect the user's walking, assuming that AR display devices are used in daily life. In the first experiment, we investigated how the user's walking path was changed by displaying AR objects. The results showed that objects that blended in with the surroundings did not cause discomfort or make walking difficult for the participants. We found that when the user needs to follow instructions, the object should be displayed in a way that the user can clearly understand its meaning, such as text. In the second experiment, we investigated the effect of changing the floor's texture by AR on the user's walkability. Due to the narrow viewing angle of the AR display device, we were not able to conduct sufficient experiments on how a texture displayed on the stairs affects walking. We found that walking was affected when the AR display device displayed the moving walkway and the moving stripes.

Institutional Review Board Statement: The study was approved by the Ethics Committee of Kobe University (protocol code 01-24).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available because we have not received permission from the participants to publish them. If you contact the corresponding author, we will contact the participants individually.

Conflicts of Interest:
The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.